Posted to commits@drill.apache.org by br...@apache.org on 2018/11/02 20:19:39 UTC

[drill-site] branch asf-site updated: apache drill site rebuild w/ updated version of jekyll

This is an automated email from the ASF dual-hosted git repository.

bridgetb pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/drill-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 65be75b  apache drill site rebuild w/ updated version of jekyll
65be75b is described below

commit 65be75b0ad3cb46dfcb7074f4f9845d7bde3c3d9
Author: Bridget Bevens <bb...@maprtech.com>
AuthorDate: Fri Nov 2 13:18:22 2018 -0700

    apache drill site rebuild w/ updated version of jekyll
---
 blog/2014/11/19/sql-on-mongodb/index.html          |  4 +-
 blog/2014/12/02/drill-top-level-project/index.html |  2 +-
 .../apache-drill-qa-panelist-spotlight/index.html  | 15 +++---
 blog/2014/12/16/whats-coming-in-2015/index.html    |  4 +-
 blog/2014/12/23/drill-0.7-released/index.html      |  2 +-
 .../index.html                                     |  2 +-
 blog/2015/03/31/drill-0.8-released/index.html      |  2 +-
 blog/2015/05/04/drill-0.9-released/index.html      |  2 +-
 blog/2015/05/19/drill-1.0-released/index.html      |  2 +-
 blog/2015/07/05/drill-1.1-released/index.html      |  6 +--
 .../23/drill-tutorial-at-nosql-now-2015/index.html |  7 +--
 blog/2015/10/16/drill-1.2-released/index.html      |  4 +-
 blog/2015/11/23/drill-1.3-released/index.html      |  4 +-
 blog/2015/12/14/drill-1.4-released/index.html      |  2 +-
 blog/2016/02/16/drill-1.5-released/index.html      |  2 +-
 blog/2016/03/16/drill-1.6-released/index.html      |  2 +-
 blog/2016/06/28/drill-1.7-released/index.html      |  2 +-
 blog/2016/08/30/drill-1.8-released/index.html      |  2 +-
 blog/2016/11/29/drill-1.9-released/index.html      |  2 +-
 blog/2017/03/15/drill-1.10-released/index.html     |  2 +-
 blog/2017/07/31/drill-1.11-released/index.html     | 20 ++++----
 blog/2017/12/15/drill-1.12-released/index.html     |  8 +--
 blog/2018/03/18/drill-1.13-released/index.html     |  8 +--
 blog/2018/08/05/drill-1.14-released/index.html     | 10 ++--
 docs/aggregate-window-functions/index.html         | 10 ++--
 .../analyzing-the-yelp-academic-dataset/index.html | 14 ++---
 docs/apache-drill-0-5-0-release-notes/index.html   |  2 +-
 docs/apache-drill-0-6-0-release-notes/index.html   |  2 +-
 docs/apache-drill-0-8-0-release-notes/index.html   |  2 +-
 docs/apache-drill-0-9-0-release-notes/index.html   |  2 +-
 docs/apache-drill-1-1-0-release-notes/index.html   |  6 +--
 docs/apache-drill-1-2-0-release-notes/index.html   |  2 +-
 .../index.html                                     | 10 ++--
 docs/apache-drill-contribution-ideas/index.html    |  2 +-
 docs/appendix-a-release-note-issues/index.html     |  4 +-
 docs/azure-blob-storage-plugin/index.html          |  2 +-
 docs/compiling-drill-from-source/index.html        |  4 +-
 .../index.html                                     |  2 +-
 docs/configuring-jreport-with-drill/index.html     |  6 +--
 docs/configuring-kerberos-security/index.html      | 10 ++--
 docs/configuring-odbc-on-linux/index.html          | 10 ++--
 docs/configuring-odbc-on-mac-os-x/index.html       | 10 ++--
 docs/configuring-odbc-on-windows/index.html        |  8 +--
 .../index.html                                     |  4 +-
 docs/configuring-ssl-tls-for-encryption/index.html | 12 ++---
 docs/configuring-storage-plugins/index.html        |  2 +-
 .../index.html                                     |  8 +--
 .../index.html                                     | 10 ++--
 docs/configuring-user-impersonation/index.html     |  2 +-
 .../index.html                                     | 10 ++--
 docs/custom-function-interfaces/index.html         |  6 +--
 docs/data-type-conversion/index.html               |  2 +-
 docs/date-time-and-timestamp/index.html            |  2 +-
 docs/date-time-functions-and-arithmetic/index.html |  2 +-
 docs/drill-introduction/index.html                 | 30 +++++------
 docs/drill-plan-syntax/index.html                  |  2 +-
 docs/drop-table/index.html                         | 18 +++----
 docs/enabling-web-ui-security/index.html           |  2 +-
 docs/getting-to-know-the-drill-sandbox/index.html  |  2 +-
 docs/hive-storage-plugin/index.html                |  2 +-
 docs/how-to-partition-data/index.html              |  4 +-
 docs/image-metadata-format-plugin/index.html       |  4 +-
 .../installing-the-apache-drill-sandbox/index.html |  6 +--
 docs/installing-the-driver-on-linux/index.html     |  6 +--
 docs/installing-the-driver-on-mac-os-x/index.html  |  6 +--
 docs/installing-the-driver-on-windows/index.html   |  6 +--
 docs/json-data-model/index.html                    | 18 +++----
 docs/kafka-storage-plugin/index.html               |  2 +-
 docs/kvgen/index.html                              |  2 +-
 docs/lateral-join/index.html                       |  2 +-
 docs/lesson-1-learn-about-the-data-set/index.html  | 30 +++++------
 docs/lesson-2-run-queries-with-ansi-sql/index.html | 28 +++++-----
 .../index.html                                     | 36 ++++++-------
 docs/logfile-plugin/index.html                     |  2 +-
 docs/logging-and-tracing/index.html                |  2 +-
 docs/mongodb-storage-plugin/index.html             |  2 +-
 docs/parquet-filter-pushdown/index.html            |  2 +-
 docs/parquet-format/index.html                     |  2 +-
 docs/partition-pruning-introduction/index.html     |  2 +-
 docs/phonetic-functions/index.html                 | 20 ++++----
 docs/query-directory-functions/index.html          |  2 +-
 docs/query-profiles/index.html                     |  2 +-
 docs/querying-hbase/index.html                     |  2 +-
 docs/querying-hive/index.html                      |  2 +-
 docs/querying-json-files/index.html                |  2 +-
 docs/querying-plain-text-files/index.html          |  4 +-
 docs/querying-system-tables/index.html             | 12 ++---
 docs/ranking-window-functions/index.html           | 10 ++--
 docs/rest-api-introduction/index.html              | 28 +++++-----
 docs/rpc-overview/index.html                       |  2 +-
 docs/s3-storage-plugin/index.html                  |  2 +-
 docs/secure-communication-paths/index.html         |  2 +-
 docs/sql-extensions/index.html                     |  2 +-
 docs/starting-drill-in-distributed-mode/index.html |  4 +-
 docs/starting-the-web-console/index.html           |  2 +-
 docs/string-distance-functions/index.html          | 14 ++---
 docs/tableau-examples/index.html                   | 28 +++++-----
 docs/troubleshooting/index.html                    |  8 +--
 docs/tutorial-develop-a-simple-function/index.html | 18 +++----
 docs/useful-research/index.html                    |  4 +-
 .../index.html                                     |  6 +--
 .../index.html                                     |  8 +--
 .../index.html                                     |  6 +--
 .../index.html                                     |  6 +--
 .../using-jdbc-with-squirrel-on-windows/index.html | 12 ++---
 .../index.html                                     | 12 ++---
 docs/using-qlik-sense-with-drill/index.html        | 10 ++--
 .../index.html                                     |  2 +-
 .../index.html                                     |  4 +-
 docs/value-window-functions/index.html             | 12 ++---
 docs/why-drill/index.html                          | 20 ++++----
 docs/workspaces/index.html                         |  2 +-
 faq/index.html                                     | 34 ++++++------
 feed.xml                                           | 60 +++++++++++-----------
 114 files changed, 434 insertions(+), 432 deletions(-)

diff --git a/blog/2014/11/19/sql-on-mongodb/index.html b/blog/2014/11/19/sql-on-mongodb/index.html
index 3206fe2..bdfdd06 100644
--- a/blog/2014/11/19/sql-on-mongodb/index.html
+++ b/blog/2014/11/19/sql-on-mongodb/index.html
@@ -156,7 +156,7 @@
 <li>Optimizations</li>
 </ul>
 
-<h2 id="drill-and-mongodb-setup-(standalone/replicated/sharded)">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
+<h2 id="drill-and-mongodb-setup-standalone-replicated-sharded">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
 
 <h3 id="standalone">Standalone</h3>
 
@@ -197,7 +197,7 @@
 
 <p>In replicated mode, whichever drillbit receives the query connects to the nearest <code>mongod</code> (local <code>mongod</code>) to read the data.</p>
 
-<h3 id="sharded/sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
+<h3 id="sharded-sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
 
 <ul>
 <li>Start Mongo processes in sharded mode</li>
diff --git a/blog/2014/12/02/drill-top-level-project/index.html b/blog/2014/12/02/drill-top-level-project/index.html
index a859d96..914a475 100644
--- a/blog/2014/12/02/drill-top-level-project/index.html
+++ b/blog/2014/12/02/drill-top-level-project/index.html
@@ -167,7 +167,7 @@
 
 <p>After almost two years of research and development, we released Drill 0.4 in August, and continued with monthly releases since then.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="whats-next">What&#39;s Next</h2>
 
 <p>Graduating to a top-level project is a significant milestone, but it&#39;s really just the beginning of the journey. In fact, we&#39;re currently wrapping up Drill 0.7, which includes hundreds of fixes and enhancements, and we expect to release that in the next couple weeks.</p>
 
diff --git a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
index 38327d2..c89b749 100644
--- a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
+++ b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
@@ -134,8 +134,9 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-<a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
+    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+
+<p><a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">12-17-2014 11:30:00</span>
     <span class="_end">12-17-2014 12:30:00</span>
@@ -159,23 +160,23 @@
 
 <p>Apache Drill committers Tomer Shiran, Jacques Nadeau, and Ted Dunning, as well as Tableau Product Manager Jeff Feng and Data Scientist Dr. Kirk Borne will be on hand to answer your questions.</p>
 
-<h2 id="tomer-shiran,-apache-drill-founder-(@tshiran)">Tomer Shiran, Apache Drill Founder (@tshiran)</h2>
+<h2 id="tomer-shiran-apache-drill-founder-tshiran">Tomer Shiran, Apache Drill Founder (@tshiran)</h2>
 
 <p>Tomer Shiran is the founder of Apache Drill, and a PMC member and committer on the project. He is VP Product Management at MapR, responsible for product strategy, roadmap and new feature development. Prior to MapR, Tomer held numerous product management and engineering roles at Microsoft, most recently as the product manager for Microsoft Internet Security &amp; Acceleration Server (now Microsoft Forefront). He is the founder of two websites that have served tens of millions of users, [...]
 
-<h2 id="jeff-feng,-product-manager-tableau-software-(@jtfeng)">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h2>
+<h2 id="jeff-feng-product-manager-tableau-software-jtfeng">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h2>
 
 <p>Jeff Feng is a Product Manager at Tableau and leads their Big Data product roadmap &amp; strategic vision.  In his role, he focuses on joint technology integration and partnership efforts with a number of Hadoop, NoSQL and web application partners in helping users see and understand their data.</p>
 
-<h2 id="ted-dunning,-apache-drill-comitter-(@ted_dunning)">Ted Dunning, Apache Drill Comitter (@Ted_Dunning)</h2>
+<h2 id="ted-dunning-apache-drill-comitter-ted_dunning">Ted Dunning, Apache Drill Comitter (@Ted_Dunning)</h2>
 
 <p>Ted Dunning is Chief Applications Architect at MapR Technologies and committer and PMC member of the Apache Mahout, Apache ZooKeeper, and Apache Drill projects and mentor for Apache Storm. He contributed to Mahout clustering, classification and matrix decomposition algorithms  and helped expand the new version of Mahout Math library. Ted was the chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems, he built fraud detection systems for ID Analytics (L [...]
 
-<h2 id="jacques-nadeau,-vice-president,-apache-drill-(@intjesus)">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h2>
+<h2 id="jacques-nadeau-vice-president-apache-drill-intjesus">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h2>
 
 <p>Jacques Nadeau leads Apache Drill development efforts at MapR Technologies. He is an industry veteran with over 15 years of big data and analytics experience. Most recently, he was cofounder and CTO of search engine startup YapMap. Before that, he was director of new product engineering with Quigo (contextual advertising, acquired by AOL in 2007). He also built the Avenue A | Razorfish analytics data warehousing system and associated services practice (acquired by Microsoft).</p>
 
-<h2 id="dr.-kirk-borne,-george-mason-university-(@kirkdborne)">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h2>
+<h2 id="dr-kirk-borne-george-mason-university-kirkdborne">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h2>
 
 <p>Dr. Kirk Borne is a Transdisciplinary Data Scientist and an Astrophysicist. He is Professor of Astrophysics and Computational Science in the George Mason University School of Physics, Astronomy, and Computational Sciences. He has been at Mason since 2003, where he teaches and advises students in the graduate and undergraduate Computational Science, Informatics, and Data Science programs. Previously, he spent nearly 20 years in positions supporting NASA projects, including an assignmen [...]
 
diff --git a/blog/2014/12/16/whats-coming-in-2015/index.html b/blog/2014/12/16/whats-coming-in-2015/index.html
index c98ffd3..163db7f 100644
--- a/blog/2014/12/16/whats-coming-in-2015/index.html
+++ b/blog/2014/12/16/whats-coming-in-2015/index.html
@@ -220,7 +220,7 @@
 
 <p>If you&#39;re interested in implementing a new storage plugin, I would encourage you to reach out to the Drill developer community on <a href="mailto:dev@drill.apache.org">dev@drill.apache.org</a>. I&#39;m looking forward to publishing an example of a single-query join across 10 data sources.</p>
 
-<h2 id="drill/spark-integration">Drill/Spark Integration</h2>
+<h2 id="drill-spark-integration">Drill/Spark Integration</h2>
 
 <p>We&#39;re seeing growing interest in Spark as an execution engine for data pipelines, providing an alternative to MapReduce. The Drill community is working on integrating Drill and Spark to address a few new use cases:</p>
 
@@ -246,7 +246,7 @@
 <li><strong>Workload management</strong>: A single cluster is often shared among many users and groups, and everyone expects answers in real-time. Workload management prioritizes the allocation of resources to ensure that the most important workloads get done first so that business demands can be met. Administrators need to be able to assign priorities and quotas at a fine granularity. We&#39;re working on enhancing Drill&#39;s workload management to provide these capabilities while prov [...]
 </ul>
 
-<h2 id="we-would-love-to-hear-from-you!">We Would Love to Hear From You!</h2>
+<h2 id="we-would-love-to-hear-from-you">We Would Love to Hear From You!</h2>
 
 <p>Are there other features you would like to see in Drill? We would love to hear from you:</p>
 
diff --git a/blog/2014/12/23/drill-0.7-released/index.html b/blog/2014/12/23/drill-0.7-released/index.html
index a7dc1c0..4fc1196 100644
--- a/blog/2014/12/23/drill-0.7-released/index.html
+++ b/blog/2014/12/23/drill-0.7-released/index.html
@@ -134,7 +134,7 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p>I&#39;m excited to announce that the community has just released Drill 0.7, which includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12327473">228 resolved JIRAs</a> and numerous enhancements such as: </p>
+    <p>I&#39;m excited to announce that the community has just released Drill 0.7, which includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12327473">228 resolved JIRAs</a> and numerous enhancements such as: </p>
 
 <ul>
 <li>No dependency on UDP multicast. Drill can now work on EC2, as well as clusters with multiple subnets or multihomed configurations</li>
diff --git a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
index 60a2aca..fd1408a 100644
--- a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
+++ b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
@@ -136,7 +136,7 @@
   <article class="post-content">
     <p>JSON has emerged in recent years as the de-facto standard data exchange format. It is being used everywhere. Front-end Web applications use JSON to maintain data and communicate with back-end applications. Web APIs are JSON-based (eg, <a href="https://dev.twitter.com/rest/public">Twitter REST APIs</a>, <a href="http://developers.marketo.com/documentation/rest/">Marketo REST APIs</a>, <a href="https://developer.github.com/v3/">GitHub API</a>). It&#39;s the format of choice for publ [...]
 
-<h1 id="why-is-json-a-convenient-data-exchange-format?">Why is JSON a Convenient Data Exchange Format?</h1>
+<h1 id="why-is-json-a-convenient-data-exchange-format">Why is JSON a Convenient Data Exchange Format?</h1>
 
 <p>While I won&#39;t dive into the historical roots of JSON (JavaScript Object Notation, <a href="http://en.wikipedia.org/wiki/JSON#JavaScript_eval.28.29"><code>eval()</code></a>, etc.), I do want to highlight several attributes of JSON that make it a convenient data exchange format:</p>
 
diff --git a/blog/2015/03/31/drill-0.8-released/index.html b/blog/2015/03/31/drill-0.8-released/index.html
index c7bb0b0..ed0a96a 100644
--- a/blog/2015/03/31/drill-0.8-released/index.html
+++ b/blog/2015/03/31/drill-0.8-released/index.html
@@ -141,7 +141,7 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p>We&#39;re excited to announce that the community has just released Drill 0.8, which includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12328812">243 resolved JIRAs</a> and numerous enhancements such as: </p>
+    <p>We&#39;re excited to announce that the community has just released Drill 0.8, which includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12328812">243 resolved JIRAs</a> and numerous enhancements such as: </p>
 
 <ul>
 <li><strong>Bytecode rewriting</strong>. Drill now leverages code optimization techniques such as bytecode rewriting and inlining to enhance the speed of many queries by reducing overall memory usage and CPU instructions.</li>
diff --git a/blog/2015/05/04/drill-0.9-released/index.html b/blog/2015/05/04/drill-0.9-released/index.html
index 64dd717..3a1b12f 100644
--- a/blog/2015/05/04/drill-0.9-released/index.html
+++ b/blog/2015/05/04/drill-0.9-released/index.html
@@ -141,7 +141,7 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p>It has been about a month since the release of Drill 0.8, which included <a href="/blog/drill-0.8-released/">more than 240 improvements</a>. Today we&#39;re happy to announce the availability of Drill 0.9, providing additional enhancements and bug fixes. In fact, this release includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12328813">200 resolved JIRAs</a>. Some of the noteworthy features in Drill 0.9 are:</p>
+    <p>It has been about a month since the release of Drill 0.8, which included <a href="/blog/drill-0.8-released/">more than 240 improvements</a>. Today we&#39;re happy to announce the availability of Drill 0.9, providing additional enhancements and bug fixes. In fact, this release includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12328813">200 resolved JIRAs</a>. Some of the noteworthy features in Drill 0.9 are:</p>
 
 <ul>
 <li><strong>Authentication</strong> (<a href="https://issues.apache.org/jira/browse/DRILL-2674">DRILL-2674</a>). Drill now supports username/password authentication through the Java and C++ clients, as well as JDBC and ODBC. On the server-side, Drill leverages Linux PAM to securely validate the credentials. Users can choose to use an external user directory such as Active Directory or LDAP. To enable authentication, set the <code>security.user.auth</code> option in <code>drill-override.c [...]
diff --git a/blog/2015/05/19/drill-1.0-released/index.html b/blog/2015/05/19/drill-1.0-released/index.html
index a38f99d..65459a9 100644
--- a/blog/2015/05/19/drill-1.0-released/index.html
+++ b/blog/2015/05/19/drill-1.0-released/index.html
@@ -148,7 +148,7 @@
 <li>Unlock the data housed in non-relational datastores like NoSQL, Hadoop and cloud storage, making it available not only to developers, but also business users, analysts, data scientists and anyone else who can write a SQL query or use a BI tool. Non-relational datastores are capturing an increasing share of the world&#39;s data, and it&#39;s incredibly hard to explore and analyze this data.</li>
 </ul>
 
-<p>Today we&#39;re happy to announce the availability of the production-ready Drill 1.0 release. This release addresses <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12325568">228 JIRAs</a> on top of the 0.9 release earlier this month. Highlights include:</p>
+<p>Today we&#39;re happy to announce the availability of the production-ready Drill 1.0 release. This release addresses <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12325568">228 JIRAs</a> on top of the 0.9 release earlier this month. Highlights include:</p>
 
 <ul>
 <li>Substantial improvements in stability, memory handling and performance</li>
diff --git a/blog/2015/07/05/drill-1.1-released/index.html b/blog/2015/07/05/drill-1.1-released/index.html
index 90ad00c..c595cd3 100644
--- a/blog/2015/07/05/drill-1.1-released/index.html
+++ b/blog/2015/07/05/drill-1.1-released/index.html
@@ -134,7 +134,7 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p>Today I&#39;m happy to announce the availability of the Drill 1.1 release. This release addresses <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12329689">162 JIRAs</a> on top of May&#39;s 1.0 release. Highlights include:</p>
+    <p>Today I&#39;m happy to announce the availability of the Drill 1.1 release. This release addresses <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12329689">162 JIRAs</a> on top of May&#39;s 1.0 release. Highlights include:</p>
 
 <h2 id="automatic-partitioning-for-parquet-files">Automatic Partitioning for Parquet Files</h2>
 
@@ -174,13 +174,13 @@
   &lt;version&gt;1.1.0&lt;/version&gt;
 &lt;/dependency&gt;
 </code></pre></div>
-<h2 id="mongodb-3.0-support">MongoDB 3.0 Support</h2>
+<h2 id="mongodb-3-0-support">MongoDB 3.0 Support</h2>
 
 <p>Drill now uses MongoDB&#39;s latest Java driver and has enhanced connection pooling for better performance and resilience in large-scale deployments.  Learn more about using the <a href="https://drill.apache.org/docs/mongodb-plugin-for-apache-drill/">MongoDB plugin</a>.</p>
 
 <h2 id="many-more-fixes">Many More Fixes</h2>
 
-<p>Drill includes a variety of <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12329689">other fixes and enhancements</a> including:</p>
+<p>Drill includes a variety of <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12329689">other fixes and enhancements</a> including:</p>
 
 <ul>
 <li>Improvements for certain types of exists and correlated subqueries</li>
diff --git a/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
index 79e2638..2f46959 100644
--- a/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
+++ b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
@@ -134,8 +134,9 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-<a href="/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/" title="Add to Calendar" class="addthisevent">
+    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+
+<p><a href="/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">08-20-2015 13:00:00</span>
     <span class="_end">08-20-2014 16:15:00</span>
@@ -149,7 +150,7 @@
     <span class="_date_format">MM-DD-YYYY</span>
 </a></p>
 
-<p>NoSQL Now! 2015 will be hosting a <a href="http://nosql2015.dataversity.net/sessionPop.cfm?confid=90&amp;proposalid=7727">3-hour tutorial</a> on Apache Drill. Jacques Nadeau and I will provide a deep dive on Drill and demonstrate how to analyze NoSQL data with SQL queries and standard BI tools. We would love to see you there!</p>
+<p>NoSQL Now! 2015 will be hosting a <a href="http://nosql2015.dataversity.net/sessionPop.cfm?confid=90&proposalid=7727">3-hour tutorial</a> on Apache Drill. Jacques Nadeau and I will provide a deep dive on Drill and demonstrate how to analyze NoSQL data with SQL queries and standard BI tools. We would love to see you there!</p>
 
 <p>When you <a href="http://nosql2015.dataversity.net/reg.cfm">register</a>, use the coupon code &quot;SPEAKER&quot; for a 20% discount on the registration fees.</p>
 
diff --git a/blog/2015/10/16/drill-1.2-released/index.html b/blog/2015/10/16/drill-1.2-released/index.html
index a6078b3..8404a55 100644
--- a/blog/2015/10/16/drill-1.2-released/index.html
+++ b/blog/2015/10/16/drill-1.2-released/index.html
@@ -134,7 +134,7 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p>Today I&#39;m happy to announce the availability of the Drill 1.2 release. This release addresses <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332042&amp;projectId=12313820">217 JIRAs</a> on top of the 1.1 release. Highlights include:</p>
+    <p>Today I&#39;m happy to announce the availability of the Drill 1.2 release. This release addresses <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332042&projectId=12313820">217 JIRAs</a> on top of the 1.1 release. Highlights include:</p>
 
 <h2 id="relational-database-support">Relational Database Support</h2>
 
@@ -170,7 +170,7 @@
 
 <h2 id="many-more-fixes">Many More Fixes</h2>
 
-<p>Drill 1.2 includes hundreds of <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332042&amp;projectId=12313820">other fixes and enhancements</a>.</p>
+<p>Drill 1.2 includes hundreds of <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332042&projectId=12313820">other fixes and enhancements</a>.</p>
 
 <p>Download the <a href="https://drill.apache.org/download/">Drill 1.2 release</a> now and let us know your thoughts.</p>
 
diff --git a/blog/2015/11/23/drill-1.3-released/index.html b/blog/2015/11/23/drill-1.3-released/index.html
index 78561a0..dddeafc 100644
--- a/blog/2015/11/23/drill-1.3-released/index.html
+++ b/blog/2015/11/23/drill-1.3-released/index.html
@@ -134,7 +134,7 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p>Today I&#39;m happy to announce the availability of the Drill 1.3 release. This release addresses <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12332946">58 JIRAs</a> on top of the 1.2 release. Highlights include:</p>
+    <p>Today I&#39;m happy to announce the availability of the Drill 1.3 release. This release addresses <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12332946">58 JIRAs</a> on top of the 1.2 release. Highlights include:</p>
 
 <h2 id="enhanced-amazon-s3-support">Enhanced Amazon S3 Support</h2>
 
@@ -185,7 +185,7 @@ LIMIT 1
 </code></pre></div>
 <h2 id="many-more-fixes">Many More Fixes</h2>
 
-<p>Drill 1.3 includes many other improvements, including enhancements related to querying Hive tables, MongoDB collections and Avro files. Check out the complete list of <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12332946">fixes and enhancements</a> for more information.</p>
+<p>Drill 1.3 includes many other improvements, including enhancements related to querying Hive tables, MongoDB collections and Avro files. Check out the complete list of <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12332946">fixes and enhancements</a> for more information.</p>
 
 <p>Download the <a href="https://drill.apache.org/download/">Drill 1.3 release</a> now and let us know your thoughts.</p>
 
diff --git a/blog/2015/12/14/drill-1.4-released/index.html b/blog/2015/12/14/drill-1.4-released/index.html
index 2a7a510..5b87ab3 100644
--- a/blog/2015/12/14/drill-1.4-released/index.html
+++ b/blog/2015/12/14/drill-1.4-released/index.html
@@ -134,7 +134,7 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p>Apache Drill 1.4 (<a href="https://drill.apache.org/download/">available here</a>) includes bug fixes and enhancements from <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332947&amp;projectId=12313820">32 
+    <p>Apache Drill 1.4 (<a href="https://drill.apache.org/download/">available here</a>) includes bug fixes and enhancements from <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332947&projectId=12313820">32 
 JIRAs</a>.</p>
 
 <p>Here&#39;s a list of highlights from this newest version of Drill:</p>
diff --git a/blog/2016/02/16/drill-1.5-released/index.html b/blog/2016/02/16/drill-1.5-released/index.html
index a57273d..5395355 100644
--- a/blog/2016/02/16/drill-1.5-released/index.html
+++ b/blog/2016/02/16/drill-1.5-released/index.html
@@ -161,7 +161,7 @@
 
 <p>You can now configure the TTL for the Hive metadata client cache depending on how frequently the Hive metadata is updated. See the <a href="/docs/hive-metadata-caching/">Hive Metadata Caching</a> doc page for more info.</p>
 
-<p>A complete list of JIRAs resolved in the 1.5.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12332948">here</a>.</p>
+<p>A complete list of JIRAs resolved in the 1.5.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12332948">here</a>.</p>
 
   </article>
  <div id="disqus_thread"></div>
diff --git a/blog/2016/03/16/drill-1.6-released/index.html b/blog/2016/03/16/drill-1.6-released/index.html
index af44c33..8befa54 100644
--- a/blog/2016/03/16/drill-1.6-released/index.html
+++ b/blog/2016/03/16/drill-1.6-released/index.html
@@ -146,7 +146,7 @@
 
 <p>The window function frame clause now supports additional custom frames. See <a href="/docs/sql-window-functions-introduction/#syntax">Window Function Syntax</a>. </p>
 
-<p>A complete list of JIRAs resolved in the 1.6.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334766&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7C9ec2112379f0ae5d2b67a8cbd2626bcde62b41cd%7Clout">here</a>.</p>
+<p>A complete list of JIRAs resolved in the 1.6.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334766&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7C9ec2112379f0ae5d2b67a8cbd2626bcde62b41cd%7Clout">here</a>.</p>
 
   </article>
  <div id="disqus_thread"></div>
diff --git a/blog/2016/06/28/drill-1.7-released/index.html b/blog/2016/06/28/drill-1.7-released/index.html
index c5cad57..0b8c385 100644
--- a/blog/2016/06/28/drill-1.7-released/index.html
+++ b/blog/2016/06/28/drill-1.7-released/index.html
@@ -150,7 +150,7 @@
 
 <p>Drill now supports HBase 1.x. </p>
 
-<p>A complete list of JIRAs resolved in the 1.7.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334767&amp;styleName=&amp;projectId=12313820">here</a>.</p>
+<p>A complete list of JIRAs resolved in the 1.7.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334767&styleName=&projectId=12313820">here</a>.</p>
 
   </article>
  <div id="disqus_thread"></div>
diff --git a/blog/2016/08/30/drill-1.8-released/index.html b/blog/2016/08/30/drill-1.8-released/index.html
index 0774e1a..df62f57 100644
--- a/blog/2016/08/30/drill-1.8-released/index.html
+++ b/blog/2016/08/30/drill-1.8-released/index.html
@@ -158,7 +158,7 @@
 
 <p>New parameters set the minimum filter selectivity estimate to increase the parallelization of the major fragment performing a join. See <a href="https://drill.apache.org/docs/configuration-options-introduction/#system-options">System Options</a>. </p>
 
-<p>A complete list of JIRAs resolved in the 1.8.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334768&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Ce8d020149d9a6082481af301e563adbe35c76a87%7Clout">here</a>.</p>
+<p>A complete list of JIRAs resolved in the 1.8.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334768&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7Ce8d020149d9a6082481af301e563adbe35c76a87%7Clout">here</a>.</p>
 
   </article>
  <div id="disqus_thread"></div>
diff --git a/blog/2016/11/29/drill-1.9-released/index.html b/blog/2016/11/29/drill-1.9-released/index.html
index a8fd43f..20acb57 100644
--- a/blog/2016/11/29/drill-1.9-released/index.html
+++ b/blog/2016/11/29/drill-1.9-released/index.html
@@ -154,7 +154,7 @@
 
 <p>The new HTTPD format plugin adds the capability to query HTTP web server logs natively and also includes parse_url() and parse_query() UDFs. The parse_url() UDF returns maps of the URL. The parse_query() UDF returns the query string.  </p>
 
-<p>A complete list of JIRAs resolved in the 1.9.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12337861&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Cedcc6294c1851bcd19a3686871e085181f755a91%7Clin">here</a>.</p>
+<p>A complete list of JIRAs resolved in the 1.9.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12337861&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7Cedcc6294c1851bcd19a3686871e085181f755a91%7Clin">here</a>.</p>
 
   </article>
  <div id="disqus_thread"></div>
diff --git a/blog/2017/03/15/drill-1.10-released/index.html b/blog/2017/03/15/drill-1.10-released/index.html
index 4819416..b8c98d7 100644
--- a/blog/2017/03/15/drill-1.10-released/index.html
+++ b/blog/2017/03/15/drill-1.10-released/index.html
@@ -158,7 +158,7 @@
 
 <p>Drill supports Kerberos authentication between the client and drillbit. See <a href="/docs/configuring-kerberos-authentication/">Configuring Kerberos Authentication</a> in the <a href="/docs/securing-drill/">Securing Drill</a> section.</p>
 
-<p>A complete list of JIRAs resolved in the 1.10.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12338769&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7C264858c85b35c3b8ac66b0573aa7e88ffa802c9d%7Clin">here</a>.</p>
+<p>A complete list of JIRAs resolved in the 1.10.0 release can be found <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12338769&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7C264858c85b35c3b8ac66b0573aa7e88ffa802c9d%7Clin">here</a>.</p>
 
   </article>
  <div id="disqus_thread"></div>
diff --git a/blog/2017/07/31/drill-1.11-released/index.html b/blog/2017/07/31/drill-1.11-released/index.html
index 559b6e2..485c5a0 100644
--- a/blog/2017/07/31/drill-1.11-released/index.html
+++ b/blog/2017/07/31/drill-1.11-released/index.html
@@ -138,7 +138,7 @@
 
 <p>The release provides the following bug fixes and improvements:</p>
 
-<h2 id="cryptography-related-functions-(drill-5634)">Cryptography-Related Functions (DRILL-5634)</h2>
+<h2 id="cryptography-related-functions-drill-5634">Cryptography-Related Functions (DRILL-5634)</h2>
 
 <p>Drill provides the following cryptographic-related functions:</p>
 
@@ -151,38 +151,38 @@
 <li>sha2()<br></li>
 </ul>
 
-<h2 id="spill-to-disk-for-hash-aggregate-operator-(drill-5457)">Spill to Disk for Hash Aggregate Operator (DRILL-5457)</h2>
+<h2 id="spill-to-disk-for-hash-aggregate-operator-drill-5457">Spill to Disk for Hash Aggregate Operator (DRILL-5457)</h2>
 
 <p>The Hash aggregate operator can spill data to disk in cases where the operation exceeds the set memory limit. Note that you may need to increase the default value of the <code>planner.memory.max_query_memory_per_node</code> option due to insufficient memory.      </p>
 
-<h2 id="format-plugin-support-for-pcap-files-(drill-5432)">Format Plugin Support for PCAP Files (DRILL-5432)</h2>
+<h2 id="format-plugin-support-for-pcap-files-drill-5432">Format Plugin Support for PCAP Files (DRILL-5432)</h2>
 
 <p>A “pcap” format plugin enables Drill to read PCAP files. You must add the “pcap” format to the dfs storage plugin configuration, as shown:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &quot;pcap&quot;: {
           &quot;type&quot;: &quot;pcap&quot;
         }   
 </code></pre></div>
-<h2 id="change-the-hdfs-block-size-for-parquet-files-(drill-5379)">Change the HDFS Block Size for Parquet Files (DRILL-5379)</h2>
+<h2 id="change-the-hdfs-block-size-for-parquet-files-drill-5379">Change the HDFS Block Size for Parquet Files (DRILL-5379)</h2>
 
 <p>The <code>store.parquet.writer.use_single_fs_block</code> option enables Drill to write a Parquet file as a single file system block without changing the file system default block size.</p>
 
-<h2 id="store-query-profiles-in-memory-(drill-5481)">Store Query Profiles in Memory (DRILL-5481)</h2>
+<h2 id="store-query-profiles-in-memory-drill-5481">Store Query Profiles in Memory (DRILL-5481)</h2>
 
 <p>The <code>drill.exec.profiles.store.inmemory</code> option enables Drill to store query profiles in memory instead of writing the query profiles to disk. The <code>drill.exec.profiles.store.capacity</code> option sets the maximum number of most recent profiles to retain in memory.  </p>
 
-<h2 id="configurable-ctas-directory-and-file-permissions-option-(drill-5391)">Configurable CTAS Directory and File Permissions Option (DRILL-5391)</h2>
+<h2 id="configurable-ctas-directory-and-file-permissions-option-drill-5391">Configurable CTAS Directory and File Permissions Option (DRILL-5391)</h2>
 
 <p>You can use the <code>exec.persistent_table.umask</code> configuration option, at the system or session level, to modify permissions on directories and files that result from running the CTAS command. By default, the option is set to 002, which sets the default directory permissions to 775 and default file permissions to 664.   </p>
 
-<h2 id="support-for-network-encryption-(drill-4335)">Support for Network Encryption (DRILL-4335)</h2>
+<h2 id="support-for-network-encryption-drill-4335">Support for Network Encryption (DRILL-4335)</h2>
 
 <p>Drill can use SASL to support network encryption between the Drill client and drillbits, and also between drillbits.  </p>
 
-<h2 id="metadata-file-stores-relative-paths-(drill-3867)">Metadata file Stores Relative Paths (DRILL-3867)</h2>
+<h2 id="metadata-file-stores-relative-paths-drill-3867">Metadata file Stores Relative Paths (DRILL-3867)</h2>
 
 <p>Drill now stores the relative path in the metadata file (versus the absolute path), which enables you to move partitioned Parquet directories from one location in DFS to another without having to rebuild the Parquet metadata files; the metadata remains valid in the new location.  </p>
 
-<h2 id="support-for-additional-quoting-identifiers-(drill-3510)">Support for Additional Quoting Identifiers (DRILL-3510)</h2>
+<h2 id="support-for-additional-quoting-identifiers-drill-3510">Support for Additional Quoting Identifiers (DRILL-3510)</h2>
 
 <p>In addition to back ticks, the SQL parser in Drill can use double quotes and square brackets as identifier quotes. Use the <code>planner.parser.quoting_identifiers</code> configuration option, at the system or session level, to set the type of identifier quotes that the SQL parser in Drill uses, as shown:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   ALTER SESSION SET planner.parser.quoting_identifiers = &#39;&quot;&#39;;  
@@ -191,7 +191,7 @@
 </code></pre></div>
 <p>The default setting is back ticks. The quoting identifier used in queries must match the setting. If you use another type of quoting identifier, Drill returns an error.  </p>
 
-<p>You can find a complete list of JIRAs resolved in the 1.11.0 release <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12339943">here</a>.</p>
+<p>You can find a complete list of JIRAs resolved in the 1.11.0 release <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12339943">here</a>.</p>
 
   </article>
  <div id="disqus_thread"></div>
diff --git a/blog/2017/12/15/drill-1.12-released/index.html b/blog/2017/12/15/drill-1.12-released/index.html
index 596ae84..e52be54 100644
--- a/blog/2017/12/15/drill-1.12-released/index.html
+++ b/blog/2017/12/15/drill-1.12-released/index.html
@@ -138,7 +138,7 @@
 
 <p>The release provides the following bug fixes and improvements:</p>
 
-<h2 id="kafka-and-opentsdb-storage-plugins-(drill-4779,-drill-5337)">Kafka and OpenTSDB Storage Plugins (DRILL-4779, DRILL-5337)</h2>
+<h2 id="kafka-and-opentsdb-storage-plugins-drill-4779-drill-5337">Kafka and OpenTSDB Storage Plugins (DRILL-4779, DRILL-5337)</h2>
 
 <p>You can configure Kafka and OpenTSDB as Drill data sources.  </p>
 
@@ -157,7 +157,7 @@
 </code></pre></div></li>
 </ul>
 
-<h2 id="queue-based-memory-assignment-for-buffering-operators-(throttling)-(drill-5716)">Queue-Based Memory Assignment for Buffering Operators (Throttling) (DRILL-5716)</h2>
+<h2 id="queue-based-memory-assignment-for-buffering-operators-throttling-drill-5716">Queue-Based Memory Assignment for Buffering Operators (Throttling) (DRILL-5716)</h2>
 
 <p>Throttling limits the number of concurrent queries that run to prevent queries from failing with out-of-memory errors. When you enable throttling, you configure the number of concurrent queries that can run and the resource requirements for each query. Drill calculates the amount of memory to assign per query per node. See <a href="/docs/throttling/">Throttling</a> for more information. </p>
 
@@ -190,11 +190,11 @@
 
 <p>Drill 1.10 provided authentication support through Plain and Kerberos authentication mechanisms to authenticate the Drill client to Drillbit and Drillbit to Drillbit communication channels. Drill 1.11 extends that support to include encryption. Drill uses the Kerberos mechanism over the SASL framework to encrypt the communication channels. </p>
 
-<h2 id="access-to-paths-outside-the-current-workspace-(drill-5964)">Access to Paths Outside the Current Workspace (DRILL-5964)</h2>
+<h2 id="access-to-paths-outside-the-current-workspace-drill-5964">Access to Paths Outside the Current Workspace (DRILL-5964)</h2>
 
 <p>A new parameter, allowAccessOutsideWorkspace, in the dfs storage plugin configuration prevents users from accessing paths outside the root of a workspace. The default value for the parameter is false. Set the parameter to true to allow users access outside of a workspace. If existing storage plugin configurations do not specify the parameter, users cannot access paths outside the configured workspaces.</p>
 
-<p>You can find a complete list of JIRAs resolved in the 1.12.0 release <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12341087&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Cd194b12b906cd370f36d15e8af60a94592b89038%7Clin">here</a>.</p>
+<p>You can find a complete list of JIRAs resolved in the 1.12.0 release <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12341087&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7Cd194b12b906cd370f36d15e8af60a94592b89038%7Clin">here</a>.</p>
 
   </article>
  <div id="disqus_thread"></div>
diff --git a/blog/2018/03/18/drill-1.13-released/index.html b/blog/2018/03/18/drill-1.13-released/index.html
index a6c8347..0be4772 100644
--- a/blog/2018/03/18/drill-1.13-released/index.html
+++ b/blog/2018/03/18/drill-1.13-released/index.html
@@ -138,19 +138,19 @@
 
 <p>The release provides the following bug fixes and improvements:</p>
 
-<h2 id="ability-to-run-drill-under-yarn-(drill-1170)">Ability to Run Drill Under YARN (DRILL-1170)</h2>
+<h2 id="ability-to-run-drill-under-yarn-drill-1170">Ability to Run Drill Under YARN (DRILL-1170)</h2>
 
 <p>You can run Drill as a YARN application (<a href="/docs/drill-on-yarn/">Drill-on-YARN</a>) if you want Drill to work alongside other applications, such as Hadoop and Spark, in a YARN-managed cluster. YARN assigns resources, such as memory and CPU, to applications in the cluster and eliminates the manual steps associated with installation and resource allocation for stand-alone applications in a multi-tenant environment. YARN automatically deploys (localizes) the Drill software onto ea [...]
 
-<h2 id="spnego-support-(drill-5425)">SPNEGO Support (DRILL-5425)</h2>
+<h2 id="spnego-support-drill-5425">SPNEGO Support (DRILL-5425)</h2>
 
 <p>You can use SPNEGO to extend Kerberos authentication to Web applications through HTTP. </p>
 
-<h2 id="sql-syntax-support-(drill-5868)">SQL Syntax Support (DRILL-5868)</h2>
+<h2 id="sql-syntax-support-drill-5868">SQL Syntax Support (DRILL-5868)</h2>
 
 <p>Query syntax appears highlighted in the Drill Web Console. In addition to syntax highlighting, auto-complete is supported in all SQL editors, including the Edit Query tab within an existing profile to rerun the query. For browsers like Chrome, you can type Ctrl+Space for a drop-down list and then use arrow keys for navigating through options. An auto-complete feature that specifies Drill keywords and functions, and the ability to write SQL from templates using snippets. </p>
 
-<h2 id="user/distribution-specific-configuration-checks-during-startup-(drill-5741)">User/Distribution-Specific Configuration Checks During Startup (DRILL-5741)</h2>
+<h2 id="user-distribution-specific-configuration-checks-during-startup-drill-5741">User/Distribution-Specific Configuration Checks During Startup (DRILL-5741)</h2>
 
 <p>You can define the maximum amount of cumulative memory allocated to the Drill process during startup through the <code>DRILLBIT_MAX_PROC_MEM</code> environment variable. For example, if you set <code>DRILLBIT_MAX_PROC_MEM to 40G</code>, the total amount of memory allocated to the following memory parameters cannot exceed 40G:  </p>
 
diff --git a/blog/2018/08/05/drill-1.14-released/index.html b/blog/2018/08/05/drill-1.14-released/index.html
index 244d11c..9910c27 100644
--- a/blog/2018/08/05/drill-1.14-released/index.html
+++ b/blog/2018/08/05/drill-1.14-released/index.html
@@ -138,24 +138,24 @@
 
 <p>The release provides the following bug fixes and improvements:</p>
 
-<h2 id="run-drill-in-a-docker-container-(drill-6346)">Run Drill in a Docker Container (DRILL-6346)</h2>
+<h2 id="run-drill-in-a-docker-container-drill-6346">Run Drill in a Docker Container (DRILL-6346)</h2>
 
 <p>Running Drill in a Docker container is the simplest way to start using Drill; all you need is the Docker client installed on your machine. You simply run a Docker command, and your Docker client downloads the Drill Docker image from the apache-drill repository on Docker Hub and then brings up a container with Apache Drill running in embedded mode. See <a href="/docs/running-drill-on-docker/">Running Drill on Docker</a>.  </p>
 
-<h2 id="export-and-save-storage-plugin-configurations-(drill-4580)">Export and Save Storage Plugin Configurations (DRILL-4580)</h2>
+<h2 id="export-and-save-storage-plugin-configurations-drill-4580">Export and Save Storage Plugin Configurations (DRILL-4580)</h2>
 
 <p>You can export and save your storage plugin configurations from the Storage page in the Drill Web UI. See <a href="/docs/configuring-storage-plugins/#exporting-storage-plugin-configurations">Exporting Storage Plugin Configurations</a>.  </p>
 
-<h2 id="manage-storage-plugin-configurations-in-a-configuration-file-(drill-6494)">Manage Storage Plugin Configurations in a Configuration File (DRILL-6494)</h2>
+<h2 id="manage-storage-plugin-configurations-in-a-configuration-file-drill-6494">Manage Storage Plugin Configurations in a Configuration File (DRILL-6494)</h2>
 
 <p>You can manage storage plugin configurations in the Drill configuration file,  storage-plugins-override.conf. When you provide the storage plugin configurations in the storage-plugins-override.conf file, Drill reads the file and configures the plugins during start-up. See <a href="https://drill.apache.org/docs/configuring-storage-plugins/#configuring-storage-plugins-with-the-storage-plugins-override.conf-file">Configuring Storage Plugins with the storage-plugins-override.conf File</a>.  </p>
 
-<h2 id="query-metadata-in-various-image-formats-(drill-4364)">Query Metadata in Various Image Formats (DRILL-4364)</h2>
+<h2 id="query-metadata-in-various-image-formats-drill-4364">Query Metadata in Various Image Formats (DRILL-4364)</h2>
 
 <p>The metadata format plugin is useful for querying a large number of image files stored in a distributed file system. You do not have to build a metadata repository in advance.<br>
 See <a href="/docs/image-metadata-format-plugin/">Image Metadata Format Plugin</a>.  </p>
 
-<h2 id="set-hive-properties-at-the-session-level-(drill-6575)">Set Hive Properties at the Session Level (DRILL-6575)</h2>
+<h2 id="set-hive-properties-at-the-session-level-drill-6575">Set Hive Properties at the Session Level (DRILL-6575)</h2>
 
 <p>The store.hive.conf.properties option enables you to specify Hive properties at the session level using the SET command. See <a href="/docs/hive-storage-plugin/#setting-hive-properties">Setting Hive Properties</a>.   </p>
 
diff --git a/docs/aggregate-window-functions/index.html b/docs/aggregate-window-functions/index.html
index 242f2c2..d23f340 100644
--- a/docs/aggregate-window-functions/index.html
+++ b/docs/aggregate-window-functions/index.html
@@ -1356,7 +1356,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
 
 <p>The following examples show queries that use each of the aggregate window functions in Drill. See <a href="/docs/sql-window-functions-examples/">SQL Window Functions Examples</a> for information about the data and setup for these examples.</p>
 
-<h3 id="avg()">AVG()</h3>
+<h3 id="avg">AVG()</h3>
 
 <p>The following query uses the AVG() window function with the PARTITION BY clause to calculate the average sales for each car dealer in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, avg(sales) over (partition by dealer_id) as avgsales from q1_sales;
@@ -1376,7 +1376,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+-----------+
    10 rows selected (0.455 seconds)
 </code></pre></div>
-<h3 id="count()">COUNT()</h3>
+<h3 id="count">COUNT()</h3>
 
 <p>The following query uses the COUNT (*) window function to count the number of sales in Q1, ordered by dealer_id. The word count is enclosed in back ticks (``) because it is a reserved keyword in Drill.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, count(*) over(order by dealer_id) as `count` from q1_sales;
@@ -1414,7 +1414,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+--------+
    10 rows selected (0.249 seconds)
 </code></pre></div>
-<h3 id="max()">MAX()</h3>
+<h3 id="max">MAX()</h3>
 
 <p>The following query uses the MAX() window function with the PARTITION BY clause to identify the employee with the maximum number of car sales in Q1 at each dealership. The word max is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, max(sales) over(partition by dealer_id) as `max` from q1_sales;
@@ -1434,7 +1434,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+--------+
    10 rows selected (0.402 seconds)
 </code></pre></div>
-<h3 id="min()">MIN()</h3>
+<h3 id="min">MIN()</h3>
 
 <p>The following query uses the MIN() window function with the PARTITION BY clause to identify the employee with the minimum number of car sales in Q1 at each dealership. The word min is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, min(sales) over(partition by dealer_id) as `min` from q1_sales;
@@ -1454,7 +1454,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+-------+
    10 rows selected (0.194 seconds)
 </code></pre></div>
-<h3 id="sum()">SUM()</h3>
+<h3 id="sum">SUM()</h3>
 
 <p>The following query uses the SUM() window function to total the amount of sales for each dealer in Q1. The word sum is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, sum(sales) over(partition by dealer_id) as `sum` from q1_sales;
diff --git a/docs/analyzing-the-yelp-academic-dataset/index.html b/docs/analyzing-the-yelp-academic-dataset/index.html
index 2b79e4c..8350f98 100644
--- a/docs/analyzing-the-yelp-academic-dataset/index.html
+++ b/docs/analyzing-the-yelp-academic-dataset/index.html
@@ -1315,7 +1315,7 @@ analysis extremely easy.</p>
 
 <h2 id="querying-data-with-drill">Querying Data with Drill</h2>
 
-<h3 id="1.-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
+<h3 id="1-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; !set maxwidth 10000
 
 0: jdbc:drill:zk=local&gt; select * from
@@ -1335,7 +1335,7 @@ analysis extremely easy.</p>
 
 <p>You can directly query self-describing files such as JSON, Parquet, and text. There is no need to create metadata definitions in the Hive metastore.</p>
 
-<h3 id="2.-explore-the-business-data-set-further">2. Explore the business data set further</h3>
+<h3 id="2-explore-the-business-data-set-further">2. Explore the business data set further</h3>
 
 <h4 id="total-reviews-in-the-data-set">Total reviews in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select sum(review_count) as totalreviews 
@@ -1386,7 +1386,7 @@ group by stars order by stars desc;
 | 1.0        | 4.0        |
 +------------+------------+
 </code></pre></div>
-<h4 id="top-businesses-with-high-review-counts-(&gt;-1000)">Top businesses with high review counts (&gt; 1000)</h4>
+<h4 id="top-businesses-with-high-review-counts-1000">Top businesses with high review counts (&gt; 1000)</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select name, state, city, `review_count` from
 dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json`
 where review_count &gt; 1000 order by `review_count` desc limit 10;
@@ -1430,7 +1430,7 @@ b limit 10;
 </code></pre></div>
 <p>Note how Drill can traverse and refer through multiple levels of nesting.</p>
 
-<h3 id="3.-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
+<h3 id="3-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
 
 <p>Note that the attributes column in the Yelp business data set has a different
 element for every row, representing that businesses can have separate
@@ -1478,7 +1478,7 @@ on data.</p>
 | true  | store.json.all_text_mode updated.  |
 +-------+------------------------------------+
 </code></pre></div>
-<h3 id="4.-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
+<h3 id="4-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
 
 <h4 id="number-of-restaurants-in-the-data-set">Number of restaurants in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select count(*) as TotalRestaurants from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json` where true=repeated_contains(categories,&#39;Restaurants&#39;);
@@ -1550,9 +1550,9 @@ order by count(categories[0]) desc limit 10;
 | Hair Salons          | 901           |
 +----------------------+---------------+
 </code></pre></div>
-<h3 id="5.-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses.">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
+<h3 id="5-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
 
-<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset.">Take a look at the contents of the Yelp reviews dataset.</h4>
+<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset">Take a look at the contents of the Yelp reviews dataset.</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select * 
 from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_review.json` limit 1;
 +---------------------------------+------------------------+------------------------+-------+------------+----------------------------------------------------------------------+--------+------------------------+
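 <p>Section 5's stated goal is to combine reviews with businesses. A hedged sketch of such a join, assuming the same dataset paths used above and the business_id key that both Yelp files share:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select b.name, avg(r.stars) as avg_stars
from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_review.json` r
join dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json` b
on r.business_id = b.business_id
group by b.name order by avg_stars desc limit 10;
 </code></pre></div>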
diff --git a/docs/apache-drill-0-5-0-release-notes/index.html b/docs/apache-drill-0-5-0-release-notes/index.html
index 83c7816..c338992 100644
--- a/docs/apache-drill-0-5-0-release-notes/index.html
+++ b/docs/apache-drill-0-5-0-release-notes/index.html
@@ -1282,7 +1282,7 @@
 enthusiasts start working and experimenting with Drill. It also continues the
 Drill monthly release cycle as we drive towards general availability.</p>
 
-<p>The 0.5.0 release is primarily a bug fix release, with <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12324880">more than 100 JIRAs</a> closed, but there are some notable features. For information
+<p>The 0.5.0 release is primarily a bug fix release, with <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12324880">more than 100 JIRAs</a> closed, but there are some notable features. For information
 about the features, see the <a href="https://blogs.apache.org/drill/entry/apache_drill_beta_release_see">Apache Drill Blog for the 0.5.0
 release</a>.</p>
 
diff --git a/docs/apache-drill-0-6-0-release-notes/index.html b/docs/apache-drill-0-6-0-release-notes/index.html
index d567489..191b2cc 100644
--- a/docs/apache-drill-0-6-0-release-notes/index.html
+++ b/docs/apache-drill-0-6-0-release-notes/index.html
@@ -1290,7 +1290,7 @@ JIRAs that can help you run Drill against your preferred distribution.</p>
 
 <p>Apache Drill 0.6.0 Key Features</p>
 
-<p>This release is primarily a bug fix release, with <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12327472">more than 30 JIRAs closed</a>, but there are some notable features:</p>
+<p>This release is primarily a bug fix release, with <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12327472">more than 30 JIRAs closed</a>, but there are some notable features:</p>
 
 <ul>
 <li>Direct ANSI SQL access to MongoDB, using the latest <a href="/docs/mongodb-storage-plugin">MongoDB Plugin for Apache Drill</a></li>
diff --git a/docs/apache-drill-0-8-0-release-notes/index.html b/docs/apache-drill-0-8-0-release-notes/index.html
index e49cf9a..6ebc2a4 100644
--- a/docs/apache-drill-0-8-0-release-notes/index.html
+++ b/docs/apache-drill-0-8-0-release-notes/index.html
@@ -1279,7 +1279,7 @@
     <div class="int_text" align="left">
       
         <p>Apache Drill 0.8.0 continues the Drill release cycle as we drive towards general availability.
-This release includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12328812">243 resolved JIRAs</a> and numerous enhancements.</p>
+This release includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12328812">243 resolved JIRAs</a> and numerous enhancements.</p>
 
 <p>This release is available as
 <a href="http://www.apache.org/dyn/closer.cgi/drill/drill-0.8.0/apache-drill-0.8.0.tar.gz">binary</a> and
diff --git a/docs/apache-drill-0-9-0-release-notes/index.html b/docs/apache-drill-0-9-0-release-notes/index.html
index ff13a29..f18a717 100644
--- a/docs/apache-drill-0-9-0-release-notes/index.html
+++ b/docs/apache-drill-0-9-0-release-notes/index.html
@@ -1278,7 +1278,7 @@
 
     <div class="int_text" align="left">
       
-        <p>It has been about a month since the release of Drill 0.8, which included <a href="/blog/2016/08/30/drill-1.8-released/">more than 240 improvements</a>. Today we&#39;re happy to announce the availability of Drill 0.9, providing additional enhancements and bug fixes. In fact, this release includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12328813">200 resolved JIRAs</a>. Some of the noteworthy features in Drill 0.9 are:</p>
+        <p>It has been about a month since the release of Drill 0.8, which included <a href="/blog/2016/08/30/drill-1.8-released/">more than 240 improvements</a>. Today we&#39;re happy to announce the availability of Drill 0.9, providing additional enhancements and bug fixes. In fact, this release includes <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12328813">200 resolved JIRAs</a>. Some of the noteworthy features in Drill 0.9 are:</p>
 
 <ul>
 <li><strong>Authentication</strong> (<a href="https://issues.apache.org/jira/browse/DRILL-2674">DRILL-2674</a>). Drill now supports username/password authentication through the Java and C++ clients, as well as JDBC and ODBC. On the server-side, Drill leverages Linux PAM to securely validate the credentials. Users can choose to use an external user directory such as Active Directory or LDAP. To enable authentication, set the <code>security.user.auth</code> option in <code>drill-override.c [...]
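 <p>The drill-override.conf snippet is cut off above. For orientation only, a sketch of the shape this option block takes in Drill's user-authentication documentation; the PAM package and profile values are illustrative assumptions:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">drill.exec: {
  security.user.auth: {
    enabled: true,
    packages += &quot;org.apache.drill.exec.rpc.user.security&quot;,
    impl: &quot;pam&quot;,
    pam_profiles: [ &quot;sudo&quot;, &quot;login&quot; ]
  }
}
 </code></pre></div>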
diff --git a/docs/apache-drill-1-1-0-release-notes/index.html b/docs/apache-drill-1-1-0-release-notes/index.html
index 29446f1..040d488 100644
--- a/docs/apache-drill-1-1-0-release-notes/index.html
+++ b/docs/apache-drill-1-1-0-release-notes/index.html
@@ -1282,7 +1282,7 @@
 
 <p>It has been about 6 weeks since the release of Drill 1.0.0. Today we&#39;re happy to announce the availability of Drill 1.1.0, providing 119 additional enhancements and bug fixes. </p>
 
-<h2 id="noteworthy-new-features-in-drill-1.1.0">Noteworthy New Features in Drill 1.1.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1-1-0">Noteworthy New Features in Drill 1.1.0</h2>
 
 <p>Drill now supports window functions, automatic partitioning, and Hive impersonation. </p>
 
@@ -1306,13 +1306,13 @@
 <li>AVG<br></li>
 </ul>
 
-<h3 id="automatic-partitioning-in-ctas-(drill-3333)"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
+<h3 id="automatic-partitioning-in-ctas-drill-3333"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
 
 <p>When a table is created with a partition by clause, the parquet writer will create separate files for the different partition values. The data will first be sorted by the partition keys, and the parquet writer will create a new file when it encounters a new value for the partition columns. </p>
 
 <p>When queries are issued against data that was created this way, partition pruning will work if the filter contains a partition column. Unlike directory-based partitioning, no view is required, nor is it necessary to reference the dir* column names. </p>
 
-<h3 id="hive-impersonation-support-(drill-3203)"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
+<h3 id="hive-impersonation-support-drill-3203"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
 
 <p>When impersonation is enabled, Drill now supports impersonating the user who issued the query when accessing Hive metadata/data (instead of accessing Hive as the user that started the drillbit). </p>
 
diff --git a/docs/apache-drill-1-2-0-release-notes/index.html b/docs/apache-drill-1-2-0-release-notes/index.html
index bc190e7..5b93590 100644
--- a/docs/apache-drill-1-2-0-release-notes/index.html
+++ b/docs/apache-drill-1-2-0-release-notes/index.html
@@ -1287,7 +1287,7 @@
 <li><a href="/docs/apache-drill-1-2-0-release-notes/#important-unresolved-issues">Important unresolved issues</a></li>
 </ul>
 
-<h2 id="noteworthy-new-features-in-drill-1.2.0">Noteworthy New Features in Drill 1.2.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1-2-0">Noteworthy New Features in Drill 1.2.0</h2>
 
 <p>This release of Drill introduces a number of enhancements, including the following ones:</p>
 
diff --git a/docs/apache-drill-contribution-guidelines/index.html b/docs/apache-drill-contribution-guidelines/index.html
index c394910..b444e33 100644
--- a/docs/apache-drill-contribution-guidelines/index.html
+++ b/docs/apache-drill-contribution-guidelines/index.html
@@ -1299,12 +1299,12 @@ Drill. For ideas about <em>what</em> you might contribute, please see open ticke
 
 <p>You may also be interested in the <a href="/docs/apache-drill-contribution-guidelines/#additional-information">additional information</a> at the end of this document. </p>
 
-<h2 id="step-1:-get-the-source-code.">Step 1: Get the source code.</h2>
+<h2 id="step-1-get-the-source-code">Step 1: Get the source code.</h2>
 
 <p>First, you need the Drill source code. You can use Git to put the source code on your local drive. Most development is done on &quot;master&quot;.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">git clone https://git-wip-us.apache.org/repos/asf/drill.git
 </code></pre></div>
-<h2 id="step-2:-get-approval-and-modify-the-source-code.">Step 2: Get approval and modify the source code.</h2>
+<h2 id="step-2-get-approval-and-modify-the-source-code">Step 2: Get approval and modify the source code.</h2>
 
 <p>Before you start, send a message to the <a href="http://mail-archives.apache.org/mod_mbox/drill-dev/">Drill developer mailing list</a> or file a bug report in <a href="https://issues.apache.org/jira/browse/DRILL">JIRA</a> describing your proposed changes. Doing this helps to verify that your changes will work with what others are doing and have planned for the project. Be patient, it may take folks a while to understand your requirements. For detailed designs, the Drill team uses <a h [...]
 
@@ -1351,7 +1351,7 @@ following settings into your browser:</p>
 <p>To build Drill with Maven, run the following command:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">mvn clean install 
 </code></pre></div>
-<h2 id="step-3:-get-your-code-reviewed-and-committed-to-the-project.">Step 3: Get your code reviewed and committed to the project.</h2>
+<h2 id="step-3-get-your-code-reviewed-and-committed-to-the-project">Step 3: Get your code reviewed and committed to the project.</h2>
 
 <p>This section describes the GitHub pull request-based review process for Apache Drill.   </p>
 
@@ -1409,7 +1409,7 @@ This information can be found in the <a href="https://issues.apache.org/jira/bro
 
 <h2 id="additional-information">Additional Information</h2>
 
-<h3 id="where-is-a-good-place-to-start-contributing?">Where is a good place to start contributing?</h3>
+<h3 id="where-is-a-good-place-to-start-contributing">Where is a good place to start contributing?</h3>
 
 <p>After getting the source code, building and running a few simple queries, one
 of the simplest places to start is to implement a DrillFunc. DrillFuncs are the way that Drill expresses all scalar functions (UDF or system).  </p>
@@ -1425,7 +1425,7 @@ or SQL Server). Then try to implement one.</p>
 
 <p>More contribution ideas are located on the <a href="/docs/apache-drill-contribution-ideas">Contribution Ideas</a> page.</p>
 
-<h3 id="what-are-the-jira-guidelines?">What are the JIRA guidelines?</h3>
+<h3 id="what-are-the-jira-guidelines">What are the JIRA guidelines?</h3>
 
 <p>Please comment on issues in JIRA, making your concerns known. Please also
 vote for issues that are a high priority for you.</p>
diff --git a/docs/apache-drill-contribution-ideas/index.html b/docs/apache-drill-contribution-ideas/index.html
index 4d97541..161cfbd 100644
--- a/docs/apache-drill-contribution-ideas/index.html
+++ b/docs/apache-drill-contribution-ideas/index.html
@@ -1336,7 +1336,7 @@ own use case). Then try to implement one.</p>
 <li>Approximate aggregate functions (such as what is available in BlinkDB)</li>
 </ul>
 
-<h2 id="support-for-new-file-format-readers/writers">Support for new file format readers/writers</h2>
+<h2 id="support-for-new-file-format-readers-writers">Support for new file format readers/writers</h2>
 
 <p>Currently, Drill supports text, JSON, and Parquet file formats natively when
 interacting with the file system. More readers/writers can be introduced by
diff --git a/docs/appendix-a-release-note-issues/index.html b/docs/appendix-a-release-note-issues/index.html
index fdc3845..db0932f 100644
--- a/docs/appendix-a-release-note-issues/index.html
+++ b/docs/appendix-a-release-note-issues/index.html
@@ -1281,7 +1281,7 @@
         <p>Drill-on-YARN creates a tighter coupling between Drill and Hadoop than did previous Drill
 versions. You should be aware of the following compatibility issues:</p>
 
-<h2 id="migrating-the-$drill_home/conf/drill-env.sh-script">Migrating the $DRILL_HOME/conf/drill-env.sh Script</h2>
+<h2 id="migrating-the-drill_home-conf-drill-env-sh-script">Migrating the $DRILL_HOME/conf/drill-env.sh Script</h2>
 
 <p>Prior to Drill 1.8, the drill-env.sh script contained Drill defaults, distribution-specific
 settings, and configuration specific to your application (“site”). In Drill 1.8, the Drill and distribution settings are moved to other locations. The site-specific settings change in format to allow YARN to override them. The following section details the changes you must make if you reuse a drill-env.sh file from a prior release. (If you create a new file, you can skip this section.)  </p>
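 <p>As a hedged illustration of the Drill 1.8 site-override style this section refers to, guarded exports let YARN or a site file supply a value first; the variable names follow the drill-env.sh convention in Drill's documentation, and the sizes are placeholders:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">export DRILL_HEAP=${DRILL_HEAP:-&quot;4G&quot;}
export DRILL_MAX_DIRECT_MEMORY=${DRILL_MAX_DIRECT_MEMORY:-&quot;8G&quot;}
 </code></pre></div>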
@@ -1334,7 +1334,7 @@ $DRILL_HOME/jars/3rdparty directory. Although YARN offers Drill a Java class-pat
 the Hadoop jars, Drill uses its own copies instead to ensure Drill runs under the same
 configuration with which it was tested. Drill distributions that are part of a complete Hadoop distribution (such as the MapR distribution) have already verified version compatibility for you. If you are assembling your own Hadoop and Drill combination, you should verify that the Hadoop version packaged with Drill is compatible with the version running on your YARN cluster.  </p>
 
-<h2 id="$drill_home/conf/core-site.xml-issue">$DRILL_HOME/conf/core-site.xml Issue</h2>
+<h2 id="drill_home-conf-core-site-xml-issue">$DRILL_HOME/conf/core-site.xml Issue</h2>
 
 <p>Prior versions of Drill included a file in the $DRILL_HOME/conf directory called
 core-site.xml. YARN relies on a file with the same name in the Hadoop configuration directory. The Drill copy hides the YARN copy, preventing YARN from operating correctly. For this reason, version 1.8 of Drill renames the example file to core-site-example.xml. When upgrading an existing Drill installation, do not copy the file from your current version of Drill to the new version. If you modified core-site.xml, you should merge your changes with Hadoop’s core-site.xml file.  </p>
diff --git a/docs/azure-blob-storage-plugin/index.html b/docs/azure-blob-storage-plugin/index.html
index f176e46..4f277e4 100644
--- a/docs/azure-blob-storage-plugin/index.html
+++ b/docs/azure-blob-storage-plugin/index.html
@@ -1305,7 +1305,7 @@
 
 <p>Refer to <a href="/docs/azure-blob-storage-plugin/#configuring-the-azure-blob-storage-plugin">Configuring the Azure Blob Storage Plugin</a>. </p>
 
-<h3 id="defining-access-keys-in-the-drill-core-site.xml-file">Defining Access Keys in the Drill core-site.xml File</h3>
+<h3 id="defining-access-keys-in-the-drill-core-site-xml-file">Defining Access Keys in the Drill core-site.xml File</h3>
 
 <p>To configure Drill to access the Azure Blob Storage account that contains the data you want to query, you must provide the authentication key. To get the authentication key, you can use the AZ CLI:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">az storage account keys list -g &lt;resource-group&gt; -n &lt;storage-account-name&gt;
diff --git a/docs/compiling-drill-from-source/index.html b/docs/compiling-drill-from-source/index.html
index 58c6b0c..c206fb8 100644
--- a/docs/compiling-drill-from-source/index.html
+++ b/docs/compiling-drill-from-source/index.html
@@ -1297,10 +1297,10 @@ Maven and JDK installed:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">java -version
 mvn -version
 </code></pre></div>
-<h2 id="1.-clone-the-repository">1. Clone the Repository</h2>
+<h2 id="1-clone-the-repository">1. Clone the Repository</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">git clone https://git-wip-us.apache.org/repos/asf/drill.git
 </code></pre></div>
-<h2 id="2.-compile-the-code">2. Compile the Code</h2>
+<h2 id="2-compile-the-code">2. Compile the Code</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">cd drill
 mvn clean install -DskipTests
 </code></pre></div>
diff --git a/docs/configuring-drill-to-use-spnego-for-http-authentication/index.html b/docs/configuring-drill-to-use-spnego-for-http-authentication/index.html
index 00dff18..aa606f7 100644
--- a/docs/configuring-drill-to-use-spnego-for-http-authentication/index.html
+++ b/docs/configuring-drill-to-use-spnego-for-http-authentication/index.html
@@ -1313,7 +1313,7 @@
 
 <p>The following sections provide the steps that an administrator can follow to configure SPNEGO on the web server (Drillbit). An administrator or a user can follow the steps for configuring the web browser or client tool, such as curl.  </p>
 
-<h3 id="configuring-spnego-on-the-drillbit-(web-server)">Configuring SPNEGO on the Drillbit (Web Server)</h3>
+<h3 id="configuring-spnego-on-the-drillbit-web-server">Configuring SPNEGO on the Drillbit (Web Server)</h3>
 
 <p>To configure SPNEGO on the web server, complete the following steps:<br>
 1-Generate a Kerberos principal on each web server that will receive inbound SPNEGO traffic. Each principal must have a corresponding keytab. The principal must have the following form:  </p>
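 <p>The principal format itself falls outside this hunk. For reference, SPNEGO service principals conventionally use the HTTP service class; this form is a general Kerberos convention, not quoted from this page:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">HTTP/&lt;fqdn&gt;@&lt;REALM&gt;
 </code></pre></div>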
diff --git a/docs/configuring-jreport-with-drill/index.html b/docs/configuring-jreport-with-drill/index.html
index bf6b84c..a4c8828 100644
--- a/docs/configuring-jreport-with-drill/index.html
+++ b/docs/configuring-jreport-with-drill/index.html
@@ -1290,7 +1290,7 @@
 <li>Use JReport Designer to query the data and create a report.</li>
 </ol>
 
-<h2 id="step-1:-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h2>
+<h2 id="step-1-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h2>
 
 <p>Drill provides standard JDBC connectivity to integrate with JReport. JReport 13.1 requires Drill 1.0 or later.
 For general instructions on installing the Drill JDBC driver, see <a href="/docs/using-the-jdbc-driver/">Using JDBC</a>.</p>
@@ -1308,7 +1308,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 <li><p>Verify that the JReport system can resolve the hostnames of the ZooKeeper nodes of the Drill cluster. You can do this by configuring DNS for all of the systems. Alternatively, you can edit the hosts file on the JReport system to include the hostnames and IP addresses of all the ZooKeeper nodes used with the Drill cluster.  For Linux systems, the hosts file is located at <code>/etc/hosts</code>. For Windows systems, the hosts file is located at <code>%WINDIR%\system32\drivers\etc\h [...]
 </ol>
 
-<h2 id="step-2:-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h2>
+<h2 id="step-2-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h2>
 
 <ol>
 <li> Click Create <strong>New -&gt; Catalog…</strong></li>
@@ -1323,7 +1323,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 <li>Click <strong>Done</strong> when you have added all the tables you need. </li>
 </ol>
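 <p>When the catalog prompts for the connection, Drill's documented ZooKeeper-based JDBC URL form applies; a sketch with hypothetical ZooKeeper hosts and the default cluster ID:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">jdbc:drill:zk=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/drill/drillbits1
 </code></pre></div>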
 
-<h2 id="step-3:-use-jreport-designer">Step 3: Use JReport Designer</h2>
+<h2 id="step-3-use-jreport-designer">Step 3: Use JReport Designer</h2>
 
 <ol>
 <li> In the Catalog Browser, right-click <strong>Queries</strong> and select <strong>Add Query…</strong></li>
diff --git a/docs/configuring-kerberos-security/index.html b/docs/configuring-kerberos-security/index.html
index 36b8c41..23ecc45 100644
--- a/docs/configuring-kerberos-security/index.html
+++ b/docs/configuring-kerberos-security/index.html
@@ -1495,7 +1495,7 @@
 
 <h4 id="example-of-a-simple-connection-url">Example of a Simple Connection URL</h4>
 
-<h5 id="example-1:-tgt-for-client-credentials">Example 1: TGT for Client Credentials</h5>
+<h5 id="example-1-tgt-for-client-credentials">Example 1: TGT for Client Credentials</h5>
 
 <p>The simplest way to connect using Kerberos is to generate a TGT on the client side. Only specify the service principal in the JDBC connection string for the drillbit the user wants to connect to.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">jdbc:drill:drillbit=10.10.10.10;principal=&lt;principal for host 10.10.10.10&gt;
@@ -1520,7 +1520,7 @@
   <p class="last">For end-to-end authentication to function, it is assumed that the proper principal for the drillbit service is configured in the KDC.  </p>
 </div>
 
-<h5 id="example-2:-drillbit-provided-by-direct-connection-string-and-configured-with-a-unique-service-principal">Example 2: Drillbit Provided by Direct Connection String and Configured with a Unique Service Principal</h5>
+<h5 id="example-2-drillbit-provided-by-direct-connection-string-and-configured-with-a-unique-service-principal">Example 2: Drillbit Provided by Direct Connection String and Configured with a Unique Service Principal</h5>
 
 <p>This type of connection string is used when:</p>
 
@@ -1540,7 +1540,7 @@
 
 <p>The internally created service principal will be <strong><code>drill/host1@&lt;realm from TGT&gt;</code></strong>.</p>
 
-<h5 id="example-3:-drillbit-selected-by-zookeeper-and-configured-with-unique-service-principal">Example 3: Drillbit Selected by ZooKeeper and Configured with Unique Service Principal</h5>
+<h5 id="example-3-drillbit-selected-by-zookeeper-and-configured-with-unique-service-principal">Example 3: Drillbit Selected by ZooKeeper and Configured with Unique Service Principal</h5>
 
 <p>This type of connection string is used when the drillbit is chosen by ZooKeeper instead of directly from the connection string.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">jdbc:drill:zk=host01.aws.lab:5181;auth=kerberos;service_name=myDrill
@@ -1554,7 +1554,7 @@
 
 <p>The internally created service principal will be <strong><code>myDrill/&lt;host address from zk&gt;@&lt;realm from TGT&gt;</code></strong>.</p>
 
-<h5 id="example-4:-drillbit-selected-by-zookeeper-and-configured-with-a-common-service-principal">Example 4: Drillbit Selected by Zookeeper and Configured with a Common Service Principal</h5>
+<h5 id="example-4-drillbit-selected-by-zookeeper-and-configured-with-a-common-service-principal">Example 4: Drillbit Selected by Zookeeper and Configured with a Common Service Principal</h5>
 
 <p>This type of connection string is used when all drillbits in a cluster use the same principal.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">jdbc:drill:zk=host01.aws.lab:5181;auth=kerberos;service_name=myDrill;service_host=myDrillCluster
@@ -1568,7 +1568,7 @@
 
 <p>The internally created service principal will be <strong><code>myDrill/myDrillCluster@&lt;realm from TGT&gt;</code></strong>.</p>
 
-<h5 id="example-5:-keytab-for-client-credentials">Example 5: Keytab for Client Credentials</h5>
+<h5 id="example-5-keytab-for-client-credentials">Example 5: Keytab for Client Credentials</h5>
 
 <p>If a client chooses to provide its credentials in a keytab instead of a TGT, it must also provide a principal in the user parameter.  In this case, realm information will be extracted from the <code>/etc/krb5.conf</code> file on the node if it is not provided in the connection URL. All other parameters can be used as shown in the preceding examples (1-4). This connection string is for the case when all drillbits in a cluster use the same principal.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">jdbc:drill:zk=host01.aws.lab:5181;auth=kerberos;service_name=myDrill;service_host=myDrillCluster;keytab=&lt;path to keytab file&gt;;user=&lt;client principal&gt;
diff --git a/docs/configuring-odbc-on-linux/index.html b/docs/configuring-odbc-on-linux/index.html
index 2ec5300..f7ee8ba 100644
--- a/docs/configuring-odbc-on-linux/index.html
+++ b/docs/configuring-odbc-on-linux/index.html
@@ -1312,7 +1312,7 @@ on Linux, copy the following configuration files in <code>/opt/mapr/drill/Setup<
 
 <hr>
 
-<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <ol>
 <li><p>Set the ODBCINI environment variable to point to the <code>.odbc.ini</code> in your home directory. </p>
@@ -1338,7 +1338,7 @@ Only include the path to the shared libraries corresponding to the driver matchi
 
 <hr>
 
-<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. To use Drill in embedded mode, set the following properties:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ConnectionType=Direct
@@ -1476,7 +1476,7 @@ behavior of DSNs using the Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1497,7 +1497,7 @@ Driver=/opt/mapr/lib/64/libdrillodbc_sb64.so
 </code></pre></div>
 <hr>
 
-<h2 id="step-4:-configure-the-drill-odbc-driver">Step 4: Configure the Drill ODBC Driver</h2>
+<h2 id="step-4-configure-the-drill-odbc-driver">Step 4: Configure the Drill ODBC Driver</h2>
 
 <p>Configure the Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1515,7 +1515,7 @@ SwapFilePath=/tmp
 
 . . .
 </code></pre></div>
-<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the Drill ODBC Driver in the <code>.mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
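 <p>For orientation, a cluster-mode counterpart to the Direct sample under Step 2 might look like the following sketch; the property names are the documented ZooKeeper ones, and the hosts are hypothetical:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">[Sample MapR Drill DSN 64]
Driver=/opt/mapr/lib/64/libdrillodbc_sb64.so
ConnectionType=ZooKeeper
ZKQuorum=zk1.example.com:5181,zk2.example.com:5181
ZKClusterID=drillbits1
 </code></pre></div>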
 
diff --git a/docs/configuring-odbc-on-mac-os-x/index.html b/docs/configuring-odbc-on-mac-os-x/index.html
index 2432dd1..bae86bc 100644
--- a/docs/configuring-odbc-on-mac-os-x/index.html
+++ b/docs/configuring-odbc-on-mac-os-x/index.html
@@ -1291,7 +1291,7 @@ procedure:</p>
 <li><a href="/docs/configuring-odbc-on-mac-os-x/#step-4:-configure-the-drill-odbc-driver">Step 4: Configure the Drill ODBC Driver</a></li>
 </ul>
 
-<h2 id="step-1:-driver-installer-updates-sample-configuration-files">Step 1: Driver Installer Updates Sample Configuration Files</h2>
+<h2 id="step-1-driver-installer-updates-sample-configuration-files">Step 1: Driver Installer Updates Sample Configuration Files</h2>
 
 <p>Before you connect to Drill through an ODBC client tool on Mac OS X, the driver installer copies the following configuration files in <code>/Library/mapr/drill/Setup</code> to your home directory unless the files already exist in your home directory:</p>
 
@@ -1325,7 +1325,7 @@ procedure:</p>
 
 <hr>
 
-<h2 id="step-2:-set-environment-variables">Step 2: Set Environment Variables</h2>
+<h2 id="step-2-set-environment-variables">Step 2: Set Environment Variables</h2>
 
 <p>The driver installer installs the <code>.mapr.drillodbc.ini</code> file to your home directory and adds an entry to the <code>$HOME/.odbc.ini</code> file. </p>
 
@@ -1337,7 +1337,7 @@ setenv ODBCINI /Users/joeuser/.odbc.ini
 
 <hr>
 
-<h2 id="step-3:-define-the-odbc-data-sources-in-.odbc.ini">Step 3: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-3-define-the-odbc-data-sources-in-odbc-ini">Step 3: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. </p>
 
@@ -1456,7 +1456,7 @@ Schema=
 </code></pre></div>
 <hr>
 
-<h2 id="step-4:-configure-the-drill-odbc-driver">Step 4: Configure the Drill ODBC Driver</h2>
+<h2 id="step-4-configure-the-drill-odbc-driver">Step 4: Configure the Drill ODBC Driver</h2>
 
 <p>Configure the Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1471,7 +1471,7 @@ LogLevel=0
 LogPath=
 SwapFilePath=/tmp
 </code></pre></div>
-<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the Drill ODBC Driver in the <code>.mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 
diff --git a/docs/configuring-odbc-on-windows/index.html b/docs/configuring-odbc-on-windows/index.html
index a832838..c113092 100644
--- a/docs/configuring-odbc-on-windows/index.html
+++ b/docs/configuring-odbc-on-windows/index.html
@@ -1290,7 +1290,7 @@
 <li><a href="/docs/configuring-odbc-on-windows/#additional-configuration-options">Additional Configuration Options</a></li>
 </ul>
 
-<h2 id="step-1:-create-a-data-source-name-(dsn)">Step 1: Create a Data Source Name (DSN)</h2>
+<h2 id="step-1-create-a-data-source-name-dsn">Step 1: Create a Data Source Name (DSN)</h2>
 
 <p>You can see how to create a DSN to connect to Drill data sources by taking a look at the preconfigured sample that the installer sets up. If you want to create a DSN for a 32- or 64-bit application, you must use the 32- or 64-bit
 version of the ODBC Data Source Administrator to create the DSN.</p>
@@ -1311,7 +1311,7 @@ version of the ODBC Data Source Administrator to create the DSN.</p>
 
 <p>To access Drill Explorer, click <strong>Drill Explorer...</strong>. See <a href="/docs/drill-explorer-introduction/">Drill Explorer</a> for more information.</p>
 
-<h3 id="step-2:-select-an-authentication-option">Step 2: Select an Authentication Option</h3>
+<h3 id="step-2-select-an-authentication-option">Step 2: Select an Authentication Option</h3>
 
 <p>To password protect the DSN, select the appropriate authentication type in the <strong>Authentication Type</strong> dropdown.  If the Drillbit does not require authentication (or to configure no password protection), you can use the No Authentication option. You do not need to configure additional settings.</p>
 
@@ -1325,7 +1325,7 @@ version of the ODBC Data Source Administrator to create the DSN.</p>
 <li><strong>Plain Authentication</strong> - configure UID and PWD properties. </li>
 </ul>
 
-<h3 id="step-3:-configure-the-connection-type">Step 3: Configure the Connection Type</h3>
+<h3 id="step-3-configure-the-connection-type">Step 3: Configure the Connection Type</h3>
 
 <p>In the <strong>Connection Type</strong> section, <strong>Direct to Drillbit</strong> is selected for using Drill in embedded mode. To use Drill in embedded mode, set the connection type to <strong>Direct</strong> and define HOST and PORT properties. For example:</p>
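 <p>The sample values fall outside this hunk; a plausible pair, assuming a local embedded Drillbit listening on Drill's default user port, would be:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">HOST=localhost
PORT=31010
 </code></pre></div>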
 
@@ -1348,7 +1348,7 @@ Name of the drillbit cluster. Check the <code>drill-override.conf</code> file fo
 
 <p>Check the <code>drill-override.conf</code> file for the cluster name.  </p>
 
-<h2 id="step-4:-configure-advanced-properties-(optional)">Step 4: Configure Advanced Properties (optional)</h2>
+<h2 id="step-4-configure-advanced-properties-optional">Step 4: Configure Advanced Properties (optional)</h2>
 
 <p>The <a href="/docs/odbc-configuration-reference/">Advanced Properties</a> section describes the advanced configuration properties in detail.  </p>
 
diff --git a/docs/configuring-resources-for-a-shared-drillbit/index.html b/docs/configuring-resources-for-a-shared-drillbit/index.html
index cc5ed7a..5d4687b 100644
--- a/docs/configuring-resources-for-a-shared-drillbit/index.html
+++ b/docs/configuring-resources-for-a-shared-drillbit/index.html
@@ -1311,7 +1311,7 @@ The maximum degree of distribution of a query across cores and cluster nodes.</l
 Same as max per node but applies to the query as executed by the entire cluster.</li>
 </ul>
 
-<h3 id="planner.width.max_per_node">planner.width.max_per_node</h3>
+<h3 id="planner-width-max_per_node">planner.width.max_per_node</h3>
 
 <p>Configure the <code>planner.width.max_per_node</code> to achieve fine grained, absolute control over parallelization. In this context <em>width</em> refers to fanout or distribution potential: the ability to run a query in parallel across the cores on a node and the nodes on a cluster. A physical plan consists of intermediate operations, known as query &quot;fragments,&quot; that run concurrently, yielding opportunities for parallelism above and below each exchange operator in the pla [...]
 
@@ -1321,7 +1321,7 @@ Same as max per node but applies to the query as executed by the entire cluster.
 
 <p>When you modify the default setting, you can supply any meaningful number. The system does not automatically scale down your setting.</p>
 
-<h3 id="planner.width.max_per_query">planner.width.max_per_query</h3>
+<h3 id="planner-width-max_per_query">planner.width.max_per_query</h3>
 
 <p>The max_per_query value also sets the maximum degree of parallelism for any given stage of a query, but the setting applies to the query as executed by the whole cluster (multiple nodes). In effect, the actual maximum width per query is the <em>minimum of two values</em>: min((number of nodes * width.max_per_node), width.max_per_query)</p>
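 <p>Both options are ordinary system/session options, so they can be inspected and set from SQL; a short sketch in which the values are placeholders:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">select name, num_val from sys.options where name like &#39;planner.width%&#39;;
alter system set `planner.width.max_per_node` = 8;
alter session set `planner.width.max_per_query` = 100;
 </code></pre></div>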
 
diff --git a/docs/configuring-ssl-tls-for-encryption/index.html b/docs/configuring-ssl-tls-for-encryption/index.html
index 22154aa..b1f31b1 100644
--- a/docs/configuring-ssl-tls-for-encryption/index.html
+++ b/docs/configuring-ssl-tls-for-encryption/index.html
@@ -1310,7 +1310,7 @@
 
 <p><a href="/docs/starting-drill-in-distributed-mode/">Restart Drill</a> after you modify the configuration options.  </p>
 
-<h2 id="enabling-and-configuring-ssl/tls">Enabling and Configuring SSL/TLS</h2>
+<h2 id="enabling-and-configuring-ssl-tls">Enabling and Configuring SSL/TLS</h2>
 
 <p>Enable SSL in <code>&lt;DRILL_INSTALL_HOME&gt;/conf/drill-override.conf</code>. You can use several configuration options to customize SSL.</p>
 
@@ -1318,13 +1318,13 @@
 
 <p>The following sections provide information and instructions for enabling and configuring SSL/TLS:</p>
 
-<h3 id="enabling-ssl/tls">Enabling SSL/TLS</h3>
+<h3 id="enabling-ssl-tls">Enabling SSL/TLS</h3>
 
 <p>When SSL is enabled, all Drill clients, such as JDBC and ODBC, must connect to Drill servers using SSL. Enable SSL in the Drill startup configuration file, <code>&lt;DRILL_HOME&gt;/conf/drill-override.conf</code>.</p>
 
 <p>To enable SSL for Drill, set the <code>drill.exec.security.user.encryption.ssl.enabled</code> option in <code>drill-override.conf</code> to <code>&quot;true&quot;</code>.  </p>
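 <p>A minimal drill-override.conf sketch of that setting; the nesting is assumed from the option's dotted name:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">drill.exec: {
  security.user.encryption.ssl.enabled: &quot;true&quot;
}
 </code></pre></div>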
 
-<h3 id="configuring-ssl/tls">Configuring SSL/TLS</h3>
+<h3 id="configuring-ssl-tls">Configuring SSL/TLS</h3>
 
 <p>You can customize SSL on a Drillbit through the SSL configuration options. You can set the options from the command-line (using Java system properties), in the <code>drill-override.conf</code> file, or in the property file to which the Hadoop parameter <code>hadoop.ssl.server.conf</code> points (recommended).  </p>
 
@@ -1578,7 +1578,7 @@
 
 <p>The following sections provide insight to some common error messages that you may encounter with SSL.  </p>
 
-<h3 id="error:-no-cipher-suites-in-common.">ERROR: No Cipher suites in common.</h3>
+<h3 id="error-no-cipher-suites-in-common">ERROR: No Cipher suites in common.</h3>
 
 <p>This is a general-purpose error message that can occur for many reasons. The most common reason is that in order to use certain cipher suites, JSSE needs to use the private key stored in the Keystore. If this key is not accessible, JSSE filters out all cipher suites that need a private key. This effectively prunes out all available cipher suites so that no cipher suites match between the client and the server.</p>
 
@@ -1598,11 +1598,11 @@
 
 <p>You can validate the keystore using keytool.  </p>
 
-<h3 id="error:-ssl-is-enabled,-but-cannot-be-initialized-due-to-the-‘cannot-recover-key’-exception.">ERROR: SSL is enabled, but cannot be initialized due to the ‘Cannot recover key’ exception.</h3>
+<h3 id="error-ssl-is-enabled-but-cannot-be-initialized-due-to-the-cannot-recover-key-exception">ERROR: SSL is enabled, but cannot be initialized due to the ‘Cannot recover key’ exception.</h3>
 
 <p>The key is protected with a password and the provided password is not correct.  </p>
 
-<h3 id="error:-client-connection-timeout.">ERROR: Client connection timeout.</h3>
+<h3 id="error-client-connection-timeout">ERROR: Client connection timeout.</h3>
 
 <p>A client connection can timeout because of networking issues or if there is a mismatch between the TLS/SSL configuration on the client and server.</p>
 
diff --git a/docs/configuring-storage-plugins/index.html b/docs/configuring-storage-plugins/index.html
index cdbc13d..540a8d9 100644
--- a/docs/configuring-storage-plugins/index.html
+++ b/docs/configuring-storage-plugins/index.html
@@ -1348,7 +1348,7 @@ The attribute settings as entered in the Web Console.</p></li>
 
 <p><strong>Note:</strong> If you load an HBase storage plugin configuration using the <code>bootstrap-storage-plugins.json</code> file and HBase is not installed, you might experience a delay when executing the queries. Configure the HBase client timeout and retry settings in the <code>config</code> block of the HBase plugin configuration.  </p>
 
-<h2 id="configuring-storage-plugins-with-the-storage-plugins-override.conf-file">Configuring Storage Plugins with the storage-plugins-override.conf File</h2>
+<h2 id="configuring-storage-plugins-with-the-storage-plugins-override-conf-file">Configuring Storage Plugins with the storage-plugins-override.conf File</h2>
 
 <p>Starting in Drill 1.14, you can manage storage plugin configurations in the Drill configuration file, <code>storage-plugins-override.conf</code>, located in the <code>&lt;drill-installation&gt;/conf</code> directory. When you provide the storage plugin configurations in the <code>storage-plugins-override.conf</code> file, Drill reads the file and configures the plugins during start-up. </p>
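 <p>A sketch of the file's documented shape, with a hypothetical HDFS-backed plugin:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&quot;storage&quot;: {
  dfs: {
    type: &quot;file&quot;,
    connection: &quot;hdfs://namenode.example.com:8020/&quot;,
    enabled: true
  }
}
 </code></pre></div>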
 
diff --git a/docs/configuring-tibco-spotfire-server-with-drill/index.html b/docs/configuring-tibco-spotfire-server-with-drill/index.html
index 4caf97d..a669658 100644
--- a/docs/configuring-tibco-spotfire-server-with-drill/index.html
+++ b/docs/configuring-tibco-spotfire-server-with-drill/index.html
@@ -1291,7 +1291,7 @@
 <li>Query and analyze various data formats with Tibco Spotfire and Drill.</li>
 </ol>
 
-<h2 id="step-1:-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h2>
+<h2 id="step-1-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h2>
 
 <p>Drill provides standard JDBC connectivity, making it easy to integrate data exploration capabilities on complex, schema-less data sets. Tibco Spotfire Server (TSS) requires Drill 1.0 or later, which includes the JDBC driver. The JDBC driver is bundled with the Drill configuration files, and it is recommended that you use the JDBC driver that is shipped with the specific Drill version.</p>
 
@@ -1317,7 +1317,7 @@ For Windows systems, the hosts file is located here:
 <code>%WINDIR%\system32\drivers\etc\hosts</code></p></li>
 </ol>
 
-<h2 id="step-2:-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h2>
+<h2 id="step-2-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h2>
 
 <p>The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.</p>
 
@@ -1370,7 +1370,7 @@ For Windows systems, the hosts file is located here:
   &lt;/java-to-sql-type-conversions&gt;
   &lt;/jdbc-type-settings&gt;
 </code></pre></div>
-<h2 id="step-3:-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h2>
+<h2 id="step-3-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h2>
 
 <p>To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.</p>
 
@@ -1385,7 +1385,7 @@ For Windows systems, the hosts file is located here:
 <li>When the data source is saved, it will appear in the <strong>Data Sources</strong> tab, and you will be able to navigate the schema. <img src="/docs/img/spotfire-server-datasources-tab.png" alt="drill query flow"></li>
 </ol>
 
-<h2 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
+<h2 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
 
 <p>After the Drill data source has been configured in the Information Designer, the information elements can be defined. </p>
 
diff --git a/docs/configuring-user-impersonation-with-hive-authorization/index.html b/docs/configuring-user-impersonation-with-hive-authorization/index.html
index 28549ec..0608090 100644
--- a/docs/configuring-user-impersonation-with-hive-authorization/index.html
+++ b/docs/configuring-user-impersonation-with-hive-authorization/index.html
@@ -1312,7 +1312,7 @@
 <li>Hive remote metastore repository configured<br></li>
 </ul>
 
-<h2 id="step-1:-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
+<h2 id="step-1-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
 
 <p>Modify <code>&lt;DRILL_HOME&gt;/conf/drill-override.conf</code> on each Drill node to include the required properties, set the <a href="/docs/configuring-user-impersonation/#chained-impersonation">maximum number of chained user hops</a>, and restart the Drillbit process.</p>
 
@@ -1331,7 +1331,7 @@
 <code>&lt;DRILLINSTALL_HOME&gt;/bin/drillbit.sh restart</code>  </p></li>
 </ol>
 
-<h2 id="step-2:-updating-hive-site.xml">Step 2:  Updating hive-site.xml</h2>
+<h2 id="step-2-updating-hive-site-xml">Step 2:  Updating hive-site.xml</h2>
 
 <p>Update hive-site.xml with the parameters specific to the type of authorization that you are configuring and then restart Hive.  </p>
 
@@ -1363,7 +1363,7 @@
 <strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user submitting the query. Must be set to true for the storage based model.<br>
 <strong>Value:</strong> true</p>
 
-<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
+<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1439,7 +1439,7 @@
 <strong>Description:</strong> In unsecure mode, setting this property to true causes the metastore to execute DFS operations using the client&#39;s reported user and group permissions. Note: This property must be set on both the client and server sides. This is a best effort property. If the client is set to true and the server is set to false, the client setting is ignored.<br>
 <strong>Value:</strong> false  </p>
 
-<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
+<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1487,7 +1487,7 @@
      &lt;/property&gt;    
     &lt;/configuration&gt;
 </code></pre></div>
-<h2 id="step-3:-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
+<h2 id="step-3-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
 
 <p>Modify the Hive storage plugin configuration in the Drill Web Console to include specific authorization settings. The Drillbit that you use to access the Web Console must be running.  </p>
 
diff --git a/docs/configuring-user-impersonation/index.html b/docs/configuring-user-impersonation/index.html
index 2e3ebdd..592962e 100644
--- a/docs/configuring-user-impersonation/index.html
+++ b/docs/configuring-user-impersonation/index.html
@@ -1346,7 +1346,7 @@ hadoop fs -chown &lt;user&gt;:&lt;group&gt; &lt;file_name&gt;
 </code></pre></div>
 <p>Example: <code>hadoop fs -chmod 750 employees.view.drill</code></p>
 
-<h3 id="modifying-system|session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
+<h3 id="modifying-system-session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
 
 <p>Use the <code>ALTER SESSION|SYSTEM</code> command with the <code>new_view_default_permissions</code> parameter and the appropriate octal code to set view permissions at the system or session level prior to creating a view.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `new_view_default_permissions` = &#39;&lt;octal_code&gt;&#39;;
diff --git a/docs/configuring-web-console-and-rest-api-security/index.html b/docs/configuring-web-console-and-rest-api-security/index.html
index 17ece3e..6be56b2 100644
--- a/docs/configuring-web-console-and-rest-api-security/index.html
+++ b/docs/configuring-web-console-and-rest-api-security/index.html
@@ -1557,35 +1557,35 @@ Set the value of this option to a comma-separated list of administrator groups.<
 </tr>
 </tbody></table>
 
-<h3 id="get-/profiles.json">GET /profiles.json</h3>
+<h3 id="get-profiles-json">GET /profiles.json</h3>
 
 <ul>
 <li>ADMIN - gets all profiles on the system.<br></li>
 <li>USER - only the profiles of the queries the user has launched.</li>
 </ul>
 
-<h3 id="get-/profiles">GET /profiles</h3>
+<h3 id="get-profiles">GET /profiles</h3>
 
 <ul>
 <li>ADMIN - gets all profiles on the system.<br></li>
 <li>USER - only the profiles of the queries the user has launched.</li>
 </ul>
 
-<h3 id="get-/profiles/{queryid}.json">GET /profiles/{queryid}.json</h3>
+<h3 id="get-profiles-queryid-json">GET /profiles/{queryid}.json</h3>
 
 <ul>
 <li>ADMIN - return the profile.<br></li>
 <li>USER - if the query was launched by the requesting user, return it. Otherwise, return an error saying no such profile exists.</li>
 </ul>
 
-<h3 id="get-/profiles/{queryid}">GET /profiles/{queryid}</h3>
+<h3 id="get-profiles-queryid">GET /profiles/{queryid}</h3>
 
 <ul>
 <li>ADMIN - return the profile.<br></li>
 <li>USER - if the query was launched by the requesting user, return it. Otherwise, return an error saying no such profile exists.</li>
 </ul>
 
-<h3 id="get-/profiles/cancel/{queryid}">GET /profiles/cancel/{queryid}</h3>
+<h3 id="get-profiles-cancel-queryid">GET /profiles/cancel/{queryid}</h3>
 
 <ul>
 <li>ADMIN - can cancel the query.<br></li>
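 <p>These endpoints can be exercised directly; a hedged sketch with curl against the Web Console's default port (the hostname is hypothetical; add credentials if authentication is enabled):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">curl http://drillbit.example.com:8047/profiles.json
curl http://drillbit.example.com:8047/profiles/&lt;queryid&gt;.json
 </code></pre></div>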
diff --git a/docs/custom-function-interfaces/index.html b/docs/custom-function-interfaces/index.html
index f626f85..c3416f3 100644
--- a/docs/custom-function-interfaces/index.html
+++ b/docs/custom-function-interfaces/index.html
@@ -1293,13 +1293,13 @@ public static class Add1 implements DrillSimpleFunc{
 
 <p>The simple function interface includes the <code>@Param</code> and <code>@Output</code> holders where you indicate the data types that your function can process.</p>
 
-<h3 id="@param-holder">@Param Holder</h3>
+<h3 id="param-holder">@Param Holder</h3>
 
 <p>This holder indicates the data type that the function processes as input and determines the number of parameters that your function accepts within the query. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Param BigIntHolder input1;
 @Param BigIntHolder input2;
 </code></pre></div>
-<h3 id="@output-holder">@Output Holder</h3>
+<h3 id="output-holder">@Output Holder</h3>
 
 <p>This holder indicates the data type that the processing returns. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Output BigIntHolder out;
@@ -1355,7 +1355,7 @@ public static class MySecondMin implements DrillAggFunc {
 </code></pre></div>
 <p>The aggregate function interface includes holders where you indicate the data types that your function can process. This interface includes the @Param and @Output holders previously described and also includes the @Workspace holder. </p>
 
-<h3 id="@workspace-holder">@Workspace holder</h3>
+<h3 id="workspace-holder">@Workspace holder</h3>
 
 <p>This holder indicates the data type used to store intermediate data during processing. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Workspace BigIntHolder min;
diff --git a/docs/data-type-conversion/index.html b/docs/data-type-conversion/index.html
index a456207..c71b928 100644
--- a/docs/data-type-conversion/index.html
+++ b/docs/data-type-conversion/index.html
@@ -1874,7 +1874,7 @@ use in your Drill queries as described in this section:</p>
 </tr>
 </tbody></table>
 
-<h3 id="format-specifiers-for-date/time-conversions">Format Specifiers for Date/Time Conversions</h3>
+<h3 id="format-specifiers-for-date-time-conversions">Format Specifiers for Date/Time Conversions</h3>
 
 <p>Use the following Joda format specifiers for date/time conversions:</p>
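 <p>The specifier table itself is elided between hunks; as a quick hedged usage sketch with common Joda patterns:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">select to_date(&#39;2018-11-02&#39;, &#39;yyyy-MM-dd&#39;) from (values(1));
select to_timestamp(&#39;2018-11-02 13:18:22&#39;, &#39;yyyy-MM-dd HH:mm:ss&#39;) from (values(1));
 </code></pre></div>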
 
diff --git a/docs/date-time-and-timestamp/index.html b/docs/date-time-and-timestamp/index.html
index 9b0dd66..c1d02be 100644
--- a/docs/date-time-and-timestamp/index.html
+++ b/docs/date-time-and-timestamp/index.html
@@ -1387,7 +1387,7 @@ SELECT INTERVAL &#39;13&#39; month FROM (VALUES(1));
 +------------+
 1 row selected (0.076 seconds)
 </code></pre></div>
-<h2 id="date,-time,-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
+<h2 id="date-time-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
 
 <p>Drill supports DATE, TIME, and TIMESTAMP literals. Drill stores values in Coordinated Universal Time (UTC). Drill supports time functions in the range 1971 to 2037.</p>
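 <p>A short sketch of the three literal forms, using the same FROM (VALUES(1)) idiom as the examples on this page:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">select date &#39;2018-11-02&#39; as d,
       time &#39;13:18:22&#39; as t,
       timestamp &#39;2018-11-02 13:18:22&#39; as ts
from (values(1));
 </code></pre></div>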
 
diff --git a/docs/date-time-functions-and-arithmetic/index.html b/docs/date-time-functions-and-arithmetic/index.html
index a17ddf8..96c8968 100644
--- a/docs/date-time-functions-and-arithmetic/index.html
+++ b/docs/date-time-functions-and-arithmetic/index.html
@@ -1784,7 +1784,7 @@ SELECT NOW() FROM (VALUES(1));
 +------------+
 1 row selected (0.062 seconds)
 </code></pre></div>
-<h2 id="date,-time,-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
+<h2 id="date-time-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
 
 <p>Is the day returned from the NOW function the same as the day returned from the CURRENT_DATE function?</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT EXTRACT(day FROM NOW()) = EXTRACT(day FROM CURRENT_DATE) FROM (VALUES(1));
diff --git a/docs/drill-introduction/index.html b/docs/drill-introduction/index.html
index b84d8f4..ef68714 100644
--- a/docs/drill-introduction/index.html
+++ b/docs/drill-introduction/index.html
@@ -1285,7 +1285,7 @@ applications, while still providing the familiarity and ecosystem of ANSI SQL,
 the industry-standard query language. Drill provides plug-and-play integration
 with existing Apache Hive and Apache HBase deployments.  </p>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.14">What&#39;s New in Apache Drill 1.14</h2>
+<h2 id="whats-new-in-apache-drill-1-14">What&#39;s New in Apache Drill 1.14</h2>
 
 <ul>
 <li>Ability to <a href="/docs/running-drill-on-docker/">run Drill in a Docker container</a>. (<a href="https://issues.apache.org/jira/browse/DRILL-6346">DRILL-6346</a>)<br></li>
@@ -1310,7 +1310,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>Early release of <a href="/docs/lateral-join/">lateral join</a>. (<a href="https://issues.apache.org/jira/browse/DRILL-5999">DRILL-5999</a>)<br></li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.13">What&#39;s New in Apache Drill 1.13</h2>
+<h2 id="whats-new-in-apache-drill-1-13">What&#39;s New in Apache Drill 1.13</h2>
 
 <ul>
 <li>JDK 8 support. (<a href="https://issues.apache.org/jira/browse/DRILL-1491">DRILL-1491</a>)<br></li>
@@ -1333,7 +1333,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>User/Distribution-specific configuration checks during startup (<a href="https://issues.apache.org/jira/browse/DRILL-5741">DRILL-5741</a>).<br></li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.12">What&#39;s New in Apache Drill 1.12</h2>
+<h2 id="whats-new-in-apache-drill-1-12">What&#39;s New in Apache Drill 1.12</h2>
 
 <p>Drill 1.12 provides the following new features and improvements:  </p>
 
@@ -1356,7 +1356,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>The Drill Web Console lists successfully completed queries as &quot;successful&quot; (DRILL-5923)</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.11">What&#39;s New in Apache Drill 1.11</h2>
+<h2 id="whats-new-in-apache-drill-1-11">What&#39;s New in Apache Drill 1.11</h2>
 
 <p>Drill 1.11 provides the following new features and improvements:  </p>
 
@@ -1372,7 +1372,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>Support for ANSI_QUOTES. (DRILL-3510)<br></li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.10">What&#39;s New in Apache Drill 1.10</h2>
+<h2 id="whats-new-in-apache-drill-1-10">What&#39;s New in Apache Drill 1.10</h2>
 
 <p>Drill 1.10 provides the following new features and improvements:  </p>
 
@@ -1384,7 +1384,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>Support for Kerberos authentication between the client and drillbit.<br></li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.9">What&#39;s New in Apache Drill 1.9</h2>
+<h2 id="whats-new-in-apache-drill-1-9">What&#39;s New in Apache Drill 1.9</h2>
 
 <p>Drill 1.9 provides the following new features:  </p>
 
@@ -1395,7 +1395,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>HTTPD format plugin<br></li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.8">What&#39;s New in Apache Drill 1.8</h2>
+<h2 id="whats-new-in-apache-drill-1-8">What&#39;s New in Apache Drill 1.8</h2>
 
 <p>Drill 1.8 provides the following new features and changes: </p>
 
@@ -1408,7 +1408,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>Changes to the configuration and launch scripts - See <a href="/docs/apache-drill-1-8-0-release-notes/#configuration-and-launch-script-changes">Configuration and Launch Script Changes</a></li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.7">What&#39;s New in Apache Drill 1.7</h2>
+<h2 id="whats-new-in-apache-drill-1-7">What&#39;s New in Apache Drill 1.7</h2>
 
 <p>Drill 1.7 provides the following new features:  </p>
 
@@ -1418,7 +1418,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>HBase 1.x support<br></li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.6">What&#39;s New in Apache Drill 1.6</h2>
+<h2 id="whats-new-in-apache-drill-1-6">What&#39;s New in Apache Drill 1.6</h2>
 
 <p>Drill 1.6 provides the following new features:  </p>
 
@@ -1427,7 +1427,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>Additional custom window frames </li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.5">What&#39;s New in Apache Drill 1.5</h2>
+<h2 id="whats-new-in-apache-drill-1-5">What&#39;s New in Apache Drill 1.5</h2>
 
 <p>Drill 1.5 provides the following new features:  </p>
 
@@ -1438,7 +1438,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 <li>Configurable caching for Hive metadata</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.4">What&#39;s New in Apache Drill 1.4</h2>
+<h2 id="whats-new-in-apache-drill-1-4">What&#39;s New in Apache Drill 1.4</h2>
 
 <p>Drill 1.4 introduces the following improvements:</p>
 
@@ -1451,7 +1451,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 
 <p>Drill 1.4 fixes an error that occurred when you query a Hive table using the HBaseStorageHandler (<a href="https://issues.apache.org/jira/browse/DRILL-3739">DRILL-3739</a>). To successfully query a Hive table using the HBaseStorageHandler, you need to configure the Hive storage plugin as described in the <a href="/docs/hive-storage-plugin/#connect-drill-to-the-hive-remote-metastore">Hive storage plugin documentation</a>.</p>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.3">What&#39;s New in Apache Drill 1.3</h2>
+<h2 id="whats-new-in-apache-drill-1-3">What&#39;s New in Apache Drill 1.3</h2>
 
 <p>This release fixes issues and adds a number of enhancements, including the following:</p>
 
@@ -1464,7 +1464,7 @@ Support for columns that evolve from one data type to another over time. </li>
 <li>Enhancements related to querying Hive tables, MongoDB collections, and Avro files</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.2">What&#39;s New in Apache Drill 1.2</h2>
+<h2 id="whats-new-in-apache-drill-1-2">What&#39;s New in Apache Drill 1.2</h2>
 
 <p>This release of Drill fixes <a href="/docs/apache-drill-1-2-0-release-notes/">many issues</a> and introduces a number of enhancements, including the following ones:</p>
 
@@ -1497,7 +1497,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Improved LIMIT processing</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.1">What&#39;s New in Apache Drill 1.1</h2>
+<h2 id="whats-new-in-apache-drill-1-1">What&#39;s New in Apache Drill 1.1</h2>
 
 <p>Many enhancements in Apache Drill 1.1 include the following key features:</p>
 
@@ -1508,7 +1508,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Support for UNION and UNION ALL and better optimized plans that include UNION.</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.0">What&#39;s New in Apache Drill 1.0</h2>
+<h2 id="whats-new-in-apache-drill-1-0">What&#39;s New in Apache Drill 1.0</h2>
 
 <p>Apache Drill 1.0 offers the following new features:</p>
 
diff --git a/docs/drill-plan-syntax/index.html b/docs/drill-plan-syntax/index.html
index 12d896c..ce5d0e0 100644
--- a/docs/drill-plan-syntax/index.html
+++ b/docs/drill-plan-syntax/index.html
@@ -1280,7 +1280,7 @@
 
     <div class="int_text" align="left">
       
-        <h2 id="whats-the-plan?">Whats the plan?</h2>
+        <h2 id="whats-the-plan">Whats the plan?</h2>
 
 <p>This section is about the end-to-end plan flow for Drill. The incoming query
 to Drill can be a SQL 2003 query/DrQL or MongoQL. The query is converted to a
diff --git a/docs/drop-table/index.html b/docs/drop-table/index.html
index 1da3ccb..edc8717 100644
--- a/docs/drop-table/index.html
+++ b/docs/drop-table/index.html
@@ -1344,7 +1344,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
 
 <p>The following examples show results for several DROP TABLE scenarios.  </p>
 
-<h3 id="example-1:-identifying-a-schema">Example 1:  Identifying a schema</h3>
+<h3 id="example-1-identifying-a-schema">Example 1:  Identifying a schema</h3>
 
 <p>This example shows you how to identify a schema with the USE and DROP TABLE commands and successfully drop a table named <code>donuts_json</code> in the <code>&quot;donuts&quot;</code> workspace configured within the DFS storage plugin configuration.  </p>
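 <p>A minimal sketch of the pattern (assuming the <code>donuts</code> workspace is addressed as <code>dfs.donuts</code>):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">USE dfs.donuts;
 DROP TABLE donuts_json;
 </code></pre></div>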
 
@@ -1398,7 +1398,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
    Error: PARSE ERROR: Root schema is immutable. Creating or dropping tables/views is not allowed in root schema.Select a schema using &#39;USE schema&#39; command.
    [Error Id: 8c42cb6a-27eb-48fd-b42a-671a6fb58c14 on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-2:-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
+<h3 id="example-2-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
 
 <p>In the following example, the <code>donuts_json</code> table is removed from the <code>/tmp</code> workspace using the DROP TABLE command. This example assumes that the steps in the <a href="/docs/create-table-as-ctas/#complete-ctas-example">Complete CTAS Example</a> were already completed. </p>
 
@@ -1434,7 +1434,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
    +-------+------------------------------+
    1 row selected (0.107 seconds)  
 </code></pre></div>
-<h3 id="example-3:-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
+<h3 id="example-3-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
 
 <p>When you create a table that writes files to a directory, you can issue the <code>DROP TABLE</code> command against the table to remove the directory. All files and subdirectories are deleted. For example, the following CTAS command writes Parquet data from the <code>nation.parquet</code> file, installed with Drill, to the <code>/tmp/name_key</code> directory.  </p>
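 <p>A minimal sketch of the round trip (the exact CTAS from the example is abbreviated; <code>cp.`tpch/nation.parquet`</code> is the sample file that ships with Drill):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">USE dfs.tmp;
 CREATE TABLE name_key AS SELECT n_nationkey, n_name FROM cp.`tpch/nation.parquet`;
 -- DROP TABLE then removes the /tmp/name_key directory and every file in it
 DROP TABLE name_key;
 </code></pre></div>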
 
@@ -1491,7 +1491,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
    +-------+---------------------------+
    1 row selected (0.086 seconds)
 </code></pre></div>
-<h3 id="example-4:-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
+<h3 id="example-4-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
 
 <p>The following example shows the result of dropping a table that does not exist because it was either already dropped or never existed. </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use dfs.tmp;
@@ -1507,7 +1507,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
    Error: VALIDATION ERROR: Table [name_key] not found
    [Error Id: fc6bfe17-d009-421c-8063-d759d7ea2f4e on 10.250.56.218:31010] (state=,code=0)  
 </code></pre></div>
-<h3 id="example-5:-dropping-a-table-that-does-not-exist-using-the-if-exists-parameter">Example 5: Dropping a table that does not exist using the IF EXISTS parameter</h3>
+<h3 id="example-5-dropping-a-table-that-does-not-exist-using-the-if-exists-parameter">Example 5: Dropping a table that does not exist using the IF EXISTS parameter</h3>
 
 <p>The following example shows the result of dropping a table that does not exist (because it was already dropped or never existed) using the IF EXISTS parameter with the DROP TABLE command:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use dfs.tmp;
@@ -1526,7 +1526,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
    +-------+-----------------------------+
    1 row selected (0.083 seconds)  
 </code></pre></div>
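 <p>The guarded form used in Examples 5 and 6 reduces to a single statement:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">DROP TABLE IF EXISTS name_key;
 </code></pre></div>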
-<h3 id="example-6:-dropping-a-table-that-exists-using-the-if-exists-parameter">Example 6: Dropping a table that exists using the IF EXISTS parameter</h3>
+<h3 id="example-6-dropping-a-table-that-exists-using-the-if-exists-parameter">Example 6: Dropping a table that exists using the IF EXISTS parameter</h3>
 
 <p>The following example shows the result of dropping a table that exists using the IF EXISTS parameter with the DROP TABLE command.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use dfs.tmp;
@@ -1544,7 +1544,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
    | true  | Table &#39;name_key&#39; dropped  |
    +-------+---------------------------+  
 </code></pre></div>
-<h3 id="example-7:-dropping-a-table-without-permissions">Example 7: Dropping a table without permissions</h3>
+<h3 id="example-7-dropping-a-table-without-permissions">Example 7: Dropping a table without permissions</h3>
 
 <p>The following example shows the result of dropping a table without the appropriate permissions in the file system.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table name_key;
@@ -1552,7 +1552,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
    Error: PERMISSION ERROR: Unauthorized to drop table
    [Error Id: 36f6b51a-786d-4950-a4a7-44250f153c55 on 10.10.30.167:31010] (state=,code=0)  
 </code></pre></div>
-<h3 id="example-8:-dropping-and-querying-a-table-concurrently">Example 8: Dropping and querying a table concurrently</h3>
+<h3 id="example-8-dropping-and-querying-a-table-concurrently">Example 8: Dropping and querying a table concurrently</h3>
 
 <p>The result of this scenario depends on the time delta between one user dropping a table and another user issuing a query against it, and results can vary. In some instances the drop succeeds and the query fails completely; in others the query completes partially and then the table is dropped, returning an exception in the middle of the query results.</p>
 
@@ -1574,7 +1574,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
    Fragment 1:0
    [Error Id: 6e3c6a8d-8cfd-4033-90c4-61230af80573 on 10.10.30.167:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-9:-dropping-a-table-with-different-file-formats">Example 9: Dropping a table with different file formats</h3>
+<h3 id="example-9-dropping-a-table-with-different-file-formats">Example 9: Dropping a table with different file formats</h3>
 
 <p>The following example shows the result of dropping a table when multiple file formats exists in the directory. In this scenario, the <code>sales_dir</code> table resides in the <code>dfs.sales</code> workspace and contains Parquet, CSV, and JSON files.</p>
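 <p>A minimal sketch of the scenario (assuming <code>dfs.sales</code> is mutable):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">USE dfs.sales;
 DROP TABLE sales_dir;   -- removes the directory along with its Parquet, CSV, and JSON files
 </code></pre></div>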
 
diff --git a/docs/enabling-web-ui-security/index.html b/docs/enabling-web-ui-security/index.html
index 9f0c714..3000d81 100644
--- a/docs/enabling-web-ui-security/index.html
+++ b/docs/enabling-web-ui-security/index.html
@@ -1294,7 +1294,7 @@ Drill’s user authentication.  </p>
 <p>Restart the Drill-on-YARN Application Master. When you visit the web UI, a login page should
 appear, prompting you to log in. Only the above user and password are valid. Simple security is not highly secure, but it is useful for testing, prototypes, and the like.  </p>
 
-<h2 id="using-drill’s-user-authentication">Using Drill’s User Authentication</h2>
+<h2 id="using-drill-s-user-authentication">Using Drill’s User Authentication</h2>
 
 <p>Drill-on-YARN can use Drill’s authentication system. In this mode, the user name and password
 must match that of the user that started the Drill-on-YARN application. To enable Drill security:  </p>
diff --git a/docs/getting-to-know-the-drill-sandbox/index.html b/docs/getting-to-know-the-drill-sandbox/index.html
index 0595ec8..d947f6e 100644
--- a/docs/getting-to-know-the-drill-sandbox/index.html
+++ b/docs/getting-to-know-the-drill-sandbox/index.html
@@ -1390,7 +1390,7 @@ URI. Metadata for Hive tables is automatically available for users to query.</p>
 </code></pre></div>
 <p>Do not use this storage plugin configuration outside the sandbox. Use either the <a href="/docs/hive-storage-plugin/">remote or embedded metastore configuration</a>.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="whats-next">What&#39;s Next</h2>
 
 <p>Start running queries by going to <a href="/docs/lesson-1-learn-about-the-data-set">Lesson 1: Learn About the Data
 Set</a>.</p>
diff --git a/docs/hive-storage-plugin/index.html b/docs/hive-storage-plugin/index.html
index 1bffddb..f69f821 100644
--- a/docs/hive-storage-plugin/index.html
+++ b/docs/hive-storage-plugin/index.html
@@ -1293,7 +1293,7 @@
 </code></pre></div>
 <p>You could query the Hive external table named <code>my_tbl</code>, and Drill would return results that included the data from the <code>data.txt</code> file.  </p>
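 <p>For example, a sketch of such a query (assuming the plugin is registered under the default <code>hive</code> name):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM hive.my_tbl LIMIT 10;
 </code></pre></div>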
 
-<h3 id="guidelines-for-using-the-store.hive.conf.properties-option">Guidelines for Using the <code>store.hive.conf.properties</code> Option</h3>
+<h3 id="guidelines-for-using-the-store-hive-conf-properties-option">Guidelines for Using the <code>store.hive.conf.properties</code> Option</h3>
 
 <p>When you set Hive properties at the session level, follow these guidelines:  </p>
 
diff --git a/docs/how-to-partition-data/index.html b/docs/how-to-partition-data/index.html
index d4bc7f0..df2fd01 100644
--- a/docs/how-to-partition-data/index.html
+++ b/docs/how-to-partition-data/index.html
@@ -1288,7 +1288,7 @@
 
 <p>Unlike using the Drill 1.0 partitioning, no view query is subsequently required, nor is it necessary to use the <a href="/docs/querying-directories">dir* variables</a> after you use the PARTITION BY clause in a CTAS statement. </p>
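 <p>A minimal sketch of the later-version syntax (the target table name is illustrative; <code>cp.`tpch/nation.parquet`</code> ships with Drill):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">CREATE TABLE dfs.tmp.nation_by_region PARTITION BY (n_regionkey) AS
 SELECT n_regionkey, n_nationkey, n_name FROM cp.`tpch/nation.parquet`;
 </code></pre></div>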
 
-<h2 id="drill-1.0-partitioning">Drill 1.0 Partitioning</h2>
+<h2 id="drill-1-0-partitioning">Drill 1.0 Partitioning</h2>
 
 <p>Drill 1.0 does not support the PARTITION BY clause of the CTAS command supported by later versions. Partitioning Drill 1.0-generated data involves performing the following steps.   </p>
 
@@ -1300,7 +1300,7 @@
 
 <p>After partitioning the data, you need to create a view of the partitioned data to query the data. You can use the <a href="/docs/querying-directories">dir* variables</a> in queries to refer to subdirectories in your workspace path.</p>
 
-<h3 id="drill-1.0-partitioning-example">Drill 1.0 Partitioning Example</h3>
+<h3 id="drill-1-0-partitioning-example">Drill 1.0 Partitioning Example</h3>
 
 <p>Suppose you have text files containing several years of log data. To partition the data by year and quarter, create the following hierarchy of directories:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   …/logs/1994/Q1  
diff --git a/docs/image-metadata-format-plugin/index.html b/docs/image-metadata-format-plugin/index.html
index 2d842be..294f9eb 100644
--- a/docs/image-metadata-format-plugin/index.html
+++ b/docs/image-metadata-format-plugin/index.html
@@ -4751,7 +4751,7 @@ Camera Raw: ARW (Sony), CRW/CR2 (Canon), NEF (Nikon), ORF (Olympus), RAF (FujiFi
 <tr>
 <td>IPTC.FileFormat</td>
 <td>VARCHAR</td>
-<td>File format:<br><code>00</code> (No ObjectData)<br><code>01</code> (IPTC-NAA Digital Newsphoto Parameter Record)<br><code>02</code> (IPTC7901 Recommended Message Format)<br><code>03</code> (Tagged Image File Format (Adobe/Aldus Image data))<br><code>04</code> (Illustrator (Adobe Graphics data))<br><code>05</code> (AppleSingle (Apple Computer Inc))<br><code>06</code> (NAA 89-3 (ANPA 1312))<br><code>07</code> (MacBinary II)<br><code>08</code> (IPTC Unstructured Character Oriented File  [...]
+<td>File format:<br><code>00</code> (No ObjectData)<br><code>01</code> (IPTC-NAA Digital Newsphoto Parameter Record)<br><code>02</code> (IPTC7901 Recommended Message Format)<br><code>03</code> (Tagged Image File Format (Adobe/Aldus Image data))<br><code>04</code> (Illustrator (Adobe Graphics data))<br><code>05</code> (AppleSingle (Apple Computer Inc))<br><code>06</code> (NAA 89-3 (ANPA 1312))<br><code>07</code> (MacBinary II)<br><code>08</code> (IPTC Unstructured Character Oriented File  [...]
 </tr>
 <tr>
 <td>IPTC.FileVersion</td>
@@ -4976,7 +4976,7 @@ Camera Raw: ARW (Sony), CRW/CR2 (Canon), NEF (Nikon), ORF (Olympus), RAF (FujiFi
 <tr>
 <td>IPTC.ObjectDataPreviewFileFormat</td>
 <td>VARCHAR</td>
-<td>File format of the ObjectData Preview:<br><code>00</code> (No ObjectData)<br><code>01</code> (IPTC-NAA Digital Newsphoto Parameter Record)<br><code>02</code> (IPTC7901 Recommended Message Format)<br><code>03</code> (Tagged Image File Format (Adobe/Aldus Image data))<br><code>04</code> (Illustrator (Adobe Graphics data))<br><code>05</code> (AppleSingle (Apple Computer Inc))<br><code>06</code> (NAA 89-3 (ANPA 1312))<br><code>07</code> (MacBinary II)<br><code>08</code> (IPTC Unstructure [...]
+<td>File format of the ObjectData Preview:<br><code>00</code> (No ObjectData)<br><code>01</code> (IPTC-NAA Digital Newsphoto Parameter Record)<br><code>02</code> (IPTC7901 Recommended Message Format)<br><code>03</code> (Tagged Image File Format (Adobe/Aldus Image data))<br><code>04</code> (Illustrator (Adobe Graphics data))<br><code>05</code> (AppleSingle (Apple Computer Inc))<br><code>06</code> (NAA 89-3 (ANPA 1312))<br><code>07</code> (MacBinary II)<br><code>08</code> (IPTC Unstructure [...]
 </tr>
 <tr>
 <td>Photoshop.ResolutionInfo</td>
diff --git a/docs/installing-the-apache-drill-sandbox/index.html b/docs/installing-the-apache-drill-sandbox/index.html
index 1ce5736..69f28fa 100644
--- a/docs/installing-the-apache-drill-sandbox/index.html
+++ b/docs/installing-the-apache-drill-sandbox/index.html
@@ -1315,7 +1315,7 @@ instructions:</p>
 <li>To install VirtualBox, see the <a href="http://dlc.sun.com.edgesuite.net/virtualbox/4.3.4/UserManual.pdf">Oracle VM VirtualBox User Manual</a>. By downloading VirtualBox, you agree to the terms and conditions of the respective license.</li>
 </ul>
 
-<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player/vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
+<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player-vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
 
 <p>Complete the following steps to install the MapR Sandbox with Apache Drill on
 VMware Player or VMware Fusion:</p>
@@ -1357,7 +1357,7 @@ The Import Virtual Machine dialog appears.</p></li>
 <li>Alternatively, access the command line on the VM: Press Alt+F2 on Windows or Option+F5 on Mac.<br></li>
 </ul>
 
-<h3 id="what&#39;s-next">What&#39;s Next</h3>
+<h3 id="whats-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill
@@ -1407,7 +1407,7 @@ VirtualBox:</p>
 </ul></li>
 </ol>
 
-<h3 id="what&#39;s-next">What&#39;s Next</h3>
+<h3 id="whats-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill Sandbox</a>.</p>
diff --git a/docs/installing-the-driver-on-linux/index.html b/docs/installing-the-driver-on-linux/index.html
index 678f6b1..4e1ca4a 100644
--- a/docs/installing-the-driver-on-linux/index.html
+++ b/docs/installing-the-driver-on-linux/index.html
@@ -1319,11 +1319,11 @@ application that you use to access Drill. The 64-bit editions of Linux support
 
 <p>To install the driver, you need Administrator privileges on the computer. </p>
 
-<h2 id="step-1:-download-the-drill-odbc-driver">Step 1: Download the Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-drill-odbc-driver">Step 1: Download the Drill ODBC Driver</h2>
 
 <p>Download the driver from the <a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/">download site</a>. The current version is 1.3.8.</p>
 
-<h2 id="step-2:-install-the-drill-odbc-driver">Step 2: Install the Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-drill-odbc-driver">Step 2: Install the Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1370,7 +1370,7 @@ locations and descriptions:</p>
 </tr>
 </tbody></table>
 
-<h2 id="step-3:-check-the-drill-odbc-driver-version">Step 3: Check the Drill ODBC Driver Version</h2>
+<h2 id="step-3-check-the-drill-odbc-driver-version">Step 3: Check the Drill ODBC Driver Version</h2>
 
 <p>To check the version of the driver you installed, use the following case-sensitive command on the terminal command line:</p>
 
diff --git a/docs/installing-the-driver-on-mac-os-x/index.html b/docs/installing-the-driver-on-mac-os-x/index.html
index a32c85d..b1de4d9 100644
--- a/docs/installing-the-driver-on-mac-os-x/index.html
+++ b/docs/installing-the-driver-on-mac-os-x/index.html
@@ -1305,12 +1305,12 @@ The iodbc-config file in the <code>/usr/local/iODBC/bin</code> includes the vers
 <p>Example: <code>127.0.0.1 localhost</code></p></li>
 </ul>
 
-<h2 id="step-1:-download-the-drill-odbc-driver">Step 1: Download the Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-drill-odbc-driver">Step 1: Download the Drill ODBC Driver</h2>
 
 <p>To download ODBC drivers that support both 32- and 64-bit client applications, click 
 <a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/">Drill ODBC Driver for Mac</a>.</p>
 
-<h2 id="step-2:-install-the-drill-odbc-driver">Step 2: Install the Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-drill-odbc-driver">Step 2: Install the Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1329,7 +1329,7 @@ The iodbc-config file in the <code>/usr/local/iODBC/bin</code> includes the vers
 <li><code>/Library/mapr/drill/lib</code> – Binaries directory</li>
 </ul>
 
-<h2 id="step-3:-check-the-drill-odbc-driver-version">Step 3: Check the Drill ODBC Driver Version</h2>
+<h2 id="step-3-check-the-drill-odbc-driver-version">Step 3: Check the Drill ODBC Driver Version</h2>
 
 <p>To check the version of the driver you installed, use the following command on the terminal command line:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">$ pkgutil --pkg-info mapr.drillodbc  
diff --git a/docs/installing-the-driver-on-windows/index.html b/docs/installing-the-driver-on-windows/index.html
index 92e4c09..bfe4527 100644
--- a/docs/installing-the-driver-on-windows/index.html
+++ b/docs/installing-the-driver-on-windows/index.html
@@ -1334,7 +1334,7 @@ requirements:</p>
 
 <hr>
 
-<h2 id="step-1:-download-the-drill-odbc-driver">Step 1: Download the Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-drill-odbc-driver">Step 1: Download the Drill ODBC Driver</h2>
 
 <p>Download the installer that corresponds to the bitness of the client application from which you want to create an ODBC connection. The current version is 1.3.8.</p>
 
@@ -1345,7 +1345,7 @@ requirements:</p>
 
 <hr>
 
-<h2 id="step-2:-install-the-drill-odbc-driver">Step 2: Install the Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-drill-odbc-driver">Step 2: Install the Drill ODBC Driver</h2>
 
 <ol>
 <li>Double-click the installer from the location where you downloaded it.</li>
@@ -1358,7 +1358,7 @@ requirements:</p>
 
 <hr>
 
-<h2 id="step-3:-verify-the-installation">Step 3: Verify the Installation</h2>
+<h2 id="step-3-verify-the-installation">Step 3: Verify the Installation</h2>
 
 <p>To verify the installation on Windows 10, perform the following steps:</p>
 
diff --git a/docs/json-data-model/index.html b/docs/json-data-model/index.html
index fcafee8..c59f273 100644
--- a/docs/json-data-model/index.html
+++ b/docs/json-data-model/index.html
@@ -1373,7 +1373,7 @@ Reads all data from JSON files as VARCHAR. You need to cast numbers from VARCHAR
 
 <p>Drill uses these types internally for reading complex and nested data structures from data sources such as JSON.</p>
 
-<h3 id="experimental-feature:-heterogeneous-types">Experimental Feature: Heterogeneous types</h3>
+<h3 id="experimental-feature-heterogeneous-types">Experimental Feature: Heterogeneous types</h3>
 
 <p>The Union type allows storing different types in the same field. This new feature is still considered experimental and must be explicitly enabled by setting the <code>exec.enable_union_type</code> option to true.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `exec.enable_union_type` = true;
@@ -1469,11 +1469,11 @@ y[z].x because these references are not ambiguous. Observe the following guideli
 <li>Generate key/value pairs for loosely structured data</li>
 </ul>
 
-<h2 id="example:-flatten-and-generate-key-values-for-complex-json">Example: Flatten and Generate Key Values for Complex JSON</h2>
+<h2 id="example-flatten-and-generate-key-values-for-complex-json">Example: Flatten and Generate Key Values for Complex JSON</h2>
 
 <p>This example uses the following data that represents unit sales of tickets to events that were sold over a period of several days in December:</p>
 
-<h3 id="ticket_sales.json-contents">ticket_sales.json Contents</h3>
+<h3 id="ticket_sales-json-contents">ticket_sales.json Contents</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   &quot;type&quot;: &quot;ticket&quot;,
   &quot;venue&quot;: 123455,
@@ -1504,7 +1504,7 @@ y[z].x because these references are not ambiguous. Observe the following guideli
 +---------+---------+---------------------------------------------------------------+
 2 rows selected (1.343 seconds)
 </code></pre></div>
-<h3 id="generate-key/value-pairs">Generate Key/Value Pairs</h3>
+<h3 id="generate-key-value-pairs">Generate Key/Value Pairs</h3>
 
 <p>Continuing with the data from <a href="/docs/json-data-model/#example:-flatten-and-generate-key-values-for-complex-json">previous example</a>, use the KVGEN (Key Value Generator) function to generate key/value pairs from complex data. Generating key/value pairs is often helpful when working with data that contains arbitrary maps consisting of dynamic and unknown element names, such as the ticket sales data in this example. For example purposes, take a look at how kvgen breaks the sale [...]
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT KVGEN(tkt.sales) AS `key dates:tickets sold` FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
@@ -1538,7 +1538,7 @@ FROM dfs.`/Users/drilluser/drill/ticket_sales.json`;
 +--------------------------------+
 8 rows selected (0.171 seconds)
 </code></pre></div>
-<h3 id="example:-aggregate-loosely-structured-data">Example: Aggregate Loosely Structured Data</h3>
+<h3 id="example-aggregate-loosely-structured-data">Example: Aggregate Loosely Structured Data</h3>
 
 <p>Use flatten and kvgen together to aggregate the data from the <a href="/docs/json-data-model/#example:-flatten-and-generate-key-values-for-complex-json">previous example</a>. Make sure all text mode is set to false to sum numbers. Drill returns an error if you attempt to sum data in all text mode.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `store.json.all_text_mode` = false;
@@ -1553,7 +1553,7 @@ FROM dfs.`/Users/drilluser/drill/ticket_sales.json`;
 +--------------+
 1 row selected (0.244 seconds)
 </code></pre></div>
-<h3 id="example:-aggregate-and-sort-data">Example: Aggregate and Sort Data</h3>
+<h3 id="example-aggregate-and-sort-data">Example: Aggregate and Sort Data</h3>
 
 <p>Sum and group the ticket sales by date and sort in ascending order of total tickets sold.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT `right`(tkt.tot_sales.key,2) `December Date`,
@@ -1574,7 +1574,7 @@ ORDER BY TotalSales;
 +----------------+-------------+
 5 rows selected (0.252 seconds)
 </code></pre></div>
-<h3 id="example:-access-a-map-field-in-an-array">Example: Access a Map Field in an Array</h3>
+<h3 id="example-access-a-map-field-in-an-array">Example: Access a Map Field in an Array</h3>
 
 <p>To access a map field in an array, use dot notation to drill down through the hierarchy of the JSON data to the field. Examples are based on the following <a href="https://github.com/zemirco/sf-city-lots-json">City Lots San Francisco in .json</a>.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
@@ -1638,7 +1638,7 @@ FROM dfs.`/Users/drilluser/citylots.json`;
 
 <p>More examples of drilling down into an array are shown in <a href="/docs/selecting-nested-data-for-a-column">&quot;Selecting Nested Data for a Column&quot;</a>.</p>
 
-<h3 id="example:-flatten-an-array-of-maps-using-a-subquery">Example: Flatten an Array of Maps using a Subquery</h3>
+<h3 id="example-flatten-an-array-of-maps-using-a-subquery">Example: Flatten an Array of Maps using a Subquery</h3>
 
 <p>By flattening the following JSON file, which contains an array of maps, you can evaluate the records of the flattened data.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{&quot;name&quot;:&quot;classic&quot;,&quot;fillings&quot;:[ {&quot;name&quot;:&quot;sugar&quot;,&quot;cal&quot;:500} , {&quot;name&quot;:&quot;flour&quot;,&quot;cal&quot;:300} ] }
@@ -1654,7 +1654,7 @@ SELECT flat.fill FROM (SELECT FLATTEN(t.fillings) AS fill FROM dfs.flatten.`test
 </code></pre></div>
 <p>Use a table alias for column fields and functions when working with complex data sets. Currently, you must use a subquery when operating on a flattened column. Eliminating the subquery and table alias in the WHERE clause, for example <code>flat.fillings[0].cal &gt; 300</code>, does not evaluate all records of the flattened data against the predicate and produces the wrong results.</p>
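 <p>A sketch of the required pattern (a hypothetical <code>test.json</code> in the <code>dfs.flatten</code> workspace, per the example above):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">-- evaluates the predicate against every flattened record
 SELECT flat.fill
 FROM (SELECT FLATTEN(t.fillings) AS fill FROM dfs.flatten.`test.json` t) flat
 WHERE flat.fill.cal &gt; 300;
 </code></pre></div>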
 
-<h3 id="example:-access-map-fields-in-a-map">Example: Access Map Fields in a Map</h3>
+<h3 id="example-access-map-fields-in-a-map">Example: Access Map Fields in a Map</h3>
 
 <p>This example uses a WHERE clause to drill down to a third level of the following JSON hierarchy to get the max_hdl greater than 160:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
diff --git a/docs/kafka-storage-plugin/index.html b/docs/kafka-storage-plugin/index.html
index 1778d6e..2eac922 100644
--- a/docs/kafka-storage-plugin/index.html
+++ b/docs/kafka-storage-plugin/index.html
@@ -1395,7 +1395,7 @@ The kafkaMsgTimestamp field maps to the timestamp stored for each Kafka message.
      &quot;enabled&quot;: true
    }  
 </code></pre></div>
-<h2 id="system|session-options">System|Session Options</h2>
+<h2 id="system-session-options">System|Session Options</h2>
 
 <p>You can modify the following options in Drill at the system or session level using the <a href="/docs/alter-system/">ALTER SYSTEM</a>|<a href="/docs/set/">SESSION SET</a> commands:  </p>
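 <p>For example, a session-level change might look like the following sketch; <code>store.kafka.poll.timeout</code> (the consumer poll timeout in milliseconds) is assumed here to be one of the listed options:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `store.kafka.poll.timeout` = 200;
 </code></pre></div>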
 
diff --git a/docs/kvgen/index.html b/docs/kvgen/index.html
index 70abbce..9bb7c49 100644
--- a/docs/kvgen/index.html
+++ b/docs/kvgen/index.html
@@ -1369,7 +1369,7 @@ array down into multiple distinct rows and further query those rows.</p>
 {&quot;key&quot;: &quot;c&quot;, &quot;value&quot;: &quot;valC&quot;}
 {&quot;key&quot;: &quot;d&quot;, &quot;value&quot;: &quot;valD&quot;}
 </code></pre></div>
-<h2 id="example:-different-data-type-values">Example: Different Data Type Values</h2>
+<h2 id="example-different-data-type-values">Example: Different Data Type Values</h2>
 
 <p>Assume that a JSON file called <code>kvgendata.json</code> includes multiple records that
 look like this one:</p>
diff --git a/docs/lateral-join/index.html b/docs/lateral-join/index.html
index 9b8678b..61e9f8f 100644
--- a/docs/lateral-join/index.html
+++ b/docs/lateral-join/index.html
@@ -1538,7 +1538,7 @@ tableReference:
 </code></pre></div>
 <p>Note that the FROM clause in the subquery references the ‘orders’ array from the table alias ‘c’, which is the outer table. The reference to an outer table within the subquery makes this look like a correlated subquery. However, there is an important difference: a correlated subquery appears in the WHERE clause, whereas here the rows from the un-nested array are exposed as a ‘sub-table’ so that relevant filtering, aggregation, and so on can be performed on them.  </p>
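 <p>A sketch of the shape such a query takes (the <code>customers</code> table and its <code>orders</code> array, along with the column names, are the hypothetical ones discussed above):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT c.c_name, o.o_amount
 FROM customers c,
      LATERAL (SELECT t.ord.o_amount AS o_amount
               FROM UNNEST(c.orders) t(ord)) o;
 </code></pre></div>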
 
-<h3 id="example-queries-with-lateral,-unnest,-and-aliases">Example Queries with LATERAL, UNNEST, and Aliases</h3>
+<h3 id="example-queries-with-lateral-unnest-and-aliases">Example Queries with LATERAL, UNNEST, and Aliases</h3>
 
 <p>The following query examples demonstrate the use of the LATERAL keyword and UNNEST relational operator with aliases.  </p>
 
diff --git a/docs/lesson-1-learn-about-the-data-set/index.html b/docs/lesson-1-learn-about-the-data-set/index.html
index 2467be0..9640878 100644
--- a/docs/lesson-1-learn-about-the-data-set/index.html
+++ b/docs/lesson-1-learn-about-the-data-set/index.html
@@ -1326,7 +1326,7 @@ the Drill shell, type:</p>
 +-------+--------------------------------------------+
 1 row selected 
 </code></pre></div>
-<h3 id="list-the-available-workspaces-and-databases:">List the available workspaces and databases:</h3>
+<h3 id="list-the-available-workspaces-and-databases">List the available workspaces and databases:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; show databases;
 +---------------------+
 |     SCHEMA_NAME     |
@@ -1356,7 +1356,7 @@ different database schemas (namespaces) in a relational database system.</p>
 This is a Hive external table pointing to the data stored in flat files on the
 MapR file system. The orders table contains 122,000 rows.</p>
 
-<h3 id="set-the-schema-to-hive:">Set the schema to hive:</h3>
+<h3 id="set-the-schema-to-hive">Set the schema to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1368,7 +1368,7 @@ MapR file system. The orders table contains 122,000 rows.</p>
 <p>You will run the USE command throughout this tutorial. The USE command sets
 the schema for the current session.</p>
 
-<h3 id="describe-the-table:">Describe the table:</h3>
+<h3 id="describe-the-table">Describe the table:</h3>
 
 <p>You can use the DESCRIBE command to show the columns and data types for a Hive
 table:</p>
@@ -1387,7 +1387,7 @@ table:</p>
 <p>The DESCRIBE command returns complete schema information for Hive tables based
 on the metadata available in the Hive metastore.</p>
 
-<h3 id="select-5-rows-from-the-orders-table:">Select 5 rows from the orders table:</h3>
+<h3 id="select-5-rows-from-the-orders-table">Select 5 rows from the orders table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from orders limit 5;
 +------------+------------+------------+------------+------------+-------------+
 |  order_id  |   month    |  cust_id   |   state    |  prod_id   | order_total |
@@ -1445,7 +1445,7 @@ columns typical of a time-series database.</p>
 
 <p>The customers table contains 993 rows.</p>
 
-<h3 id="set-the-workspace-to-maprdb:">Set the workspace to maprdb:</h3>
+<h3 id="set-the-workspace-to-maprdb">Set the workspace to maprdb:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">use maprdb;
 +-------+-------------------------------------+
 |  ok   |               summary               |
@@ -1454,7 +1454,7 @@ columns typical of a time-series database.</p>
 +-------+-------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="describe-the-tables:">Describe the tables:</h3>
+<h3 id="describe-the-tables">Describe the tables:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; describe customers;
 +--------------+------------------------+--------------+
 | COLUMN_NAME  |       DATA_TYPE        | IS_NULLABLE  |
@@ -1487,7 +1487,7 @@ structure, and “ANY” represents the fact that the column value can be of any
 data type. Observe the row_key, which is also simply bytes and has the type
 ANY.</p>
 
-<h3 id="select-5-rows-from-the-products-table:">Select 5 rows from the products table:</h3>
+<h3 id="select-5-rows-from-the-products-table">Select 5 rows from the products table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from products limit 5;
 +--------------+----------------------------------------------------------------------------------------------------------------+-------------------+
 |   row_key    |                                                    details                                                     |      pricing      |
@@ -1507,7 +1507,7 @@ and pricing) have the map data type and appear as JSON strings.</p>
 
 <p>In Lesson 2, you will use CAST functions to return typed data for each column.</p>
 
-<h3 id="select-5-rows-from-the-customers-table:">Select 5 rows from the customers table:</h3>
+<h3 id="select-5-rows-from-the-customers-table">Select 5 rows from the customers table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">+0: jdbc:drill:&gt; select * from customers limit 5;
 +--------------+-----------------------+-------------------------------------------------+---------------------------------------------------------------------------------------+
 |   row_key    |        address        |                     loyalty                     |                                       personal                                        |
@@ -1551,7 +1551,7 @@ setup beyond the definition of a workspace.</p>
 
 <h3 id="query-nested-clickstream-data">Query nested clickstream data</h3>
 
-<h3 id="set-the-workspace-to-dfs.clicks:">Set the workspace to dfs.clicks:</h3>
+<h3 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1572,7 +1572,7 @@ location specified in the workspace. For example:</p>
 relative to this path. The clicks directory referred to in the following query
 is directly below the nested directory.</p>
 
-<h3 id="select-2-rows-from-the-clicks.json-file:">Select 2 rows from the clicks.json file:</h3>
+<h3 id="select-2-rows-from-the-clicks-json-file">Select 2 rows from the clicks.json file:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from `clicks/clicks.json` limit 2;
 +-----------+-------------+-----------+---------------------------------------------------+-------------------------------------------+
 | trans_id  |    date     |   time    |                     user_info                     |                trans_info                 |
@@ -1590,7 +1590,7 @@ to refer to a file in a local or distributed file system.</p>
 path. This is necessary whenever the file path contains Drill reserved words
 or characters.</p>
 
-<h3 id="select-2-rows-from-the-campaign.json-file:">Select 2 rows from the campaign.json file:</h3>
+<h3 id="select-2-rows-from-the-campaign-json-file">Select 2 rows from the campaign.json file:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from `clicks/clicks.campaign.json` limit 2;
 +-----------+-------------+-----------+---------------------------------------------------+---------------------+----------------------------------------+
 | trans_id  |    date     |   time    |                     user_info                     |       ad_info       |               trans_info               |
@@ -1624,7 +1624,7 @@ for that month. The total number of records in all log files is 48000.</p>
 are many of these files, but you can use Drill to query them all as a single
 data source, or to query a subset of the files.</p>
 
-<h3 id="set-the-workspace-to-dfs.logs:">Set the workspace to dfs.logs:</h3>
+<h3 id="set-the-workspace-to-dfs-logs">Set the workspace to dfs.logs:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.logs;
 +-------+---------------------------------------+
 |  ok   |                summary                |
@@ -1633,7 +1633,7 @@ data source, or to query a subset of the files.</p>
 +-------+---------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="select-2-rows-from-the-logs-directory:">Select 2 rows from the logs directory:</h3>
+<h3 id="select-2-rows-from-the-logs-directory">Select 2 rows from the logs directory:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from logs limit 2;
 +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
 | dir0  | dir1  | trans_id  |    date     |   time    | cust_id  | device  | state  | camp_id  | keywords  | prod_id  | purch_flag  |
@@ -1652,7 +1652,7 @@ directory path on the file system.</p>
 subdirectories below the logs directory. In Lesson 3, you will do more complex
 queries that leverage these dynamic variables.</p>
 
-<h3 id="find-the-total-number-of-rows-in-the-logs-directory-(all-files):">Find the total number of rows in the logs directory (all files):</h3>
+<h3 id="find-the-total-number-of-rows-in-the-logs-directory-all-files">Find the total number of rows in the logs directory (all files):</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select count(*) from logs;
 +---------+
 | EXPR$0  |
@@ -1664,7 +1664,7 @@ queries that leverage these dynamic variables.</p>
 <p>This query traverses all of the files in the logs directory and its
 subdirectories to return the total number of rows in those files.</p>
 
-<h1 id="what&#39;s-next">What&#39;s Next</h1>
+<h1 id="whats-next">What&#39;s Next</h1>
 
 <p>Go to <a href="/docs/lesson-2-run-queries-with-ansi-sql">Lesson 2: Run Queries with ANSI
 SQL</a>.</p>
diff --git a/docs/lesson-2-run-queries-with-ansi-sql/index.html b/docs/lesson-2-run-queries-with-ansi-sql/index.html
index e1e6fb3..1ec1ece 100644
--- a/docs/lesson-2-run-queries-with-ansi-sql/index.html
+++ b/docs/lesson-2-run-queries-with-ansi-sql/index.html
@@ -1304,7 +1304,7 @@ statement.</p>
 
 <h2 id="aggregation">Aggregation</h2>
 
-<h3 id="set-the-schema-to-hive:">Set the schema to hive:</h3>
+<h3 id="set-the-schema-to-hive">Set the schema to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1313,7 +1313,7 @@ statement.</p>
 +-------+-------------------------------------------+
 1 row selected 
 </code></pre></div>
-<h3 id="return-sales-totals-by-month:">Return sales totals by month:</h3>
+<h3 id="return-sales-totals-by-month">Return sales totals by month:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select `month`, sum(order_total)
 from orders group by `month` order by 2 desc;
 +------------+---------+
@@ -1339,7 +1339,7 @@ database queries.</p>
 <p>Note that back ticks are required for the “month” column only because “month”
 is a reserved word in SQL.</p>
 
-<h3 id="return-the-top-20-sales-totals-by-month-and-state:">Return the top 20 sales totals by month and state:</h3>
+<h3 id="return-the-top-20-sales-totals-by-month-and-state">Return the top 20 sales totals by month and state:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select `month`, state, sum(order_total) as sales from orders group by `month`, state
 order by 3 desc limit 20;
 +-----------+--------+---------+
@@ -1375,7 +1375,7 @@ aliases and table aliases.</p>
 
 <p>This query uses the HAVING clause to constrain an aggregate result.</p>
 
-<h3 id="set-the-workspace-to-dfs.clicks">Set the workspace to dfs.clicks</h3>
+<h3 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1384,7 +1384,7 @@ aliases and table aliases.</p>
 +-------+-----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="return-total-number-of-clicks-for-devices-that-indicate-high-click-throughs:">Return total number of clicks for devices that indicate high click-throughs:</h3>
+<h3 id="return-total-number-of-clicks-for-devices-that-indicate-high-click-throughs">Return total number of clicks for devices that indicate high click-throughs:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.user_info.device, count(*) from `clicks/clicks.json` t 
 group by t.user_info.device
 having count(*) &gt; 1000;
@@ -1427,7 +1427,7 @@ duplicate rows from those files): <code>clicks.campaign.json</code> and <code>cl
 
 <h2 id="subqueries">Subqueries</h2>
 
-<h3 id="set-the-workspace-to-hive:">Set the workspace to hive:</h3>
+<h3 id="set-the-workspace-to-hive">Set the workspace to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1436,7 +1436,7 @@ duplicate rows from those files): <code>clicks.campaign.json</code> and <code>cl
 +-------+-------------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="compare-order-totals-across-states:">Compare order totals across states:</h3>
+<h3 id="compare-order-totals-across-states">Compare order totals across states:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select ny_sales.cust_id, ny_sales.total_orders, ca_sales.total_orders
 from
 (select o.cust_id, sum(o.order_total) as total_orders from hive.orders o where state = &#39;ny&#39; group by o.cust_id) ny_sales
@@ -1474,7 +1474,7 @@ limit 20;
 
 <h2 id="cast-function">CAST Function</h2>
 
-<h3 id="use-the-maprdb-workspace:">Use the maprdb workspace:</h3>
+<h3 id="use-the-maprdb-workspace">Use the maprdb workspace:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use maprdb;
 +-------+-------------------------------------+
 |  ok   |               summary               |
@@ -1507,7 +1507,7 @@ from customers t limit 5;
 <li>The table alias t is required; otherwise the column family names would be parsed as table names and the query would return an error.</li>
 </ul>
 
-<h3 id="remove-the-quotes-from-the-strings:">Remove the quotes from the strings:</h3>
+<h3 id="remove-the-quotes-from-the-strings">Remove the quotes from the strings:</h3>
 
 <p>You can use the regexp_replace function to remove the quotes around the
 strings in the query results. For example, to return a state name va instead
@@ -1530,7 +1530,7 @@ from customers t limit 1;
 +-------+----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="use-a-mutable-workspace:">Use a mutable workspace:</h3>
+<h3 id="use-a-mutable-workspace">Use a mutable workspace:</h3>
 
 <p>A mutable (or writable) workspace is a workspace that is enabled for “write”
 operations. This attribute is part of the storage plugin configuration. You
@@ -1569,7 +1569,7 @@ statement.</p>
 defined in data sources such as Hive, HBase, and the file system. Drill also
 supports the creation of metadata in the file system.</p>
 
-<h3 id="query-data-from-the-view:">Query data from the view:</h3>
+<h3 id="query-data-from-the-view">Query data from the view:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from custview limit 1;
 +----------+-------------------+-----------+----------+--------+----------+-------------+
 | cust_id  |       name        |  gender   |   age    | state  | agg_rev  | membership  |
@@ -1584,7 +1584,7 @@ supports the creation of metadata in the file system.</p>
 
 <p>Continue using <code>dfs.views</code> for this query.</p>
 
-<h3 id="join-the-customers-view-and-the-orders-table:">Join the customers view and the orders table:</h3>
+<h3 id="join-the-customers-view-and-the-orders-table">Join the customers view and the orders table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select membership, sum(order_total) as sales from hive.orders, custview
 where orders.cust_id=custview.cust_id
 group by membership order by 2;
@@ -1610,7 +1610,7 @@ rows are wide, set the maximum width of the display to 10000:</p>
 
 <p>Do not use a semicolon for this SET command.</p>
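 <p>A sketch of the sqlline setting this refers to (note the leading <code>!</code> and the absence of a semicolon):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">!set maxwidth 10000
 </code></pre></div>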
 
-<h3 id="join-the-customers,-orders,-and-clickstream-data:">Join the customers, orders, and clickstream data:</h3>
+<h3 id="join-the-customers-orders-and-clickstream-data">Join the customers, orders, and clickstream data:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select custview.membership, sum(orders.order_total) as sales from hive.orders, custview,
 dfs.`/mapr/demo.mapr.com/data/nested/clicks/clicks.json` c 
 where orders.cust_id=custview.cust_id and orders.cust_id=c.user_info.cust_id 
@@ -1640,7 +1640,7 @@ hive.orders table is also visible to the query.</p>
 workspace, so the query specifies the full path to the file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">dfs.`/mapr/demo.mapr.com/data/nested/clicks/clicks.json`
 </code></pre></div>
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="whats-next">What&#39;s Next</h2>
 
 <p>Go to <a href="/docs/lesson-3-run-queries-on-complex-data-types">Lesson 3: Run Queries on Complex Data Types</a>. </p>
 
diff --git a/docs/lesson-3-run-queries-on-complex-data-types/index.html b/docs/lesson-3-run-queries-on-complex-data-types/index.html
index 2336327..f0d4316 100644
--- a/docs/lesson-3-run-queries-on-complex-data-types/index.html
+++ b/docs/lesson-3-run-queries-on-complex-data-types/index.html
@@ -1315,7 +1315,7 @@ exist. Here is a visual example of how this works:</p>
 
 <p><img src="/docs/img/example_query.png" alt="drill query flow"></p>
 
-<h3 id="set-workspace-to-dfs.logs:">Set workspace to dfs.logs:</h3>
+<h3 id="set-workspace-to-dfs-logs">Set workspace to dfs.logs:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.logs;
 +-------+---------------------------------------+
 |  ok   |                summary                |
@@ -1324,7 +1324,7 @@ exist. Here is a visual example of how this works:</p>
 +-------+---------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="query-logs-data-for-a-specific-year:">Query logs data for a specific year:</h3>
+<h3 id="query-logs-data-for-a-specific-year">Query logs data for a specific year:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from logs where dir0=&#39;2013&#39; limit 10;
 +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
 | dir0  | dir1  | trans_id  |    date     |   time    | cust_id  | device  | state  | camp_id  | keywords  | prod_id  | purch_flag  |
@@ -1346,7 +1346,7 @@ exist. Here is a visual example of how this works:</p>
 dir0 refers to the first level down from logs, dir1 to the next level, and so
 on. So this query returned 10 of the rows for February 2013.</p>
 
-<h3 id="further-constrain-the-results-using-multiple-predicates-in-the-query:">Further constrain the results using multiple predicates in the query:</h3>
+<h3 id="further-constrain-the-results-using-multiple-predicates-in-the-query">Further constrain the results using multiple predicates in the query:</h3>
 
 <p>This query returns a list of customer IDs for people who made a purchase via
 an IOS5 device in August 2013.</p>
@@ -1363,7 +1363,7 @@ order by `date`;
 
 ...
 </code></pre></div>
-<h3 id="return-monthly-counts-per-customer-for-a-given-year:">Return monthly counts per customer for a given year:</h3>
+<h3 id="return-monthly-counts-per-customer-for-a-given-year">Return monthly counts per customer for a given year:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select cust_id, dir1 month_no, count(*) month_count from logs
 where dir0=2014 group by cust_id, dir1 order by cust_id, month_no limit 10;
 +----------+-----------+--------------+
@@ -1391,7 +1391,7 @@ year: 2014.</p>
 analyze nested data natively without transformation. If you are familiar with
 JavaScript notation, you will already know how some of these extensions work.</p>
 
-<h3 id="set-the-workspace-to-dfs.clicks:">Set the workspace to dfs.clicks:</h3>
+<h3 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1400,7 +1400,7 @@ JavaScript notation, you will already know how some of these extensions work.</p
 +-------+-----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="explore-clickstream-data:">Explore clickstream data:</h3>
+<h3 id="explore-clickstream-data">Explore clickstream data:</h3>
 
 <p>Note that the user_info and trans_info columns contain nested data: arrays and
 arrays within arrays. The following queries show how to access this complex
@@ -1417,7 +1417,7 @@ data.</p>
 +-----------+-------------+-----------+---------------------------------------------------+---------------------------------------------------------------------------+
 5 rows selected
 </code></pre></div>
-<h3 id="unpack-the-user_info-column:">Unpack the user_info column:</h3>
+<h3 id="unpack-the-user_info-column">Unpack the user_info column:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.user_info.cust_id as custid, t.user_info.device as device,
 t.user_info.state as state
 from `clicks/clicks.json` t limit 5;
@@ -1442,7 +1442,7 @@ column name, and <code>cust_id</code> is a nested column name.</p>
 <p>The table alias is required; otherwise column names such as <code>user_info</code> are
 parsed as table names by the SQL parser.</p>
 
-<h3 id="unpack-the-trans_info-column:">Unpack the trans_info column:</h3>
+<h3 id="unpack-the-trans_info-column">Unpack the trans_info column:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_info.prod_id as prodid, t.trans_info.purch_flag as
 purchased
 from `clicks/clicks.json` t limit 5;
@@ -1475,7 +1475,7 @@ notation to write interesting queries against nested array data.</p>
 </code></pre></div>
 <p>refers to the 21st value, assuming one exists.</p>
 
-<h3 id="find-the-first-product-that-is-searched-for-in-each-transaction:">Find the first product that is searched for in each transaction:</h3>
+<h3 id="find-the-first-product-that-is-searched-for-in-each-transaction">Find the first product that is searched for in each transaction:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.trans_info.prod_id[0] from `clicks/clicks.json` t limit 5;
 +------------+------------+
 |  trans_id  |   EXPR$1   |
@@ -1488,7 +1488,7 @@ notation to write interesting queries against nested array data.</p>
 +------------+------------+
 5 rows selected
 </code></pre></div>
-<h3 id="for-which-transactions-did-customers-search-on-at-least-21-products?">For which transactions did customers search on at least 21 products?</h3>
+<h3 id="for-which-transactions-did-customers-search-on-at-least-21-products">For which transactions did customers search on at least 21 products?</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.trans_info.prod_id[20]
 from `clicks/clicks.json` t
 where t.trans_info.prod_id[20] is not null
@@ -1507,7 +1507,7 @@ order by trans_id limit 5;
 <p>This query returns transaction IDs and product IDs for records that contain a
 non-null product ID at the 21st position in the array.</p>
 
-<h3 id="return-clicks-for-a-specific-product-range:">Return clicks for a specific product range:</h3>
+<h3 id="return-clicks-for-a-specific-product-range">Return clicks for a specific product range:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from (select t.trans_id, t.trans_info.prod_id[0] as prodid,
 t.trans_info.purch_flag as purchased
 from `clicks/clicks.json` t) sq
@@ -1530,7 +1530,7 @@ ordered list of products purchased rather than a random list).</p>
 
 <h2 id="perform-operations-on-arrays">Perform Operations on Arrays</h2>
 
-<h3 id="rank-successful-click-conversions-and-count-product-searches-for-each-session:">Rank successful click conversions and count product searches for each session:</h3>
+<h3 id="rank-successful-click-conversions-and-count-product-searches-for-each-session">Rank successful click conversions and count product searches for each session:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.`date` as session_date, t.user_info.cust_id as
 cust_id, t.user_info.device as device, repeated_count(t.trans_info.prod_id) as
 prod_count, t.trans_info.purch_flag as purch_flag
@@ -1556,7 +1556,7 @@ in descending order. Only clicks that have resulted in a purchase are counted.</
 <p>To facilitate additional analysis on this result set, you can easily and
 quickly create a Drill table from the results of the query.</p>
 
-<h3 id="continue-to-use-the-dfs.clicks-workspace">Continue to use the dfs.clicks workspace</h3>
+<h3 id="continue-to-use-the-dfs-clicks-workspace">Continue to use the dfs.clicks workspace</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1565,7 +1565,7 @@ quickly create a Drill table from the results of the query.</p>
 +-------+-----------------------------------------+
 1 row selected (1.61 seconds)
 </code></pre></div>
-<h3 id="return-product-searches-for-high-value-customers:">Return product searches for high-value customers:</h3>
+<h3 id="return-product-searches-for-high-value-customers">Return product searches for high-value customers:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id
 from 
 hive.orders as o
@@ -1589,7 +1589,7 @@ where o.order_total &gt; (select avg(inord.order_total)
 <p>This query returns a list of products that are being searched for by customers
 who have made transactions that are above the average in their states.</p>
 
-<h3 id="materialize-the-result-of-the-previous-query:">Materialize the result of the previous query:</h3>
+<h3 id="materialize-the-result-of-the-previous-query">Materialize the result of the previous query:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; create table product_search as select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id
 from
 hive.orders as o
@@ -1611,7 +1611,7 @@ query returns (107,482) and stores them in the format specified by the storage
 plugin (Parquet format in this example). You can create tables that store data
 in csv, parquet, and json formats.</p>
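
A minimal sketch of switching the CTAS output format, assuming only the store.format session option (product_search is the table created above; the new table name is hypothetical):

    alter session set `store.format` = 'json';   -- write json instead of parquet
    create table product_search_json as          -- hypothetical table name
    select * from product_search;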
 
-<h3 id="query-the-new-table-to-verify-the-row-count:">Query the new table to verify the row count:</h3>
+<h3 id="query-the-new-table-to-verify-the-row-count">Query the new table to verify the row count:</h3>
 
 <p>This example simply checks that the CTAS statement worked by verifying the
 number of rows in the table.</p>
@@ -1623,7 +1623,7 @@ number of rows in the table.</p>
 +---------+
 1 row selected (0.155 seconds)
 </code></pre></div>
-<h3 id="find-the-storage-file-for-the-table:">Find the storage file for the table:</h3>
+<h3 id="find-the-storage-file-for-the-table">Find the storage file for the table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">[root@maprdemo product_search]# cd /mapr/demo.mapr.com/data/nested/product_search
 [root@maprdemo product_search]# ls -la
 total 451
@@ -1637,7 +1637,7 @@ stored in the location defined by the dfs.clicks workspace:</p>
 </code></pre></div>
 <p>There is a subdirectory that has the same name as the table you created.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="whats-next">What&#39;s Next</h2>
 
 <p>Complete the tutorial with the <a href="/docs/summary">Summary</a>.</p>
 
diff --git a/docs/logfile-plugin/index.html b/docs/logfile-plugin/index.html
index 06b0f14..03cad75 100644
--- a/docs/logfile-plugin/index.html
+++ b/docs/logfile-plugin/index.html
@@ -1287,7 +1287,7 @@
 </code></pre></div>
 <p>To configure the Logfile plugin, you must first add the <code>drill-logfile-plugin-1.0.0</code> JAR file to Drill and then add the Logfile configuration to a <code>dfs</code> storage plugin, as described in the following sections.  </p>
 
-<h2 id="adding-drill-logfile-plugin-1.0.0.jar-to-drill">Adding drill-logfile-plugin-1.0.0.jar to Drill</h2>
+<h2 id="adding-drill-logfile-plugin-1-0-0-jar-to-drill">Adding drill-logfile-plugin-1.0.0.jar to Drill</h2>
 
 <p>You can either <a href="https://github.com/cgivre/drill-logfile-plugin/releases/download/v1.0/drill-logfile-plugin-1.0.0.jar">download</a> the <code>drill-logfile-plugin-1.0.0</code> JAR file or build it with Maven by running the following commands:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   git clone https://github.com/cgivre/drill-logfile-plugin.git 
diff --git a/docs/logging-and-tracing/index.html b/docs/logging-and-tracing/index.html
index e37358d..f6d467c 100644
--- a/docs/logging-and-tracing/index.html
+++ b/docs/logging-and-tracing/index.html
@@ -1360,7 +1360,7 @@ For example:  </p>
 <li><p>In the <strong>Log Path</strong> (or Log Directory) field, specify the full path to the folder where you want to save log files. </p></li>
 <li><p>If necessary (for example, if requested by a Support team), type the name of the component for which to log messages in the <strong>Log Namespace</strong> field. Otherwise, do not type a value in the field.</p></li>
 <li><p>Click <strong>OK</strong> to close the Logging Options dialog box.</p></li>
-<li><p>Click <strong>OK</strong> to save your settings and close the <strong>DSN Configuration</strong> dialog box. Configuration changes will not be saved of picked up by the driver until you have clicked <strong>OK</strong> in the <strong>DSN Configuration *<em>dialog box. Click *</em>Cancel</strong> (or the X button) to discard changes.</p></li>
+<li><p>Click <strong>OK</strong> to save your settings and close the <strong>DSN Configuration</strong> dialog box. Configuration changes will not be saved or picked up by the driver until you have clicked <strong>OK</strong> in the <strong>DSN Configuration</strong> dialog box. Click <strong>Cancel</strong> (or the X button) to discard changes.</p></li>
 <li><p>Restart the application to make sure that the new settings take effect. Configuration changes will not be picked up until the application reloads the driver.</p></li>
 </ol>
 
diff --git a/docs/mongodb-storage-plugin/index.html b/docs/mongodb-storage-plugin/index.html
index 1889ca0..553e963 100644
--- a/docs/mongodb-storage-plugin/index.html
+++ b/docs/mongodb-storage-plugin/index.html
@@ -1457,7 +1457,7 @@ Drill data sources, including MongoDB. </p>
 | -72.576142 |
 +------------+
 </code></pre></div>
-<h2 id="using-odbc/jdbc-drivers">Using ODBC/JDBC Drivers</h2>
+<h2 id="using-odbc-jdbc-drivers">Using ODBC/JDBC Drivers</h2>
 
 <p>You can query MongoDB through standard
 BI tools, such as Tableau and SQuirreL. For information about Drill ODBC and JDBC drivers, refer to <a href="/docs/odbc-jdbc-interfaces">Drill Interfaces</a>.</p>
diff --git a/docs/parquet-filter-pushdown/index.html b/docs/parquet-filter-pushdown/index.html
index da5079f..369684f 100644
--- a/docs/parquet-filter-pushdown/index.html
+++ b/docs/parquet-filter-pushdown/index.html
@@ -1287,7 +1287,7 @@
 </code></pre></div>
 <p>When a CTE, view, or subquery contains a star filter condition, the query planner in Drill can apply the filter and prune extraneous data, further reducing the amount of data that the scanner reads and improving performance. </p>
 
-<p><strong>Note:</strong> Currently, Drill only supports pushdown for simple star subselect queries without filters. See <a href="https://www.google.com/url?q=https://issues.apache.org/jira/browse/DRILL-6219&amp;sa=D&amp;ust=1522084453671000&amp;usg=AFQjCNFXp-nWMRXzM466BSRFlV3F63_ZYA">DRILL-6219</a> for more information.  </p>
+<p><strong>Note:</strong> Currently, Drill only supports pushdown for simple star subselect queries without filters. See <a href="https://www.google.com/url?q=https://issues.apache.org/jira/browse/DRILL-6219&sa=D&ust=1522084453671000&usg=AFQjCNFXp-nWMRXzM466BSRFlV3F63_ZYA">DRILL-6219</a> for more information.  </p>
 
 <h2 id="how-parquet-filter-pushdown-works">How Parquet Filter Pushdown Works</h2>
 
diff --git a/docs/parquet-format/index.html b/docs/parquet-format/index.html
index d8a669f..a520b26 100644
--- a/docs/parquet-format/index.html
+++ b/docs/parquet-format/index.html
@@ -1364,7 +1364,7 @@
 <li>In the CTAS command, cast JSON string data to corresponding <a href="/docs/json-data-model/#data-type-mapping">SQL types</a>.</li>
 </ul>
 
-<h3 id="example:-read-json,-write-parquet">Example: Read JSON, Write Parquet</h3>
+<h3 id="example-read-json-write-parquet">Example: Read JSON, Write Parquet</h3>
 
 <p>This example demonstrates a storage plugin definition, a sample row of data from a JSON file, and a Drill query that writes the JSON input to Parquet output. </p>
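
A rough sketch of that pattern, with a hypothetical JSON file and column names rather than the ones used on this page:

    alter session set `store.format` = 'parquet';
    create table dfs.tmp.sample_parquet as         -- hypothetical output table
    select cast(id as int)            as id,       -- cast JSON strings to SQL types
           cast(name as varchar(100)) as name
    from dfs.`/tmp/sample.json`;                   -- hypothetical input file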
 
diff --git a/docs/partition-pruning-introduction/index.html b/docs/partition-pruning-introduction/index.html
index e39fe47..b48aa27 100644
--- a/docs/partition-pruning-introduction/index.html
+++ b/docs/partition-pruning-introduction/index.html
@@ -1291,7 +1291,7 @@
 </code></pre></div>
 <p>When a CTE, view, or subquery contains a star filter condition, the query planner in Drill can apply the filter and prune extraneous data, further reducing the amount of data that the scanner reads and improving performance. </p>
 
-<p><strong>Note:</strong> Currently, Drill only supports pushdown for simple star subselect queries without filters. See <a href="https://www.google.com/url?q=https://issues.apache.org/jira/browse/DRILL-6219&amp;sa=D&amp;ust=1522084453671000&amp;usg=AFQjCNFXp-nWMRXzM466BSRFlV3F63_ZYA">DRILL-6219</a> for more information.</p>
+<p><strong>Note:</strong> Currently, Drill only supports pushdown for simple star subselect queries without filters. See <a href="https://www.google.com/url?q=https://issues.apache.org/jira/browse/DRILL-6219&sa=D&ust=1522084453671000&usg=AFQjCNFXp-nWMRXzM466BSRFlV3F63_ZYA">DRILL-6219</a> for more information.</p>
 
 <h2 id="using-partitioned-drill-data">Using Partitioned Drill Data</h2>
 
diff --git a/docs/phonetic-functions/index.html b/docs/phonetic-functions/index.html
index 6a13c27..3f9579e 100644
--- a/docs/phonetic-functions/index.html
+++ b/docs/phonetic-functions/index.html
@@ -1306,19 +1306,19 @@
 
 <p>The following sections describe each of the phonetic functions that Drill supports. Each function has a different algorithm that may work better for certain words.  </p>
 
-<h3 id="caverphone1(string)">caverphone1(string)</h3>
+<h3 id="caverphone1-string">caverphone1(string)</h3>
 
 <p>An algorithm created by the Caversham Project at the University of Otago. It implements the Caverphone 1.0 algorithm.  </p>
 
-<h3 id="caverphone2(string)">caverphone2(string)</h3>
+<h3 id="caverphone2-string">caverphone2(string)</h3>
 
 <p>An algorithm created by the Caversham Project at the University of Otago. It implements the Caverphone 2.0 algorithm.</p>
 
-<h3 id="cologne_phonetic(string)">cologne_phonetic(string)</h3>
+<h3 id="cologne_phonetic-string">cologne_phonetic(string)</h3>
 
 <p>Encodes a string into a Cologne Phonetic value. Implements the Kölner Phonetik (Cologne Phonetic) algorithm issued by Hans Joachim Postel in 1969. The Kölner Phonetik is a phonetic algorithm which is optimized for the German language. It is related to the well-known soundex algorithm.</p>
 
-<h3 id="dm_soundex(string)">dm_soundex(string)</h3>
+<h3 id="dm_soundex-string">dm_soundex(string)</h3>
 
 <p>Encodes a string into a Daitch-Mokotoff Soundex value. The Daitch-Mokotoff Soundex algorithm is a refinement of the Russell and American Soundex algorithms, yielding greater accuracy in matching especially Slavic and Yiddish surnames that have similar pronunciation but differ in spelling. The main differences compared to the other soundex variants are:  </p>
 
@@ -1329,27 +1329,27 @@
 <li>multiple possible encodings for the same name (branching)</li>
 </ul>
 
-<h3 id="double_metaphone(string)">double_metaphone(string)</h3>
+<h3 id="double_metaphone-string">double_metaphone(string)</h3>
 
 <p>Implements the Double <a href="https://en.wikipedia.org/wiki/Metaphone">Metaphone</a> phonetic algorithm and calculates a given string&#39;s Double Metaphone value.  </p>
 
-<h3 id="match_rating_encoder(string)">match_rating_encoder(string)</h3>
+<h3 id="match_rating_encoder-string">match_rating_encoder(string)</h3>
 
 <p>The Match Rating Approach phonetic algorithm, developed by Western Airlines in 1977.</p>
 
-<h3 id="metaphone(string)">metaphone(string)</h3>
+<h3 id="metaphone-string">metaphone(string)</h3>
 
 <p>Implements the <a href="https://en.wikipedia.org/wiki/Metaphone">Metaphone</a> phonetic algorithm and calculates a given string&#39;s Metaphone value.  </p>
 
-<h3 id="nysiis(string)">nysiis(string)</h3>
+<h3 id="nysiis-string">nysiis(string)</h3>
 
 <p>Encodes a string into a NYSIIS value. NYSIIS is an encoding used to relate similar names, but can also be used as a general purpose scheme to find words with similar phonemes. The New York State Identification and Intelligence System Phonetic Code, commonly known as NYSIIS, is a phonetic algorithm devised in 1970 as part of the New York State Identification and Intelligence System (now a part of the New York State Division of Criminal Justice Services). It features an accuracy increase [...]
 
-<h3 id="refined_soundex(string)">refined_soundex(string)</h3>
+<h3 id="refined_soundex-string">refined_soundex(string)</h3>
 
 <p>Encodes a string into a Refined Soundex value. Soundex is an encoding used to relate similar names, but can also be used as a general purpose scheme to find a word with similar phonemes. </p>
 
-<h3 id="soundex(string)">soundex(string)</h3>
+<h3 id="soundex-string">soundex(string)</h3>
 
 <p>Encodes a string into a Soundex value. Soundex is an encoding used to relate similar names, but can also be used as a general purpose scheme to find words with similar phonemes.</p>
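
A quick way to try these encoders side by side; the literals are arbitrary and VALUES serves as a one-row source:

    select soundex('Jaime') = soundex('Jayme') as sounds_alike,
           metaphone('Thompson')               as metaphone_code
    from (values(1));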
 
diff --git a/docs/query-directory-functions/index.html b/docs/query-directory-functions/index.html
index ac7d018..db06607 100644
--- a/docs/query-directory-functions/index.html
+++ b/docs/query-directory-functions/index.html
@@ -1289,7 +1289,7 @@
 
 <p>The query directory functions restrict a query to one of a number of subdirectories. For example, suppose you had time-series data in subdirectories named 2015, 2014, and 2013. You could use the MAXDIR function to get the latest data and MINDIR to get the earliest data.</p>
 
-<p>In the case where the directory names contain alphabetic characters, the MAXDIR and MINDIR functions return the highest or lowest values, respectively in a case-sensitive string ordering. The IMAXDIR and IMINDIR functions return the corresponding values with <a href="https://support.office.com/en-za/article/Sort-records-in-case-sensitive-order-8fea1de4-6189-40e7-9359-00cd7d7845c0?ui=en-US&amp;rs=en-ZA&amp;ad=ZA">case-insensitive ordering</a>.</p>
+<p>In the case where the directory names contain alphabetic characters, the MAXDIR and MINDIR functions return the highest or lowest values, respectively, in a case-sensitive string ordering. The IMAXDIR and IMINDIR functions return the corresponding values with <a href="https://support.office.com/en-za/article/Sort-records-in-case-sensitive-order-8fea1de4-6189-40e7-9359-00cd7d7845c0?ui=en-US&rs=en-ZA&ad=ZA">case-insensitive ordering</a>.</p>
 
 <p>The query directory functions are recommended instead of the MAX or MIN aggregate functions to prevent Drill from scanning all data in directories.</p>
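
A sketch of the usage, with a hypothetical workspace and path:

    select * from dfs.`/tmp/querylogs`
    where dir0 = maxdir('dfs', '/tmp/querylogs');  -- scan only the latest subdirectory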
 
diff --git a/docs/query-profiles/index.html b/docs/query-profiles/index.html
index e3095dd..e6ee90b 100644
--- a/docs/query-profiles/index.html
+++ b/docs/query-profiles/index.html
@@ -1286,7 +1286,7 @@
 
 <p>The Drill Web Console provides aggregate statistics across profile lists. Profile lists consist of data from major and minor fragments, operators, and input streams. You can use profiles in conjunction with Drill logs for debugging purposes. In addition to viewing query profiles, you can modify, resubmit, or cancel queries from the Drill Web Console.  </p>
 
-<h3 id="query,-fragment,-and-operator-identifiers">Query, Fragment, and Operator Identifiers</h3>
+<h3 id="query-fragment-and-operator-identifiers">Query, Fragment, and Operator Identifiers</h3>
 
 <p>Metrics in a query profile are associated with a coordinate system of identifiers. Drill uses a coordinate system comprised of query, fragment, and operator identifiers to track query execution activities and resources. Drill assigns a unique identifier, the QueryID, to each query received and then assigns an identifier to each fragment and operator that executes the query. An example of a QueryID is 2aa98add-15b3-e155-5669-603c03bfde86. The following image shows an example of fragme [...]
 
diff --git a/docs/querying-hbase/index.html b/docs/querying-hbase/index.html
index b1ef886..71b7c8e 100644
--- a/docs/querying-hbase/index.html
+++ b/docs/querying-hbase/index.html
@@ -1291,7 +1291,7 @@ How to use optimization features in Drill 1.2 and later<br></li>
 How to use Drill 1.2 to leverage new features introduced by <a href="https://issues.apache.org/jira/browse/HBASE-8201">HBASE-8201 Jira</a></li>
 </ul>
 
-<h2 id="tutorial--querying-hbase-data">Tutorial--Querying HBase Data</h2>
+<h2 id="tutorial-querying-hbase-data">Tutorial--Querying HBase Data</h2>
 
 <p>This tutorial shows how to connect Drill to an HBase data source, create simple HBase tables, and query the data using Drill.</p>
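
As a taste of where the tutorial ends up, a hedged example with a hypothetical table and column family; CONVERT_FROM decodes the byte arrays that HBase stores:

    select convert_from(t.row_key, 'UTF8')      as student_id,
           convert_from(t.account.name, 'UTF8') as name
    from hbase.students t;                       -- hypothetical HBase table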
 
diff --git a/docs/querying-hive/index.html b/docs/querying-hive/index.html
index d13bce1..dbb0ff5 100644
--- a/docs/querying-hive/index.html
+++ b/docs/querying-hive/index.html
@@ -1280,7 +1280,7 @@
       
         <p>This is a simple exercise that provides steps for creating a Hive table and
 inserting data that you can query using Drill. Before you perform the steps,
-download the <a href="http://doc.mapr.com/download/attachments/28868943/customers.csv?version=1&amp;modificationDate=1426874930765&amp;api=v2">customers.csv</a> file.  </p>
+download the <a href="http://doc.mapr.com/download/attachments/28868943/customers.csv?version=1&modificationDate=1426874930765&api=v2">customers.csv</a> file.  </p>
 
 <div class="admonition note">
   <p class="first admonition-title">Note</p>
diff --git a/docs/querying-json-files/index.html b/docs/querying-json-files/index.html
index e0bafd4..a612b0f 100644
--- a/docs/querying-json-files/index.html
+++ b/docs/querying-json-files/index.html
@@ -1282,7 +1282,7 @@
       
         <p>To query complex JSON files, you need to understand the <a href="/docs/json-data-model/">&quot;JSON Data Model&quot;</a>. This section provides a trivial example of querying a sample file that Drill installs. </p>
 
-<h2 id="about-the-employee.json-file">About the employee.json File</h2>
+<h2 id="about-the-employee-json-file">About the employee.json File</h2>
 
 <p>The sample file, <code>employee.json</code>, is packaged in the Foodmart data JAR in Drill&#39;s
 classpath:  </p>
diff --git a/docs/querying-plain-text-files/index.html b/docs/querying-plain-text-files/index.html
index 30e8ef6..7a14428 100644
--- a/docs/querying-plain-text-files/index.html
+++ b/docs/querying-plain-text-files/index.html
@@ -1311,7 +1311,7 @@ found&quot; error if references to files in queries do not match these condition
       &quot;delimiter&quot;: &quot;|&quot;
     }
 </code></pre></div>
-<h2 id="select-*-from-a-csv-file">SELECT * FROM a CSV File</h2>
+<h2 id="select-from-a-csv-file">SELECT * FROM a CSV File</h2>
 
 <p>The first query selects rows from a <code>.csv</code> text file. The file contains seven
 records:</p>
@@ -1342,7 +1342,7 @@ each row.</p>
 +-----------------------------------+
 7 rows selected (0.089 seconds)
 </code></pre></div>
-<h2 id="columns[n]-syntax">Columns[n] Syntax</h2>
+<h2 id="columns-n-syntax">Columns[n] Syntax</h2>
 
 <p>You can use the <code>COLUMNS[n]</code> syntax in the SELECT list to return these CSV
 rows in a more readable, column by column, format. (This syntax uses a zero-
diff --git a/docs/querying-system-tables/index.html b/docs/querying-system-tables/index.html
index 148889c..58e2a01 100644
--- a/docs/querying-system-tables/index.html
+++ b/docs/querying-system-tables/index.html
@@ -1340,7 +1340,7 @@ requests.</p>
 
 <p>Query the drillbits, version, options, boot, threads, and memory tables in the sys database.</p>
 
-<h3 id="query-the-drillbits-table.">Query the drillbits table.</h3>
+<h3 id="query-the-drillbits-table">Query the drillbits table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from drillbits;
 +-------------------+------------+--------------+------------+---------+
 |   hostname        |  user_port | control_port | data_port  |  current|
@@ -1368,7 +1368,7 @@ True means the Drillbit is connected to the session or client running the
 query. This Drillbit is the Foreman for the current session.<br></li>
 </ul>
 
-<h3 id="query-the-version-table.">Query the version table.</h3>
+<h3 id="query-the-version-table">Query the version table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from version;
 +-------------------------------------------+--------------------------------------------------------------------+----------------------------+--------------+----------------------------+
 |                 commit_id                 |                           commit_message                           |        commit_time         | build_email  |         build_time         |
@@ -1392,7 +1392,7 @@ example.</li>
 The time that the release was built.</li>
 </ul>
 
-<h3 id="query-the-options-table.">Query the options table.</h3>
+<h3 id="query-the-options-table">Query the options table.</h3>
 
 <p>Drill provides system, session, and boot options that you can query.</p>
 
@@ -1434,7 +1434,7 @@ The default value, which is of the double, float, or long double data type;
 otherwise, null.</li>
 </ul>
 
-<h3 id="query-the-boot-table.">Query the boot table.</h3>
+<h3 id="query-the-boot-table">Query the boot table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from boot limit 10;
 +--------------------------------------+----------+-------+---------+------------+-------------------------+-----------+------------+
 |                 name                 |   kind   | type  | status  |  num_val   |       string_val        | bool_val  | float_val  |
@@ -1472,7 +1472,7 @@ The default value, which is of the double, float, or long double data type;
 otherwise, null.</li>
 </ul>
 
-<h3 id="query-the-threads-table.">Query the threads table.</h3>
+<h3 id="query-the-threads-table">Query the threads table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from threads;
 +--------------------+------------+----------------+---------------+
 |       hostname     | user_port  | total_threads  | busy_threads  |
@@ -1495,7 +1495,7 @@ The peak thread count on the node.</li>
 The current number of live threads (daemon and non-daemon) on the node.</li>
 </ul>
 
-<h3 id="query-the-memory-table.">Query the memory table.</h3>
+<h3 id="query-the-memory-table">Query the memory table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from memory;
 +--------------------+------------+---------------+-------------+-----------------+---------------------+-------------+
 |       hostname     | user_port  | heap_current  |  heap_max   | direct_current  | jvm_direct_current  | direct_max  |
diff --git a/docs/ranking-window-functions/index.html b/docs/ranking-window-functions/index.html
index f00173d..040be89 100644
--- a/docs/ranking-window-functions/index.html
+++ b/docs/ranking-window-functions/index.html
@@ -1342,7 +1342,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
 
 <p>The following examples show queries that use each of the ranking window functions in Drill. See <a href="/docs/sql-window-functions-examples/">Window Functions Examples</a> for information about the data and setup for these examples.</p>
 
-<h3 id="cume_dist()">CUME_DIST()</h3>
+<h3 id="cume_dist">CUME_DIST()</h3>
 
 <p>The following query uses the CUME_DIST() window function to calculate the cumulative distribution of sales for each dealer in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, cume_dist() over(order by sales) as cumedist from q1_sales;
@@ -1382,7 +1382,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+------------+
    10 rows selected (0.198 seconds)  
 </code></pre></div>
-<h3 id="ntile()">NTILE()</h3>
+<h3 id="ntile">NTILE()</h3>
 
 <p>The following example uses the NTILE window function to divide the Q1 sales into five groups and list the sales in ascending order.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_mgr, sales, ntile(5) over(order by sales) as ntilerank from q1_sales;
@@ -1420,7 +1420,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +-----------------+------------+--------+------------+
    10 rows selected (0.312 seconds)
 </code></pre></div>
-<h3 id="percent_rank()">PERCENT_RANK()</h3>
+<h3 id="percent_rank">PERCENT_RANK()</h3>
 
 <p>The following query uses the PERCENT_RANK() window function to calculate the percent rank for employee sales in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, percent_rank() over(order by sales) as perrank from q1_sales; 
@@ -1440,7 +1440,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+---------------------+
    10 rows selected (0.169 seconds)
 </code></pre></div>
-<h3 id="rank()">RANK()</h3>
+<h3 id="rank">RANK()</h3>
 
 <p>The following query uses the RANK() window function to rank the employee sales for Q1. The word rank in Drill is a reserved keyword and must be enclosed in back ticks (``).</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, rank() over(order by sales) as `rank` from q1_sales;
@@ -1460,7 +1460,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+-------+
    10 rows selected (0.174 seconds)
 </code></pre></div>
-<h3 id="row_number()">ROW_NUMBER()</h3>
+<h3 id="row_number">ROW_NUMBER()</h3>
 
 <p>The following query uses the ROW_NUMBER() window function to number the sales for each dealer_id. The word rownum contains the reserved keyword row and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">    select dealer_id, emp_name, sales, row_number() over(partition by dealer_id order by sales) as `rownum` from q1_sales;
diff --git a/docs/rest-api-introduction/index.html b/docs/rest-api-introduction/index.html
index bbd9c15..aaca492 100644
--- a/docs/rest-api-introduction/index.html
+++ b/docs/rest-api-introduction/index.html
@@ -1334,7 +1334,7 @@
 
 <hr>
 
-<h3 id="post-/query.json">POST /query.json</h3>
+<h3 id="post-query-json">POST /query.json</h3>
 
 <p>Submit a query and return results.</p>
 
@@ -1375,7 +1375,7 @@
 
 <hr>
 
-<h3 id="get-/profiles.json">GET /profiles.json</h3>
+<h3 id="get-profiles-json">GET /profiles.json</h3>
 
 <p>Get the profiles of running and completed queries. </p>
 
@@ -1399,7 +1399,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/profiles/{queryid}.json">GET /profiles/{queryid}.json</h3>
+<h3 id="get-profiles-queryid-json">GET /profiles/{queryid}.json</h3>
 
 <p>Get the profile of the query that has the given queryid.</p>
 
@@ -1417,7 +1417,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/profiles/cancel/{queryid}">GET /profiles/cancel/{queryid}</h3>
+<h3 id="get-profiles-cancel-queryid">GET /profiles/cancel/{queryid}</h3>
 
 <p>Cancel the query that has the given queryid.</p>
 
@@ -1440,7 +1440,7 @@
 
 <hr>
 
-<h3 id="get-/storage.json">GET /storage.json</h3>
+<h3 id="get-storage-json">GET /storage.json</h3>
 
 <p>Get the list of storage plugin names and configurations.</p>
 
@@ -1474,7 +1474,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/storage/{name}.json">GET /storage/{name}.json</h3>
+<h3 id="get-storage-name-json">GET /storage/{name}.json</h3>
 
 <p>Get the definition of the named storage plugin.</p>
 
@@ -1498,7 +1498,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/storage/{name}/enable/{val}">Get /storage/{name}/enable/{val}</h3>
+<h3 id="get-storage-name-enable-val">Get /storage/{name}/enable/{val}</h3>
 
 <p>Enable or disable the named storage plugin.</p>
 
@@ -1521,7 +1521,7 @@
 
 <hr>
 
-<h3 id="post-/storage/{name}.json">POST /storage/{name}.json</h3>
+<h3 id="post-storage-name-json">POST /storage/{name}.json</h3>
 
 <p>Create or update a storage plugin configuration.</p>
 
@@ -1554,7 +1554,7 @@
 
 <hr>
 
-<h3 id="delete-/storage/{name}.json">DELETE /storage/{name}.json</h3>
+<h3 id="delete-storage-name-json">DELETE /storage/{name}.json</h3>
 
 <p>Delete a storage plugin configuration.</p>
 
@@ -1577,7 +1577,7 @@
 
 <hr>
 
-<h3 id="get-/cluster.json">GET /cluster.json</h3>
+<h3 id="get-cluster-json">GET /cluster.json</h3>
 
 <p>Get Drillbit information, such as port numbers.</p>
 
@@ -1608,7 +1608,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/status">GET /status</h3>
+<h3 id="get-status">GET /status</h3>
 
 <p>Get the status of Drill. </p>
 
@@ -1628,7 +1628,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/status/metrics">GET /status/metrics</h3>
+<h3 id="get-status-metrics">GET /status/metrics</h3>
 
 <p>Get the current memory metrics.</p>
 
@@ -1647,7 +1647,7 @@
 
 <hr>
 
-<h3 id="get-/status/threads">GET /status/threads</h3>
+<h3 id="get-status-threads">GET /status/threads</h3>
 
 <p>Get the status of threads.</p>
 
@@ -1678,7 +1678,7 @@
 
 <hr>
 
-<h3 id="get-/options.json">GET /options.json</h3>
+<h3 id="get-options-json">GET /options.json</h3>
 
 <p>List the name, default, and data type of the system and session options.</p>
 
diff --git a/docs/rpc-overview/index.html b/docs/rpc-overview/index.html
index ff4fc73..f7fe897 100644
--- a/docs/rpc-overview/index.html
+++ b/docs/rpc-overview/index.html
@@ -1303,7 +1303,7 @@ Body (bytes), RawBody (bytes).</p>
 
 <p>For encryption support, both UserClient and UserServer require modification since new handlers will be added for encryption and decryption if privacy is negotiated as part of the handshake. </p>
 
-<h3 id="encryption,-decryption,-and-chunkcreation-handlers">Encryption, Decryption, and ChunkCreation Handlers</h3>
+<h3 id="encryption-decryption-and-chunkcreation-handlers">Encryption, Decryption, and ChunkCreation Handlers</h3>
 
 <p>In addition to an Encryption/Decryption handler, a ChunkCreation handler on the sender side and LengthFieldBasedFrameDecoder on the receiver side should be added. </p>
 
diff --git a/docs/s3-storage-plugin/index.html b/docs/s3-storage-plugin/index.html
index 7af67fd..afa9e47 100644
--- a/docs/s3-storage-plugin/index.html
+++ b/docs/s3-storage-plugin/index.html
@@ -1307,7 +1307,7 @@
 
 <p>Refer to <a href="/docs/s3-storage-plugin/#configuring-the-s3-storage-plugin">Configuring the S3 Storage Plugin</a>. </p>
 
-<h3 id="defining-access-keys-in-the-drill-core-site.xml-file">Defining Access Keys in the Drill core-site.xml File</h3>
+<h3 id="defining-access-keys-in-the-drill-core-site-xml-file">Defining Access Keys in the Drill core-site.xml File</h3>
 
 <p>To configure the access keys in Drill&#39;s core-site.xml file, navigate to the <code>$DRILL_HOME/conf</code> or <code>$DRILL_SITE</code> directory, and rename the core-site-example.xml file to core-site.xml. Replace the text <code>ENTER_YOUR_ACESSKEY</code> and <code>ENTER_YOUR_SECRETKEY</code> with your AWS credentials and also include the endpoint, as shown in the following example:   </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
diff --git a/docs/secure-communication-paths/index.html b/docs/secure-communication-paths/index.html
index df29425..3e5dfac 100644
--- a/docs/secure-communication-paths/index.html
+++ b/docs/secure-communication-paths/index.html
@@ -1343,7 +1343,7 @@
 </tr>
 </tbody></table>
 
-<h2 id="java-and-c++-client-to-drillbit">Java and C++ Client to Drillbit</h2>
+<h2 id="java-and-c-client-to-drillbit">Java and C++ Client to Drillbit</h2>
 
 <p>Java (native or JDBC) and C++ (native or ODBC) clients submit queries to Drill. BI tools use the ODBC or JDBC API.</p>
 
diff --git a/docs/sql-extensions/index.html b/docs/sql-extensions/index.html
index 6936007..df0f681 100644
--- a/docs/sql-extensions/index.html
+++ b/docs/sql-extensions/index.html
@@ -1284,7 +1284,7 @@
 
 <p>Drill extends the SELECT statement for reading complex, multi-structured data. The extended CREATE TABLE AS provides the capability to write data of complex/multi-structured data types. Drill extends the <a href="http://drill.apache.org/docs/lexical-structure">lexical rules</a> for working with files and directories, such as using back ticks for including file names, directory names, and reserved words in queries. Drill syntax supports using the file system as a persistent store for q [...]
 
-<h2 id="extensions-for-hive--and-hbase-related-data-sources">Extensions for Hive- and HBase-related Data Sources</h2>
+<h2 id="extensions-for-hive-and-hbase-related-data-sources">Extensions for Hive- and HBase-related Data Sources</h2>
 
 <p>Drill supports Hive and HBase as plug-and-play data sources. Drill can read tables created in Hive that use <a href="/docs/hive-to-drill-data-type-mapping">data types compatible</a> with Drill.  You can query Hive tables without modifications. You can query self-describing data without requiring metadata definitions in the Hive metastore. Primitives, such as JOIN, support columnar operation. </p>
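
A small sketch of the back-tick extension mentioned above, quoting a directory path and the reserved word date (the path is hypothetical):

    select `date`, count(*)
    from dfs.`/tmp/logs/2013`   -- hypothetical directory
    group by `date`;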
 
diff --git a/docs/starting-drill-in-distributed-mode/index.html b/docs/starting-drill-in-distributed-mode/index.html
index 6f99632..4bdeb36 100644
--- a/docs/starting-drill-in-distributed-mode/index.html
+++ b/docs/starting-drill-in-distributed-mode/index.html
@@ -1287,7 +1287,7 @@
   <p class="last"> If you use Drill in embedded mode, do not use the drillbit.sh command.   </p>
 </div>
 
-<h2 id="using-the-drillbit.sh-command">Using the drillbit.sh Command</h2>
+<h2 id="using-the-drillbit-sh-command">Using the drillbit.sh Command</h2>
 
 <p>In addition to starting a Drillbit, you use the <strong>drillbit.sh</strong> command to perform the following tasks:</p>
 
@@ -1299,7 +1299,7 @@
 
 <p>You can use a configuration file to start Drillbits. A configuration file is useful for controlling Drillbits on multiple nodes.</p>
 
-<h3 id="drillbit.sh-command-syntax">drillbit.sh Command Syntax</h3>
+<h3 id="drillbit-sh-command-syntax">drillbit.sh Command Syntax</h3>
 
 <p><code>drillbit.sh [--config &lt;conf-dir&gt;] (start|stop|graceful_stop|status|restart|autorestart)</code></p>
 
diff --git a/docs/starting-the-web-console/index.html b/docs/starting-the-web-console/index.html
index 774c54d..54d0893 100644
--- a/docs/starting-the-web-console/index.html
+++ b/docs/starting-the-web-console/index.html
@@ -1291,7 +1291,7 @@ Use this URL when HTTPS support is enabled.<br></li>
 Use  this URL when running ./drill-embedded.</li>
 </ul>
 
-<h2 id="drill-1.2-and-later">Drill 1.2 and Later</h2>
+<h2 id="drill-1-2-and-later">Drill 1.2 and Later</h2>
 
 <p>If <a href="/docs/configuring-user-authentication/">user authentication</a> is not enabled, all the Web Console controls appear to users as well as administrators:  </p>
 
diff --git a/docs/string-distance-functions/index.html b/docs/string-distance-functions/index.html
index 9fe934f..cd14d93 100644
--- a/docs/string-distance-functions/index.html
+++ b/docs/string-distance-functions/index.html
@@ -1305,31 +1305,31 @@
 
 <p>The following sections describe each of the string distance functions that Drill supports.   </p>
 
-<h3 id="cosine_distance(string1,string2)">cosine_distance(string1,string2)</h3>
+<h3 id="cosine_distance-string1-string2">cosine_distance(string1,string2)</h3>
 
 <p>Calculates the cosine distance between two strings.  </p>
 
-<h3 id="fuzzy_score(string1,string2)">fuzzy_score(string1,string2)</h3>
+<h3 id="fuzzy_score-string1-string2">fuzzy_score(string1,string2)</h3>
 
 <p>Calculates a fuzzy match score between two strings, using a matching algorithm similar to the searching algorithms implemented in editors such as Sublime Text, TextMate, Atom, and others. One point is given for every matched character. Subsequent matches yield two bonus points. A higher score indicates a higher similarity. </p>
 
-<h3 id="hamming_distance-(string1,string2)">hamming_distance (string1,string2)</h3>
+<h3 id="hamming_distance-string1-string2">hamming_distance (string1,string2)</h3>
 
 <p>The hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. For further explanation about the Hamming Distance, refer to <a href="http://en.wikipedia.org/wiki/Hamming_distance">http://en.wikipedia.org/wiki/Hamming_distance</a>.   </p>
 
-<h3 id="jaccard_distance-(string1,string2)">jaccard_distance (string1,string2)</h3>
+<h3 id="jaccard_distance-string1-string2">jaccard_distance (string1,string2)</h3>
 
 <p>Measures the Jaccard distance between two character sequences. <a href="https://en.wikipedia.org/wiki/Jaccard_index">Jaccard distance</a> is the dissimilarity between two sets; it is the complement of Jaccard similarity.   </p>
 
-<h3 id="jaro_distance-(string1,string2)">jaro_distance (string1,string2)</h3>
+<h3 id="jaro_distance-string1-string2">jaro_distance (string1,string2)</h3>
 
 <p>A similarity algorithm indicating the percentage of matched characters between two character sequences. The Jaro measure is the weighted sum of percentage of matched characters from each string and transposed characters. Winkler increased this measure for matching initial characters. This implementation is based on the <a href="https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance">Jaro Winkler similarity algorithm</a>.  </p>
 
-<h3 id="levenshtein_distance-(string1,string2)">levenshtein_distance (string1,string2)</h3>
+<h3 id="levenshtein_distance-string1-string2">levenshtein_distance (string1,string2)</h3>
 
 <p>An algorithm for measuring the difference between two character sequences. This is the number of changes needed to change one sequence into another, where each change is a single character modification (deletion, insertion, or substitution).</p>
 
-<h3 id="longest_common_substring_distance(string1,string2)">longest_common_substring_distance(string1,string2)</h3>
+<h3 id="longest_common_substring_distance-string1-string2">longest_common_substring_distance(string1,string2)</h3>
 
 <p>Returns the length of the longest sub-sequence that two strings have in common.
Two strings that are entirely different return a value of 0; a return value equal to the full length of the strings means they are identical in value and position. This implementation is based on the <a href="https://en.wikipedia.org/wiki/Longest_common_subsequence_problem">Longest Common Substring algorithm</a>.  </p>
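
A minimal try-it query using only the function names documented above; the literals are arbitrary:

    select levenshtein_distance('kitten', 'sitting') as edits,
           jaro_distance('Martha', 'Marhta')          as jaro
    from (values(1));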
diff --git a/docs/tableau-examples/index.html b/docs/tableau-examples/index.html
index 0a5e543..c4c5de6 100644
--- a/docs/tableau-examples/index.html
+++ b/docs/tableau-examples/index.html
@@ -1296,7 +1296,7 @@ DSN to a Drill data source and then access the data in Tableau 8.1.</p>
 source data. You define schemas by configuring storage plugins on the Storage
 tab of the <a href="/docs/getting-to-know-the-drill-sandbox/#storage-plugin-overview">Drill Web Console</a>. Also, the examples assume you <a href="/docs/supported-data-types/#enabling-the-decimal-type">enabled the DECIMAL data type</a> in Drill.  </p>
 
-<h2 id="example:-connect-to-a-hive-table-in-tableau">Example: Connect to a Hive Table in Tableau</h2>
+<h2 id="example-connect-to-a-hive-table-in-tableau">Example: Connect to a Hive Table in Tableau</h2>
 
 <p>To access Hive tables in Tableau 8.1, connect to the Hive schema using a DSN
 and then visualize the data in Tableau.<br>
@@ -1307,7 +1307,7 @@ and then visualize the data in Tableau.<br>
 
 <hr>
 
-<h2 id="step-1:-create-a-dsn-to-a-hive-table">Step 1: Create a DSN to a Hive Table</h2>
+<h2 id="step-1-create-a-dsn-to-a-hive-table">Step 1: Create a DSN to a Hive Table</h2>
 
 <p>In this step, we will create a DSN that accesses a Hive table.</p>
 
@@ -1329,7 +1329,7 @@ In this example, we are connecting to a Zookeeper Quorum. Verify that the Cluste
 
 <hr>
 
-<h2 id="step-2:-connect-to-hive-tables-in-tableau">Step 2: Connect to Hive Tables in Tableau</h2>
+<h2 id="step-2-connect-to-hive-tables-in-tableau">Step 2: Connect to Hive Tables in Tableau</h2>
 
 <p>Now, we can connect to Hive tables.</p>
 
@@ -1355,7 +1355,7 @@ configure the connection to the Hive table and click <strong>OK</strong>.</li>
 
 <hr>
 
-<h2 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h2>
+<h2 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h2>
 
 <p>Once you connect to the data, the columns appear in the Data window. To
 visualize the data, drag fields from the Data window to the workspace view.</p>
@@ -1364,7 +1364,7 @@ visualize the data, drag fields from the Data window to the workspace view.</p>
 
 <p><img src="/docs/img/student_hive.png" alt=""></p>
 
-<h2 id="example:-connect-to-self-describing-data-in-tableau">Example: Connect to Self-Describing Data in Tableau</h2>
+<h2 id="example-connect-to-self-describing-data-in-tableau">Example: Connect to Self-Describing Data in Tableau</h2>
 
 <p>You can connect to self-describing data in Tableau in the following ways:</p>
 
@@ -1373,7 +1373,7 @@ visualize the data, drag fields from the Data window to the workspace view.</p>
 <li>Use Tableau’s Custom SQL to query the self-describing data directly. </li>
 </ol>
 
-<h3 id="option-1.-using-a-view-to-connect-to-self-describing-data">Option 1. Using a View to Connect to Self-Describing Data</h3>
+<h3 id="option-1-using-a-view-to-connect-to-self-describing-data">Option 1. Using a View to Connect to Self-Describing Data</h3>
 
 <p>The following example describes how to create a view of an HBase table and
 connect to that view in Tableau 8.1. You can also use these steps to access
@@ -1384,7 +1384,7 @@ data for other sources such as Hive, Parquet, JSON, TSV, and CSV.</p>
   <p class="last">This example assumes that there is a schema named hbase that contains a table named s_voters and a schema named dfs.default that points to a writable location.  </p>
 </div>
 
-<h4 id="step-1.-create-a-view-and-a-dsn">Step 1. Create a View and a DSN</h4>
+<h4 id="step-1-create-a-view-and-a-dsn">Step 1. Create a View and a DSN</h4>
 
 <p>In this step, we will use the ODBC Administrator to access the Drill Explorer
 where we can create a view of an HBase table. Then, we will use the ODBC
@@ -1438,7 +1438,7 @@ view.</p></li>
 <li><p>Click <strong>OK</strong> to close the ODBC Data Source Administrator.</p></li>
 </ol>
 
-<h4 id="step-2.-connect-to-the-view-from-tableau">Step 2. Connect to the View from Tableau</h4>
+<h4 id="step-2-connect-to-the-view-from-tableau">Step 2. Connect to the View from Tableau</h4>
 
 <p>Now, we can connect to the view in Tableau.</p>
 
@@ -1461,7 +1461,7 @@ view.</p></li>
 <li>In the <em>Data Connection dialog</em>, click <strong>Connect Live</strong>.</li>
 </ol>
 
-<h4 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
+<h4 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
 
 <p>Once you connect to the data in Tableau, the columns appear in the Data
 window. To visualize the data, drag fields from the Data window to the
@@ -1471,7 +1471,7 @@ workspace view.</p>
 
 <p><img src="/docs/img/VoterContributions_hbaseview.png" alt=""></p>
 
-<h3 id="option-2.-using-custom-sql-to-access-self-describing-data">Option 2. Using Custom SQL to Access Self-Describing Data</h3>
+<h3 id="option-2-using-custom-sql-to-access-self-describing-data">Option 2. Using Custom SQL to Access Self-Describing Data</h3>
 
 <p>The following example describes how to use custom SQL to connect to a Parquet
 file and then visualize the data in Tableau 8.1. You can use the same steps to
@@ -1482,7 +1482,7 @@ access data from other sources such as Hive, HBase, JSON, TSV, and CSV.</p>
   <p class="last">This example assumes that there is a schema named dfs.default which contains a parquet file named region.parquet.  </p>
 </div>
 
-<h4 id="step-1.-create-a-dsn-to-the-parquet-file-and-preview-the-data">Step 1. Create a DSN to the Parquet File and Preview the Data</h4>
+<h4 id="step-1-create-a-dsn-to-the-parquet-file-and-preview-the-data">Step 1. Create a DSN to the Parquet File and Preview the Data</h4>
 
 <p>In this step, we will create a DSN that accesses files on the DFS. We will
 also use Drill Explorer to preview the SQL that we want to use to connect to
@@ -1514,11 +1514,11 @@ The SQL tab will include a default query to the file you selected on the Browse
 You can copy this query to file so that you can use it in Tableau.</li>
 <li>Close the Drill Explorer window. </li>
 </ol></li>
-<li>Click <strong>OK</strong> to create the DSN and return to the <em>ODBC Data Source Administrato</em>r window.</li>
+<li>Click <strong>OK</strong> to create the DSN and return to the <em>ODBC Data Source Administrator</em> window.</li>
 <li>Click <strong>OK</strong> to close the ODBC Data Source Administrator.</li>
 </ol>
 
-<h4 id="step-2.-connect-to-a-parquet-file-in-tableau-using-custom-sql">Step 2. Connect to a Parquet File in Tableau using Custom SQL</h4>
+<h4 id="step-2-connect-to-a-parquet-file-in-tableau-using-custom-sql">Step 2. Connect to a Parquet File in Tableau using Custom SQL</h4>
 
 <p>Now, we can create a connection to the Parquet file using the custom SQL.</p>
 
@@ -1547,7 +1547,7 @@ You can copy this query to file so that you can use it in Tableau.</li>
 <li><p>In the <em>Data Connection dialog</em>, click <strong>Connect Live</strong>.</p></li>
 </ol>
 
-<h4 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
+<h4 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
 
 <p>Once you connect to the data, the fields appear in the Data window. To
 visualize the data, drag fields from the Data window to the workspace view.</p>
diff --git a/docs/troubleshooting/index.html b/docs/troubleshooting/index.html
index 561551f..73ed671 100644
--- a/docs/troubleshooting/index.html
+++ b/docs/troubleshooting/index.html
@@ -1391,7 +1391,7 @@ Symptom:   </p>
 </ul></li>
 </ul>
 
-<h3 id="access-nested-fields-without-table-name/alias">Access Nested Fields without Table Name/Alias</h3>
+<h3 id="access-nested-fields-without-table-name-alias">Access Nested Fields without Table Name/Alias</h3>
 
 <p>Symptom: </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   SELECT x.y …  
@@ -1465,7 +1465,7 @@ Symptom:   </p>
 <p>Solution: Make sure that the ODBC driver version is compatible with the server version. <a href="/docs/installing-the-odbc-driver">Driver installation instructions</a> include how to check the driver version. 
Turn on ODBC driver debug logging to better understand the failure.  </p>
 
-<h3 id="jdbc/odbc-connection-issues-with-zookeeper">JDBC/ODBC Connection Issues with ZooKeeper</h3>
+<h3 id="jdbc-odbc-connection-issues-with-zookeeper">JDBC/ODBC Connection Issues with ZooKeeper</h3>
 
 <p>Symptom: Client cannot resolve ZooKeeper host names for JDBC/ODBC.</p>
 
@@ -1489,13 +1489,13 @@ Turn on ODBC driver debug logging to better understand failure.  </p>
 
 <p>Solution: Verify that the column alias does not conflict with the storage type. See <a href="/docs/lexical-structure/#case-sensitivity">Lexical Structures</a>.  </p>
 
-<h3 id="list-(array)-contains-null">List (Array) Contains Null</h3>
+<h3 id="list-array-contains-null">List (Array) Contains Null</h3>
 
 <p>Symptom: UNSUPPORTED_OPERATION ERROR: Null values are not supported in lists by default. </p>
 
 <p>Solution: Avoid selecting fields that are arrays containing nulls. Change Drill session settings to enable all_text_mode. Set store.json.all_text_mode to true, so Drill treats JSON null values as a string containing the word &#39;null&#39;.</p>
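
The setting named in the solution can be applied to the current session with a single statement:

    alter session set `store.json.all_text_mode` = true;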
 
-<h3 id="select-count-(*)-takes-a-long-time-to-run">SELECT COUNT (*) Takes a Long Time to Run</h3>
+<h3 id="select-count-takes-a-long-time-to-run">SELECT COUNT (*) Takes a Long Time to Run</h3>
 
 <p>Solution: In some cases, the underlying storage format does not have a built-in capability to return a count of records in a table.  In these cases, Drill does a full scan of the data to verify the number of records.</p>
 
diff --git a/docs/tutorial-develop-a-simple-function/index.html b/docs/tutorial-develop-a-simple-function/index.html
index df1b5dc..513de32 100644
--- a/docs/tutorial-develop-a-simple-function/index.html
+++ b/docs/tutorial-develop-a-simple-function/index.html
@@ -1306,7 +1306,7 @@
 
 <hr>
 
-<h2 id="step-1:-add-dependencies">Step 1: Add dependencies</h2>
+<h2 id="step-1-add-dependencies">Step 1: Add dependencies</h2>
 
 <p>First, add the following Drill dependency to your maven project:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"> <span class="nt">&lt;dependency&gt;</span>
@@ -1317,7 +1317,7 @@
 </code></pre></div>
 <hr>
 
-<h2 id="step-2:-add-annotations-to-the-function-template">Step 2: Add annotations to the function template</h2>
+<h2 id="step-2-add-annotations-to-the-function-template">Step 2: Add annotations to the function template</h2>
 
 <p>To start implementing the DrillSimpleFunc interface, add the following annotations to the @FunctionTemplate declaration:</p>
 
@@ -1353,7 +1353,7 @@
 </code></pre></div>
 <hr>
 
-<h2 id="step-3:-declare-input-parameters">Step 3: Declare input parameters</h2>
+<h2 id="step-3-declare-input-parameters">Step 3: Declare input parameters</h2>
 
 <p>The function will be generated dynamically, as you can see in the <a href="https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillSimpleFuncHolder.java/#L42">DrillSimpleFuncHolder</a>, and the input parameters and output are defined using holder classes and annotations. Define the parameters using the @Param annotation. </p>
 
@@ -1385,7 +1385,7 @@
 
 <hr>
 
-<h2 id="step-4:-declare-the-return-value-type">Step 4: Declare the return value type</h2>
+<h2 id="step-4-declare-the-return-value-type">Step 4: Declare the return value type</h2>
 
 <p>Also, using the @Output annotation, define the returned value as VarCharHolder type. Because you are manipulating a VarChar, you also have to inject a buffer that Drill uses for the output. </p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">SimpleMaskFunc</span> <span class="kd">implements</span> <span class="n">DrillSimpleFunc</span> <span class="o">{</span>
@@ -1400,7 +1400,7 @@
 </code></pre></div>
 <hr>
 
-<h2 id="step-5:-implement-the-eval()-method">Step 5: Implement the eval() method</h2>
+<h2 id="step-5-implement-the-eval-method">Step 5: Implement the eval() method</h2>
 
 <p>The MASK function does not require any setup, so you do not need to define the setup() method. Define only the eval() method. </p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kt">void</span> <span class="nf">eval</span><span class="o">()</span> <span class="o">{</span>
@@ -1434,7 +1434,7 @@
 
 <p>Even to a seasoned Java developer, the eval() method might look a bit strange because Drill generates the final code on the fly to fulfill a query request. This technique leverages Java’s just-in-time (JIT) compiler for maximum speed.</p>
 
-Basic Coding Rules</h2>
+<h2 id="basic-coding-rules">Basic Coding Rules</h2>
 
 <p>To leverage Java’s just-in-time (JIT) compiler for maximum speed, you need to adhere to some basic rules.</p>
 
@@ -1468,13 +1468,13 @@ Basic Coding Rules</h2>
     <span class="nt">&lt;/executions&gt;</span>
 <span class="nt">&lt;/plugin&gt;</span>
 </code></pre></div>
-Add a drill-module.conf File to Resources</h2>
+<h2 id="add-a-drill-module-conf-file-to-resources">Add a drill-module.conf File to Resources</h2>
 
 <p>Add a <code>drill-module.conf</code> file in the resources folder of your project. The presence of this file tells Drill that your jar contains a custom function. Put the following line in the <code>drill-module.conf</code> file:</p>
 
 <p><code>drill.classpath.scanning.packages += &quot;org.apache.drill.contrib.function&quot;</code></p>
 
-Build and Deploy the Function</h2>
+<h2 id="build-and-deploy-the-function">Build and Deploy the Function</h2>
 
 <p>Build the function using <code>mvn package</code>:</p>
 
@@ -1493,7 +1493,7 @@ Build and Deploy the Function</h2>
 
 <p><strong>Note:</strong> This tutorial shows the manual method for adding JAR files to Drill; however, as of Drill 1.9, the Dynamic UDF feature provides an alternative registration method that does not require restarting Drill.</p>
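
 <p>As a rough sketch of the Dynamic UDF route (the JAR name here is hypothetical): copy the function&#39;s binary and source JARs to the DFS directory configured for dynamic UDF staging, and then register the functions from any Drill client:</p>

 <div class="highlight"><pre><code class="language-text" data-lang="text">CREATE FUNCTION USING JAR &#39;drill-simple-mask-1.0.jar&#39;;
 </code></pre></div>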
 
-Test the New Function</h2>
+<h2 id="test-the-new-function">Test the New Function</h2>
 
 <p>Restart Drill and run the following query on the <a href="/docs/querying-json-files/"><code>employee.json</code></a> file installed with Drill:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT MASK(first_name, &#39;*&#39; , 3) FIRST , MASK(last_name, &#39;#&#39;, 7) LAST  FROM cp.`employee.json` LIMIT 5;
diff --git a/docs/useful-research/index.html b/docs/useful-research/index.html
index adc846f..cbb1649 100644
--- a/docs/useful-research/index.html
+++ b/docs/useful-research/index.html
@@ -1315,7 +1315,7 @@
 <li>Design Proposal for Drill: <a href="http://www.slideshare.net/CamuelGilyadov/apache-drill-14071739">http://www.slideshare.net/CamuelGilyadov/apache-drill-14071739</a></li>
 </ul>
 
-<h2 id="dazo-(second-generation-opendremel)">Dazo (second generation OpenDremel)</h2>
+<h2 id="dazo-second-generation-opendremel">Dazo (second generation OpenDremel)</h2>
 
 <ul>
 <li>Dazo repos: <a href="https://github.com/Dazo-org">https://github.com/Dazo-org</a></li>
@@ -1329,7 +1329,7 @@
 <li><a href="https://github.com/rgrzywinski/field-stripe/">https://github.com/rgrzywinski/field-stripe/</a></li>
 </ul>
 
-Code generation / Physical plan generation</h2>
+<h2 id="code-generation-physical-plan-generation">Code generation / Physical plan generation</h2>
 
 <ul>
 <li><a href="http://www.vldb.org/pvldb/vol4/p539-neumann.pdf">http://www.vldb.org/pvldb/vol4/p539-neumann.pdf</a> (SLIDES: <a href="http://www.vldb.org/2011/files/slides/research9/rSession9-3.pdf">http://www.vldb.org/2011/files/slides/research9/rSession9-3.pdf</a>)</li>
diff --git a/docs/using-apache-drill-with-tableau-10-2/index.html b/docs/using-apache-drill-with-tableau-10-2/index.html
index f758b8c..589f394 100644
--- a/docs/using-apache-drill-with-tableau-10-2/index.html
+++ b/docs/using-apache-drill-with-tableau-10-2/index.html
@@ -1309,7 +1309,7 @@
 2.  <a href="/docs/using-apache-drill-with-tableau-10-2/#step-2-connect-tableau-to-drill">Connect Tableau to Drill (using the Apache Drill Data Connector).</a><br>
 3.  <a href="/docs/using-apache-drill-with-tableau-10-2/#step-3-query-and-analyze-the-data">Query and Analyze the Data (various data formats with Tableau and Drill).</a>  </p>
 
-<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide you with easy data exploration capabilities on complex, schema-less data sets. </p>
 
@@ -1327,7 +1327,7 @@
 
 <p><strong>Important:</strong> Verify that the Tableau client system can resolve the hostnames for the Drill and Zookeeper nodes correctly. See the <em>System Requirements</em> section of the ODBC <a href="http://drill.apache.org/docs/installing-the-driver-on-mac-os-x/">Mac</a> or <a href="http://drill.apache.org/docs/installing-the-driver-on-windows/">Windows</a> installation page for instructions.  </p>
 
-<h2 id="step-2:-connect-tableau-to-drill">Step 2: Connect Tableau to Drill</h2>
+<h2 id="step-2-connect-tableau-to-drill">Step 2: Connect Tableau to Drill</h2>
 
 <p>To connect Tableau to Drill, complete the following steps:</p>
 
@@ -1345,7 +1345,7 @@
 
 <p><strong>Note:</strong> Tableau can natively work with Hive tables and Drill views. You can use custom SQL or create a view in Drill to represent the complex data in Drill data sources, such as data in files or HBase/MapR-DB tables, to Tableau. For more information, see <a href="http://drill.apache.org/docs/tableau-examples/">Tableau Examples</a>.  </p>
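
 <p>As a sketch of the view approach (paths and field names here are hypothetical), a Drill view can project nested JSON fields into the flat, tabular shape that Tableau expects:</p>

 <div class="highlight"><pre><code class="language-text" data-lang="text">CREATE VIEW dfs.tmp.orders_vw AS
 SELECT t.trans_id AS trans_id,
        t.user_info.cust_id AS cust_id
 FROM dfs.`/data/orders.json` t;
 </code></pre></div>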
 
-<h2 id="step-3:-query-and-analyze-the-data">Step 3: Query and Analyze the Data</h2>
+<h2 id="step-3-query-and-analyze-the-data">Step 3: Query and Analyze the Data</h2>
 
 <p>Tableau can now use Drill to query various data sources and visualize the information, as shown in the following example.  </p>
 
diff --git a/docs/using-apache-drill-with-tableau-9-desktop/index.html b/docs/using-apache-drill-with-tableau-9-desktop/index.html
index c6c5a0e..06bf3f1 100644
--- a/docs/using-apache-drill-with-tableau-9-desktop/index.html
+++ b/docs/using-apache-drill-with-tableau-9-desktop/index.html
@@ -1291,7 +1291,7 @@
 <li>Query and analyze various data formats with Tableau and Drill.</li>
 </ol>
 
-<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience, use the latest release of Apache Drill. For Tableau 9.0 Desktop, Drill version 0.9 or higher is recommended.</p>
 
@@ -1309,7 +1309,7 @@
 
 <p>Also make sure to test the ODBC connection to Drill before using it with Tableau.</p>
 
-<h2 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h2>
+<h2 id="step-2-install-the-tableau-data-connection-customization-tdc-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h2>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau. The MapR Drill ODBC driver installer automatically installs the TDC file if the installer can find the Tableau installation. If you installed the MapR Drill ODBC driver first and then installed Tableau, the TDC file is not installed automatically. You must install the TDC file manually. </p>
 
@@ -1326,7 +1326,7 @@ For example, you can press the SPACEBAR key.</p></li>
 
 <p>If the installation of the TDC file fails, it is likely because your Tableau repository is in a location other than the default one. In this case, manually copy the My Tableau Repository to C:\Users\&lt;user&gt;\Documents\My Tableau Repository, and then repeat the procedure to install the MapRDrillODBC.TDC file manually.</p>
 
-<h2 id="step-3:-connect-tableau-to-drill-via-odbc">Step 3: Connect Tableau to Drill via ODBC</h2>
+<h2 id="step-3-connect-tableau-to-drill-via-odbc">Step 3: Connect Tableau to Drill via ODBC</h2>
 
 <p>Complete the following steps to configure an ODBC data connection: </p>
 
@@ -1352,7 +1352,7 @@ Tableau is now connected to Drill, and you can select various tables and views.
 
 <p>Note: If Drill authentication and impersonation are enabled, only the views that the user has access to are displayed in the Table dialog box. Also, if custom SQL is used to try to access data sources that the user does not have access to, an error message is displayed. <img src="/docs/img/tableau-error.png" alt="drill query flow"></p>
 
-<h2 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
+<h2 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
 
 <p>Tableau Desktop can now use Drill to query various data sources and visualize the information.</p>
 
diff --git a/docs/using-apache-drill-with-tableau-9-server/index.html b/docs/using-apache-drill-with-tableau-9-server/index.html
index f888ab3..d5b3820 100644
--- a/docs/using-apache-drill-with-tableau-9-server/index.html
+++ b/docs/using-apache-drill-with-tableau-9-server/index.html
@@ -1290,7 +1290,7 @@
 <li> Publish Tableau visualizations and data sources from Tableau Desktop to Tableau Server for collaboration.</li>
 </ol>
 
-<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience, use the latest release of Apache Drill. For Tableau 9.0 Server, Drill version 0.9 or higher is recommended.</p>
 
@@ -1308,7 +1308,7 @@
 
 <p>Also make sure to test the ODBC connection to Drill before using it with Tableau.</p>
 
-<h2 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h2>
+<h2 id="step-2-install-the-tableau-data-connection-customization-tdc-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h2>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau.</p>
 
@@ -1319,7 +1319,7 @@
 
 <p>For more information about Tableau TDC configuration, see <a href="http://kb.tableau.com/articles/knowledgebase/customizing-odbc-connections">Customizing and Tuning ODBC Connections</a>.</p>
 
-<h2 id="step-3:-publish-tableau-visualizations-and-data-sources">Step 3: Publish Tableau Visualizations and Data Sources</h2>
+<h2 id="step-3-publish-tableau-visualizations-and-data-sources">Step 3: Publish Tableau Visualizations and Data Sources</h2>
 
 <p>For collaboration purposes, you can now use Tableau Desktop to publish data sources and visualizations on Tableau Server.</p>
 
diff --git a/docs/using-information-builders-webfocus-with-apache-drill/index.html b/docs/using-information-builders-webfocus-with-apache-drill/index.html
index 9b7b9b5..b2d550e 100644
--- a/docs/using-information-builders-webfocus-with-apache-drill/index.html
+++ b/docs/using-information-builders-webfocus-with-apache-drill/index.html
@@ -1294,7 +1294,7 @@
 
 <p>Drill 1.2 or later</p>
 
-<h2 id="step-1:-install-the-apache-drill-jdbc-driver.">Step 1: Install the Apache Drill JDBC driver.</h2>
+<h2 id="step-1-install-the-apache-drill-jdbc-driver">Step 1: Install the Apache Drill JDBC driver.</h2>
 
 <p>Drill provides JDBC connectivity that easily integrates with WebFOCUS. See <a href="https://drill.apache.org/docs/using-the-jdbc-driver/">/docs/using-the-jdbc-driver/</a> for general installation steps.  </p>
 
@@ -1310,7 +1310,7 @@ The following example shows the driver JAR file copied to a directory on a Linux
 <code>/usr/lib/drill-1.4.0/jdbc-driver/drill-jdbc-all-1.4.0.jar</code></li>
 </ol>
 
-<h2 id="step-2:-configure-the-webfocus-adapter-and-connections-to-drill.">Step 2: Configure the WebFOCUS adapter and connections to Drill.</h2>
+<h2 id="step-2-configure-the-webfocus-adapter-and-connections-to-drill">Step 2: Configure the WebFOCUS adapter and connections to Drill.</h2>
 
 <ol>
 <li>From a web browser, access the WebFOCUS Management Console. The WebFOCUS administrator provides you with the URL information: <code>http://hostname:port/</code><br>
@@ -1329,7 +1329,7 @@ The Apache Drill adapter appears in the list.<br>
 Now you can use the WebFOCUS adapter and connection or create additional connections.</p></li>
 </ol>
 
-<h2 id="(optional)-step-3:-create-additional-drill-connections.">(Optional) Step 3: Create additional Drill connections.</h2>
+<h2 id="optional-step-3-create-additional-drill-connections">(Optional) Step 3: Create additional Drill connections.</h2>
 
 <p>Complete the following steps to create additional connections:  </p>
 
diff --git a/docs/using-jdbc-with-squirrel-on-windows/index.html b/docs/using-jdbc-with-squirrel-on-windows/index.html
index fabce94..b018311 100644
--- a/docs/using-jdbc-with-squirrel-on-windows/index.html
+++ b/docs/using-jdbc-with-squirrel-on-windows/index.html
@@ -1299,7 +1299,7 @@
 
 <hr>
 
-<h2 id="step-1:-getting-the-drill-jdbc-driver">Step 1: Getting the Drill JDBC Driver</h2>
+<h2 id="step-1-getting-the-drill-jdbc-driver">Step 1: Getting the Drill JDBC Driver</h2>
 
 <p>The Drill JDBC Driver <code>JAR</code> file must exist in a directory on your Windows
 machine in order to configure the driver in the SQuirreL client.</p>
@@ -1314,7 +1314,7 @@ machine:</p>
 </code></pre></div>
 <hr>
 
-<h2 id="step-2:-installing-and-starting-squirrel">Step 2: Installing and Starting SQuirreL</h2>
+<h2 id="step-2-installing-and-starting-squirrel">Step 2: Installing and Starting SQuirreL</h2>
 
 <p>To install and start SQuirreL, complete the following steps:</p>
 
@@ -1327,14 +1327,14 @@ machine:</p>
 
 <hr>
 
-<h2 id="step-3:-adding-the-drill-jdbc-driver-to-squirrel">Step 3: Adding the Drill JDBC Driver to SQuirreL</h2>
+<h2 id="step-3-adding-the-drill-jdbc-driver-to-squirrel">Step 3: Adding the Drill JDBC Driver to SQuirreL</h2>
 
 <p>To add the Drill JDBC Driver to SQuirreL, define the driver and create a
 database alias. The alias is a specific instance of the driver configuration.
 SQuirreL uses the driver definition and alias to connect to Drill so you can
 access data sources that you have registered with Drill.</p>
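
 <p>The alias you create below uses a Drill JDBC connection URL. The ZooKeeper-based form looks like the following sketch (the host, port, and cluster ID are site-specific examples, not fixed values):</p>

 <div class="highlight"><pre><code class="language-text" data-lang="text">jdbc:drill:zk=localhost:2181/drill/drillbits1
 </code></pre></div>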
 
-<h3 id="a.-define-the-driver">A. Define the Driver</h3>
+<h3 id="a-define-the-driver">A. Define the Driver</h3>
 
 <p>To define the Drill JDBC Driver, complete the following steps:</p>
 
@@ -1376,7 +1376,7 @@ access data sources that you have registered with Drill.</p>
 
 <p><img src="/docs/img/52.png" alt="drill query flow"></p>
 
-<h3 id="b.-create-an-alias">B. Create an Alias</h3>
+<h3 id="b-create-an-alias">B. Create an Alias</h3>
 
 <p>To create an alias, complete the following steps:</p>
 
@@ -1427,7 +1427,7 @@ access data sources that you have registered with Drill.</p>
 
 <hr>
 
-<h2 id="step-4:-running-a-drill-query-from-squirrel">Step 4: Running a Drill Query from SQuirreL</h2>
+<h2 id="step-4-running-a-drill-query-from-squirrel">Step 4: Running a Drill Query from SQuirreL</h2>
 
 <p>Once you have SQuirreL successfully connected to your cluster through the
 Drill JDBC Driver, you can issue queries from the SQuirreL client. You can run
diff --git a/docs/using-microstrategy-analytics-with-apache-drill/index.html b/docs/using-microstrategy-analytics-with-apache-drill/index.html
index ddb9c79..40d97c7 100644
--- a/docs/using-microstrategy-analytics-with-apache-drill/index.html
+++ b/docs/using-microstrategy-analytics-with-apache-drill/index.html
@@ -1293,7 +1293,7 @@
 
 <hr>
 
-<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download corresponds to the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1329,7 +1329,7 @@
 
 <hr>
 
-<h2 id="step-2:-install-the-drill-object-on-microstrategy-analytics-enterprise">Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise</h2>
+<h2 id="step-2-install-the-drill-object-on-microstrategy-analytics-enterprise">Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise</h2>
 
 <p>The steps listed in this section are based on the MicroStrategy Technote for installing DBMS objects, which you can reference at: </p>
 
@@ -1362,7 +1362,7 @@
 
 <hr>
 
-<h2 id="step-3:-create-the-microstrategy-database-connection-for-apache-drill">Step 3: Create the MicroStrategy database connection for Apache Drill</h2>
+<h2 id="step-3-create-the-microstrategy-database-connection-for-apache-drill">Step 3: Create the MicroStrategy database connection for Apache Drill</h2>
 
 <p>Complete the following steps to use the Database Instance Wizard to create the MicroStrategy database connection for Apache Drill:</p>
 
@@ -1381,7 +1381,7 @@
 
 <hr>
 
-<h2 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
+<h2 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
 
 <p>This step includes an example scenario that shows you how to use MicroStrategy, with Drill as the database instance, to analyze Twitter data stored as complex JSON documents. </p>
 
@@ -1389,7 +1389,7 @@
 
 <p>The Drill distributed file system plugin is configured to read Twitter data in a directory structure. A view is created in Drill to capture the most relevant maps and nested maps and arrays for the Twitter JSON documents. Refer to <a href="/docs/query-data-introduction/">Query Data</a> for more information about how to configure and use Drill to work with complex data.</p>
 
-<h3 id="part-1:-create-a-project">Part 1: Create a Project</h3>
+<h3 id="part-1-create-a-project">Part 1: Create a Project</h3>
 
 <p>Complete the following steps to create a project:</p>
 
@@ -1407,7 +1407,7 @@
 <li> Click <strong>OK</strong>. The new project is created in MicroStrategy Developer. </li>
 </ol>
 
-<h3 id="part-2:-create-a-freeform-report-to-analyze-data">Part 2: Create a Freeform Report to Analyze Data</h3>
+<h3 id="part-2-create-a-freeform-report-to-analyze-data">Part 2: Create a Freeform Report to Analyze Data</h3>
 
 <p>Complete the following steps to create a Freeform Report and analyze data:</p>
 
diff --git a/docs/using-qlik-sense-with-drill/index.html b/docs/using-qlik-sense-with-drill/index.html
index 3480f2a..0f53aee 100644
--- a/docs/using-qlik-sense-with-drill/index.html
+++ b/docs/using-qlik-sense-with-drill/index.html
@@ -1301,7 +1301,7 @@
 <li> Qlik Sense installed. See <a href="http://www.qlik.com/us/explore/products/sense">Qlik Sense</a>.</li>
 </ul>
 
-<h2 id="step-1:-install-and-configure-the-drill-odbc-driver">Step 1: Install and Configure the Drill ODBC Driver</h2>
+<h2 id="step-1-install-and-configure-the-drill-odbc-driver">Step 1: Install and Configure the Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download corresponds to the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1313,7 +1313,7 @@
 <li><a href="/docs/configuring-odbc-on-windows">Configure ODBC</a>.</li>
 </ol>
 
-<h2 id="step-2:-configure-a-connection-in-qlik-sense">Step 2: Configure a Connection in Qlik Sense</h2>
+<h2 id="step-2-configure-a-connection-in-qlik-sense">Step 2: Configure a Connection in Qlik Sense</h2>
 
 <p>Once you create an ODBC DSN, it shows up as another option when you create a connection from a new or existing Qlik Sense application. The steps for creating a connection from an application are the same in Qlik Sense Desktop and Qlik Sense Server. </p>
 
@@ -1327,7 +1327,7 @@
 <img src="/docs/img/step3_img1.png" alt=""></li>
 </ol>
 
-<h2 id="step-3:-authenticate">Step 3: Authenticate</h2>
+<h2 id="step-3-authenticate">Step 3: Authenticate</h2>
 
 <p>After providing the credentials and saving the connection, click <strong>Select</strong> in the new connection to trigger the authentication against Drill.  </p>
 
@@ -1341,7 +1341,7 @@
 
 <p><img src="/docs/img/step4_img3.png" alt=""></p>
 
-<h2 id="step-4:-select-tables-and-load-the-data-model">Step 4: Select Tables and Load the Data Model</h2>
+<h2 id="step-4-select-tables-and-load-the-data-model">Step 4: Select Tables and Load the Data Model</h2>
 
 <p>Explore the various tables available in Drill, and select the tables of interest. For each table selected, Qlik Sense shows a preview of the logic used for the table.  </p>
 
@@ -1374,7 +1374,7 @@
 
 <p><img src="/docs/img/step5_img5.png" alt="">  </p>
 
-<h2 id="step-5:-analyze-data-with-qlik-sense-and-drill">Step 5: Analyze Data with Qlik Sense and Drill</h2>
+<h2 id="step-5-analyze-data-with-qlik-sense-and-drill">Step 5: Analyze Data with Qlik Sense and Drill</h2>
 
 <p>After the data model is loaded into the application, use Qlik Sense to build a wide range of visualizations on top of the data that Drill delivers via ODBC. Qlik Sense specializes in self-service data visualization at the point of decision.  </p>
 
diff --git a/docs/using-saiku-analytics-with-apache-drill/index.html b/docs/using-saiku-analytics-with-apache-drill/index.html
index 441fb70..f961092 100644
--- a/docs/using-saiku-analytics-with-apache-drill/index.html
+++ b/docs/using-saiku-analytics-with-apache-drill/index.html
@@ -1324,7 +1324,7 @@
 
 <h2 id="select-datasource-and-load-the-data-model">Select datasource and load the data model</h2>
 
-<h3 id="what-is-a-schema?">What is a schema?</h3>
+<h3 id="what-is-a-schema">What is a schema?</h3>
 
 <p>A schema in its raw form is an XML document that defines how the data is laid out in your database. Within Saiku you might have multiple schemas, each containing multiple cubes. Within a cube are collections of Dimensions and Measures. The schema allows Saiku to display the UI elements that let users find answers in their data in a drag-and-drop format.</p>
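
 <p>A minimal, illustrative sketch of that structure (Mondrian-style; all names are hypothetical, and real schemas carry additional table mappings and hierarchy details):</p>

 <div class="highlight"><pre><code class="language-xml" data-lang="xml">&lt;Schema name=&quot;SalesAnalysis&quot;&gt;
   &lt;Cube name=&quot;Sales&quot;&gt;
     &lt;Table name=&quot;sales_fact&quot;/&gt;
     &lt;Dimension name=&quot;Region&quot; foreignKey=&quot;region_id&quot;&gt;
       &lt;Hierarchy hasAll=&quot;true&quot; primaryKey=&quot;id&quot;&gt;
         &lt;Table name=&quot;region&quot;/&gt;
         &lt;Level name=&quot;Country&quot; column=&quot;country&quot;/&gt;
       &lt;/Hierarchy&gt;
     &lt;/Dimension&gt;
     &lt;Measure name=&quot;Total Sales&quot; column=&quot;amount&quot; aggregator=&quot;sum&quot;/&gt;
   &lt;/Cube&gt;
 &lt;/Schema&gt;
 </code></pre></div>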
 
diff --git a/docs/using-tibco-spotfire-desktop-with-drill/index.html b/docs/using-tibco-spotfire-desktop-with-drill/index.html
index e03b6dc..e06afb2 100644
--- a/docs/using-tibco-spotfire-desktop-with-drill/index.html
+++ b/docs/using-tibco-spotfire-desktop-with-drill/index.html
@@ -1289,7 +1289,7 @@
 <li> Configure the Spotfire Desktop data connection for Drill.</li>
 </ol>
 
-<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download corresponds to the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1307,7 +1307,7 @@
 
 <hr>
 
-<h2 id="step-2:-configure-the-spotfire-desktop-data-connection-for-drill">Step 2: Configure the Spotfire Desktop Data Connection for Drill</h2>
+<h2 id="step-2-configure-the-spotfire-desktop-data-connection-for-drill">Step 2: Configure the Spotfire Desktop Data Connection for Drill</h2>
 
 <p>Complete the following steps to configure a Drill data connection: </p>
 
diff --git a/docs/value-window-functions/index.html b/docs/value-window-functions/index.html
index d5a2cb0..2db0e7e 100644
--- a/docs/value-window-functions/index.html
+++ b/docs/value-window-functions/index.html
@@ -1320,12 +1320,12 @@
 
 <h2 id="syntax">Syntax</h2>
 
-<h3 id="lag-|-lead">LAG | LEAD</h3>
+<h3 id="lag-lead">LAG | LEAD</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   LAG | LEAD
    ( expression )
    OVER ( [ PARTITION BY expr_list ] [ ORDER BY order_list ] )  
 </code></pre></div>
-<h3 id="first_value-|-last_value">FIRST_VALUE | LAST_VALUE</h3>
+<h3 id="first_value-last_value">FIRST_VALUE | LAST_VALUE</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   FIRST_VALUE | LAST_VALUE
    ( expression ) OVER
    ( [ PARTITION BY expr_list ] [ ORDER BY order_list ][ frame_clause ] )  
@@ -1351,7 +1351,7 @@ The frame clause refines the set of rows in a function&#39;s window, including o
 
 <p>The following examples show queries that use each of the value window functions in Drill.  </p>
 
-<h3 id="lag()">LAG()</h3>
+<h3 id="lag">LAG()</h3>
 
 <p>The following example uses the LAG window function to show the quantity of records sold to the Tower Records customer with customer ID 8  and the dates that customer 8 purchased records. To compare each sale with the previous sale for customer 8, the query returns the previous quantity sold for each sale. Since there is no purchase before 1976-01-25, the first previous quantity sold value is null. Note that the term &quot;date&quot; in the query is enclosed in back ticks because it is [...]
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select cust_id, `date`, qty_sold, lag(qty_sold,1) over (order by cust_id, `date`) as prev_qtysold from sales where cust_id = 8 order by cust_id, `date`;  
@@ -1367,7 +1367,7 @@ The frame clause refines the set of rows in a function&#39;s window, including o
    +----------+-------------+-----------+---------------+
    5 rows selected (0.331 seconds)
 </code></pre></div>
-<h3 id="lead()">LEAD()</h3>
+<h3 id="lead">LEAD()</h3>
 
 <p>The following example uses the LEAD window function to provide the commission for concert tickets with show ID 172 and the next commission for subsequent ticket sales. Since there is no commission after 40.00, the last next_comm value is null. Note that the term &quot;date&quot; in the query is enclosed in back ticks because it is a reserved keyword in Drill.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select show_id, `date`, commission, lead(commission,1) over (order by `date`) as next_comm from commission where show_id = 172;
@@ -1389,7 +1389,7 @@ The frame clause refines the set of rows in a function&#39;s window, including o
    +----------+-------------+-------------+------------+
    12 rows selected (0.241 seconds)
 </code></pre></div>
-<h3 id="first_value()">FIRST_VALUE()</h3>
+<h3 id="first_value">FIRST_VALUE()</h3>
 
 <p>The following example uses the FIRST_VALUE window function to identify the employee with the lowest sales for each dealer in Q1:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, first_value(sales) over (partition by dealer_id order by sales) as dealer_low from q1_sales;
@@ -1409,7 +1409,7 @@ The frame clause refines the set of rows in a function&#39;s window, including o
    +-----------------+------------+--------+-------------+
    10 rows selected (0.299 seconds)
 </code></pre></div>
-<h3 id="last_value()">LAST_VALUE()</h3>
+<h3 id="last_value">LAST_VALUE()</h3>
 
 <p>The following example uses the LAST_VALUE window function to identify the last car sale each employee made at each dealership in 2013:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, `year`, last_value(sales) over (partition by  emp_name order by `year`) as last_sale from emp_sales where `year` = 2013;
diff --git a/docs/why-drill/index.html b/docs/why-drill/index.html
index 4511c0f..5b67351 100644
--- a/docs/why-drill/index.html
+++ b/docs/why-drill/index.html
@@ -1280,7 +1280,7 @@
       
         <h2 id="top-10-reasons-to-use-drill">Top 10 Reasons to Use Drill</h2>
 
-<h2 id="1.-get-started-in-minutes">1. Get started in minutes</h2>
+<h2 id="1-get-started-in-minutes">1. Get started in minutes</h2>
 
 <p>It takes just a few minutes to get started with Drill. Untar the Drill software on your Linux, Mac, or Windows laptop and run a query on a local file. No need to set up any infrastructure or to define schemas. Just point to the data, such as data in a file, directory, HBase table, and drill.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">$ tar -xvf apache-drill-&lt;version&gt;.tar.gz
@@ -1294,11 +1294,11 @@ $ &lt;install directory&gt;/bin/drill-embedded
 | 4            | Michael Spence             | Michael             | Spence        | 2            | VP Country Manager         | 0         | 1              | 1969-06-20  | 1998-01-01 00:00:00.0  | 40000.0  | 1              | Graduate Degree      | S               | M       | Senior Management     |
 | 5            | Maya Gutierrez             | Maya                | Gutierrez     | 2            | VP Country Manager         | 0         | 1              | 1951-05-10  | 1998-01-01 00:00:00.0  | 35000.0  | 1              | Bachelors Degree     | M               | F       | Senior Management     |
 </code></pre></div>
-<h2 id="2.-schema-free-json-model">2. Schema-free JSON model</h2>
+<h2 id="2-schema-free-json-model">2. Schema-free JSON model</h2>
 
 <p>Drill is the world&#39;s first and only distributed SQL engine that doesn&#39;t require schemas. It shares the same schema-free JSON model as MongoDB and Elasticsearch. No need to define and maintain schemas or transform data (ETL). Drill automatically understands the structure of the data. </p>
 
-<h2 id="3.-query-complex,-semi-structured-data-in-situ">3. Query complex, semi-structured data in-situ</h2>
+<h2 id="3-query-complex-semi-structured-data-in-situ">3. Query complex, semi-structured data in-situ</h2>
 
 <p>Using Drill&#39;s schema-free JSON model, you can query complex, semi-structured data in situ. No need to flatten or transform the data prior to or during query execution. Drill also provides intuitive extensions to SQL to work with nested data. Here&#39;s a simple query on a JSON file demonstrating how to access nested elements and arrays:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM (SELECT t.trans_id,
@@ -1309,7 +1309,7 @@ WHERE sq.prod_id BETWEEN 700 AND 750 AND
       sq.purchased = &#39;true&#39;
 ORDER BY sq.prod_id;
 </code></pre></div>
-<h2 id="4.-real-sql----not-&quot;sql-like&quot;">4. Real SQL -- not &quot;SQL-like&quot;</h2>
+<h2 id="4-real-sql-not-sql-like">4. Real SQL -- not &quot;SQL-like&quot;</h2>
 
 <p>Drill supports the standard SQL:2003 syntax. No need to learn a new &quot;SQL-like&quot; language or struggle with a semi-functional BI tool. Drill supports many data types including DATE, INTERVAL, TIMESTAMP, and VARCHAR, as well as complex query constructs such as correlated sub-queries and joins in WHERE clauses. Here is an example of a TPC-H standard query that runs in Drill:</p>
 
@@ -1326,11 +1326,11 @@ WHERE o.o_orderdate &gt;= DATE &#39;1996-10-01&#39;
       GROUP BY o.o_orderpriority
       ORDER BY o.o_orderpriority;
 </code></pre></div>
-<h2 id="5.-leverage-standard-bi-tools">5. Leverage standard BI tools</h2>
+<h2 id="5-leverage-standard-bi-tools">5. Leverage standard BI tools</h2>
 
 <p>Drill works with standard BI tools. You can use your existing tools, such as Tableau, MicroStrategy, QlikView and Excel. </p>
 
-<h2 id="6.-interactive-queries-on-hive-tables">6. Interactive queries on Hive tables</h2>
+<h2 id="6-interactive-queries-on-hive-tables">6. Interactive queries on Hive tables</h2>
 
 <p>Apache Drill lets you leverage your investments in Hive. You can run interactive queries with Drill on your Hive tables and access all Hive input/output formats (including custom SerDes). You can join tables associated with different Hive metastores, and you can join a Hive table with an HBase table or a directory of log files. Here&#39;s a simple query in Drill on a Hive table:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT `month`, state, sum(order_total) AS sales
@@ -1338,7 +1338,7 @@ FROM hive.orders
 GROUP BY `month`, state
 ORDER BY 3 DESC LIMIT 5;
 </code></pre></div>
-<h2 id="7.-access-multiple-data-sources">7. Access multiple data sources</h2>
+<h2 id="7-access-multiple-data-sources">7. Access multiple data sources</h2>
 
 <p>Drill is extensible. You can connect Drill out-of-the-box to file systems (local or distributed, such as S3 and HDFS), HBase and Hive. You can implement a storage plugin to make Drill work with any other data source. Drill can combine data from multiple data sources on the fly in a single query, with no centralized metadata definitions. Here&#39;s a query that combines data from a Hive table, an HBase table (view) and a JSON file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT custview.membership, sum(orders.order_total) AS sales
@@ -1347,15 +1347,15 @@ WHERE orders.cust_id = custview.cust_id AND orders.cust_id = c.user_info.cust_id
 GROUP BY custview.membership
 ORDER BY 2;
 </code></pre></div>
-<h2 id="8.-user-defined-functions-(udfs)-for-drill-and-hive">8. User-Defined Functions (UDFs) for Drill and Hive</h2>
+<h2 id="8-user-defined-functions-udfs-for-drill-and-hive">8. User-Defined Functions (UDFs) for Drill and Hive</h2>
 
 <p>Drill exposes a simple, high-performance Java API to build <a href="/docs/develop-custom-functions/">custom user-defined functions</a> (UDFs) for adding your own business logic to Drill.  Drill also supports Hive UDFs. If you have already built UDFs in Hive, you can reuse them with Drill with no modifications. </p>
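
 <p>As a rough sketch of that API&#39;s shape (the function and class names here are illustrative; see the custom functions documentation for a complete walk-through):</p>

 <div class="highlight"><pre><code class="language-java" data-lang="java">@FunctionTemplate(name = &quot;my_func&quot;, scope = FunctionTemplate.FunctionScope.SIMPLE,
                   nulls = FunctionTemplate.NullHandling.NULL_IF_NULL)
 public class MyFunc implements DrillSimpleFunc {
   @Param  VarCharHolder input;   // function argument
   @Output VarCharHolder out;     // returned value
   @Inject DrillBuf buffer;       // output buffer supplied by Drill

   public void setup() { }        // one-time initialization
   public void eval()  { /* per-row business logic */ }
 }
 </code></pre></div>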
 
-<h2 id="9.-high-performance">9. High performance</h2>
+<h2 id="9-high-performance">9. High performance</h2>
 
 <p>Drill is designed from the ground up for high throughput and low latency. It doesn&#39;t use a general purpose execution engine like MapReduce, Tez or Spark. As a result, Drill is flexible (schema-free JSON model) and performant. Drill&#39;s optimizer leverages rule- and cost-based techniques, as well as data locality and operator push-down, which is the capability to push down query fragments into the back-end data sources. Drill also provides a columnar and vectorized execution engi [...]
 
-<h2 id="10.-scales-from-a-single-laptop-to-a-1000-node-cluster">10. Scales from a single laptop to a 1000-node cluster</h2>
+<h2 id="10-scales-from-a-single-laptop-to-a-1000-node-cluster">10. Scales from a single laptop to a 1000-node cluster</h2>
 
 <p>Drill is available as a simple download you can run on your laptop. When you&#39;re ready to analyze larger datasets, deploy Drill on your Hadoop cluster (up to 1000 commodity servers). Drill leverages the aggregate memory in the cluster to execute queries using an optimistic pipelined model, and automatically spills to disk when the working set doesn&#39;t fit in memory.</p>
 
diff --git a/docs/workspaces/index.html b/docs/workspaces/index.html
index 32986be..825b3fe 100644
--- a/docs/workspaces/index.html
+++ b/docs/workspaces/index.html
@@ -1329,7 +1329,7 @@ location of the data:</p>
 
 <p><code>&lt;plugin&gt;.&lt;workspace name&gt;.`&lt;location&gt;`</code></p>
 
-<h2 id="overriding-dfs.default">Overriding <code>dfs.default</code></h2>
+<h2 id="overriding-dfs-default">Overriding <code>dfs.default</code></h2>
 
 <p>You may want to override the hidden default workspace in scenarios where users do not have permissions to access the root directory. 
 Add the following workspace entry to the <code>dfs</code> storage plugin configuration to override the default workspace:</p>
diff --git a/faq/index.html b/faq/index.html
index f0be137..08d0d5e 100644
--- a/faq/index.html
+++ b/faq/index.html
@@ -122,11 +122,11 @@
 
 <div class="int_text" align="left"><h2 id="overview">Overview</h2>
 
-<h3 id="why-drill?">Why Drill?</h3>
+<h3 id="why-drill">Why Drill?</h3>
 
 <p>The 40-year monopoly of the RDBMS is over. With the exponential growth of data in recent years, and the shift towards rapid application development, new data is increasingly being stored in non-relational datastores including Hadoop, NoSQL and cloud storage. Apache Drill enables analysts, business users, data scientists and developers to explore and analyze this data without sacrificing the flexibility and agility offered by these datastores. Drill processes the data in-situ without r [...]
 
-<h3 id="what-are-some-of-drill&#39;s-key-features?">What are some of Drill&#39;s key features?</h3>
+<h3 id="what-are-some-of-drills-key-features">What are some of Drill&#39;s key features?</h3>
 
 <p>Drill is an innovative distributed SQL engine designed to enable data exploration and analytics on non-relational datastores. Users can query the data using standard SQL and BI tools without having to create and manage schemas. Some of the key features are:</p>
 
@@ -137,7 +137,7 @@
 <li>Pluggable architecture enables connectivity to multiple datastores</li>
 </ul>
 
-<h3 id="how-does-drill-achieve-performance?">How does Drill achieve performance?</h3>
+<h3 id="how-does-drill-achieve-performance">How does Drill achieve performance?</h3>
 
 <p>Drill is built from the ground up to achieve high throughput and low latency. The following capabilities help accomplish that:</p>
 
@@ -149,7 +149,7 @@
 <li><strong>Optimistic/pipelined execution</strong>: Drill is able to stream data in memory between operators. Drill minimizes the use of disks unless needed to complete the query.</li>
 </ul>
 
-<h3 id="what-datastores-does-drill-support?">What datastores does Drill support?</h3>
+<h3 id="what-datastores-does-drill-support">What datastores does Drill support?</h3>
 
 <p>Drill is primarily focused on non-relational datastores, including Hadoop, NoSQL and cloud storage. The following datastores are currently supported:</p>
 
@@ -161,7 +161,7 @@
 
 <p>A new datastore can be added by developing a storage plugin. Drill&#39;s unique schema-free JSON data model enables it to query non-relational datastores in-situ (many of these systems store complex or schema-free data).</p>
 
-<h3 id="what-clients-are-supported?">What clients are supported?</h3>
+<h3 id="what-clients-are-supported">What clients are supported?</h3>
 
 <ul>
 <li><strong>BI tools</strong> via the ODBC and JDBC drivers (eg, Tableau, Excel, MicroStrategy, Spotfire, QlikView, Business Objects)</li>
@@ -171,7 +171,7 @@
 
 <h2 id="comparisons">Comparisons</h2>
 
-<h3 id="is-drill-a-&#39;sql-on-hadoop&#39;-engine?">Is  Drill a &#39;SQL-on-Hadoop&#39; engine?</h3>
+<h3 id="is-drill-a-sql-on-hadoop-engine">Is Drill a &#39;SQL-on-Hadoop&#39; engine?</h3>
 
 <p>Drill supports a variety of non-relational datastores in addition to Hadoop. Drill takes a different approach compared to traditional SQL-on-Hadoop technologies like Hive and Impala. For example, users can directly query self-describing data (eg, JSON, Parquet) without having to create and manage schemas.</p>
 
@@ -226,11 +226,11 @@
 </tr>
 </tbody></table>
 
-<h3 id="is-spark-sql-similar-to-drill?">Is Spark SQL similar to Drill?</h3>
+<h3 id="is-spark-sql-similar-to-drill">Is Spark SQL similar to Drill?</h3>
 
 <p>No. Spark SQL is primarily designed to enable developers to incorporate SQL statements in Spark programs. Drill does not depend on Spark, and is targeted at business users, analysts, data scientists and developers. </p>
 
-<h3 id="does-drill-replace-hive?">Does Drill replace Hive?</h3>
+<h3 id="does-drill-replace-hive">Does Drill replace Hive?</h3>
 
 <p>Hive is a batch processing framework most suitable for long-running jobs. For data exploration and BI, Drill provides a much better experience than Hive.</p>
 
@@ -238,7 +238,7 @@
 
 <h2 id="metadata">Metadata</h2>
 
-<h3 id="how-does-drill-support-queries-on-self-describing-data?">How does Drill support queries on self-describing data?</h3>
+<h3 id="how-does-drill-support-queries-on-self-describing-data">How does Drill support queries on self-describing data?</h3>
 
 <p>Drill&#39;s flexible JSON data model and on-the-fly schema discovery enable it to query self-describing data.</p>
 
@@ -247,11 +247,11 @@
 <li><strong>On-the-fly schema discovery (or late binding)</strong>: Traditional query engines (eg, relational databases, Hive, Impala, Spark SQL) need to know the structure of the data before query execution. Drill, on the other hand, features a fundamentally different architecture, which enables execution to begin without knowing the structure of the data. The query is automatically compiled and re-compiled during the execution phase, based on the actual data flowing through the system. [...]
 </ul>
 
-<h3 id="but-i-already-have-schemas-defined-in-hive-metastore?-can-i-use-that-with-drill?">But I already have schemas defined in Hive Metastore? Can I use that with Drill?</h3>
+<h3 id="but-i-already-have-schemas-defined-in-hive-metastore-can-i-use-that-with-drill">But I already have schemas defined in the Hive Metastore. Can I use that with Drill?</h3>
 
 <p>Absolutely. Drill has a storage plugin for Hive tables, so you can simply point Drill to the Hive Metastore and start performing low-latency queries on Hive tables. In fact, a single Drill cluster can query data from multiple Hive Metastores, and even perform joins across these datasets.</p>
 
-<h3 id="is-drill-&quot;anti-schema&quot;-or-&quot;anti-dba&quot;?">Is Drill &quot;anti-schema&quot; or &quot;anti-DBA&quot;?</h3>
+<h3 id="is-drill-anti-schema-or-anti-dba">Is Drill &quot;anti-schema&quot; or &quot;anti-DBA&quot;?</h3>
 
 <p>Not at all. Drill actually takes advantage of schemas when available. For example, Drill leverages the schema information in Hive when querying Hive tables. However, when querying schema-free datastores like MongoDB, or raw files on S3 or Hadoop, schemas are not available, and Drill is still able to query that data.</p>
 
@@ -265,7 +265,7 @@
 
 <p>Drill is all about flexibility. The flexible schema management capabilities in Drill allow users to explore raw data and then create models/structure with <code>CREATE TABLE</code> or <code>CREATE VIEW</code> statements, or with Hive Metastore.</p>
 
-<h3 id="what-does-a-drill-query-look-like?">What does a Drill query look like?</h3>
+<h3 id="what-does-a-drill-query-look-like">What does a Drill query look like?</h3>
 
 <p>Drill uses a decentralized metadata model and relies on its storage plugins to provide metadata. There is a storage plugin associated with each data source that is supported by Drill.</p>
 
@@ -276,25 +276,25 @@
 <span class="k">SELECT</span> <span class="o">*</span> <span class="k">FROM</span> <span class="n">hive1</span><span class="p">.</span><span class="n">logs</span><span class="p">.</span><span class="n">frontend</span><span class="p">;</span>
 <span class="k">SELECT</span> <span class="o">*</span> <span class="k">FROM</span> <span class="n">hbase1</span><span class="p">.</span><span class="n">events</span><span class="p">.</span><span class="n">clicks</span><span class="p">;</span>
 </code></pre></div>
-<h3 id="what-sql-functionality-does-drill-support?">What SQL functionality does Drill support?</h3>
+<h3 id="what-sql-functionality-does-drill-support">What SQL functionality does Drill support?</h3>
 
 <p>Drill supports standard SQL (aka ANSI SQL). In addition, it features several extensions that help with complex data, such as the <code>KVGEN</code> and <code>FLATTEN</code> functions. For more details, refer to the <a href="/docs/sql-reference/">SQL Reference</a>.</p>
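
 <p>For example, <code>FLATTEN</code> expands a JSON array into one row per element (the file path below is hypothetical):</p>

 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT FLATTEN(t.categories) AS category
 FROM dfs.`/tmp/yelp_business.json` t;
 </code></pre></div>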
 
-<h3 id="do-i-need-to-load-data-into-drill-to-start-querying-it?">Do I need to load data into Drill to start querying it?</h3>
+<h3 id="do-i-need-to-load-data-into-drill-to-start-querying-it">Do I need to load data into Drill to start querying it?</h3>
 
 <p>No. Drill can query data &#39;in-situ&#39;.</p>
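
 <p>For example, a raw file can be queried directly where it sits (the path is hypothetical):</p>

 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM dfs.`/path/to/data.json` LIMIT 10;
 </code></pre></div>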
 
 <h2 id="getting-started">Getting Started</h2>
 
-<h3 id="what-is-the-best-way-to-get-started-with-drill?">What is the best way to get started with Drill?</h3>
+<h3 id="what-is-the-best-way-to-get-started-with-drill">What is the best way to get started with Drill?</h3>
 
 <p>The best way to get started is to try it out. It only takes a few minutes and all you need is a laptop (Mac, Windows or Linux). We&#39;ve compiled <a href="/docs/tutorials-introduction/">several tutorials</a> to help you get started.</p>
 
-<h3 id="how-can-i-ask-questions-and-provide-feedback?">How can I ask questions and provide feedback?</h3>
+<h3 id="how-can-i-ask-questions-and-provide-feedback">How can I ask questions and provide feedback?</h3>
 
 <p>Please post your questions and feedback to <a href="mailto:user@drill.apache.org">user@drill.apache.org</a>. We are happy to help!</p>
 
-<h3 id="how-can-i-contribute-to-drill?">How can I contribute to Drill?</h3>
+<h3 id="how-can-i-contribute-to-drill">How can I contribute to Drill?</h3>
 
 <p>The documentation has information on <a href="/docs/contribute-to-drill/">how to contribute</a>.</p>
 </div>
diff --git a/feed.xml b/feed.xml
index 9880949..1f43d60 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>/</link>
     <atom:link href="/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Fri, 02 Nov 2018 10:53:08 -0700</pubDate>
-    <lastBuildDate>Fri, 02 Nov 2018 10:53:08 -0700</lastBuildDate>
+    <pubDate>Fri, 02 Nov 2018 12:41:27 -0700</pubDate>
+    <lastBuildDate>Fri, 02 Nov 2018 12:41:27 -0700</lastBuildDate>
     <generator>Jekyll v2.5.2</generator>
     
       <item>
@@ -16,24 +16,24 @@
 
 &lt;p&gt;The release provides the following bug fixes and improvements:&lt;/p&gt;
 
-&lt;h2 id=&quot;run-drill-in-a-docker-container-(drill-6346)&quot;&gt;Run Drill in a Docker Container (DRILL-6346)&lt;/h2&gt;
+&lt;h2 id=&quot;run-drill-in-a-docker-container-drill-6346&quot;&gt;Run Drill in a Docker Container (DRILL-6346)&lt;/h2&gt;
 
 &lt;p&gt;Running Drill in a Docker container is the simplest way to start using Drill; all you need is the Docker client installed on your machine. You simply run a Docker command, and your Docker client downloads the Drill Docker image from the apache-drill repository on Docker Hub and then brings up a container with Apache Drill running in embedded mode. See &lt;a href=&quot;/docs/running-drill-on-docker/&quot;&gt;Running Drill on Docker&lt;/a&gt;.  &lt;/p&gt;
 
-&lt;h2 id=&quot;export-and-save-storage-plugin-configurations-(drill-4580)&quot;&gt;Export and Save Storage Plugin Configurations (DRILL-4580)&lt;/h2&gt;
+&lt;h2 id=&quot;export-and-save-storage-plugin-configurations-drill-4580&quot;&gt;Export and Save Storage Plugin Configurations (DRILL-4580)&lt;/h2&gt;
 
 &lt;p&gt;You can export and save your storage plugin configurations from the Storage page in the Drill Web UI. See &lt;a href=&quot;/docs/configuring-storage-plugins/#exporting-storage-plugin-configurations&quot;&gt;Exporting Storage Plugin Configurations&lt;/a&gt;.  &lt;/p&gt;
 
-&lt;h2 id=&quot;manage-storage-plugin-configurations-in-a-configuration-file-(drill-6494)&quot;&gt;Manage Storage Plugin Configurations in a Configuration File (DRILL-6494)&lt;/h2&gt;
+&lt;h2 id=&quot;manage-storage-plugin-configurations-in-a-configuration-file-drill-6494&quot;&gt;Manage Storage Plugin Configurations in a Configuration File (DRILL-6494)&lt;/h2&gt;
 
 &lt;p&gt;You can manage storage plugin configurations in the Drill configuration file,  storage-plugins-override.conf. When you provide the storage plugin configurations in the storage-plugins-override.conf file, Drill reads the file and configures the plugins during start-up. See &lt;a href=&quot;https://drill.apache.org/docs/configuring-storage-plugins/#configuring-storage-plugins-with-the-storage-plugins-override.conf-file&quot;&gt;Configuring Storage Plugins with the storage-plugins- [...]
 
-&lt;h2 id=&quot;query-metadata-in-various-image-formats-(drill-4364)&quot;&gt;Query Metadata in Various Image Formats (DRILL-4364)&lt;/h2&gt;
+&lt;h2 id=&quot;query-metadata-in-various-image-formats-drill-4364&quot;&gt;Query Metadata in Various Image Formats (DRILL-4364)&lt;/h2&gt;
 
 &lt;p&gt;The metadata format plugin is useful for querying a large number of image files stored in a distributed file system. You do not have to build a metadata repository in advance.&lt;br&gt;
 See &lt;a href=&quot;/docs/image-metadata-format-plugin/&quot;&gt;Image Metadata Format Plugin&lt;/a&gt;.  &lt;/p&gt;
 
-&lt;h2 id=&quot;set-hive-properties-at-the-session-level-(drill-6575)&quot;&gt;Set Hive Properties at the Session Level (DRILL-6575)&lt;/h2&gt;
+&lt;h2 id=&quot;set-hive-properties-at-the-session-level-drill-6575&quot;&gt;Set Hive Properties at the Session Level (DRILL-6575)&lt;/h2&gt;
 
 &lt;p&gt;The store.hive.conf.properties option enables you to specify Hive properties at the session level using the SET command. See &lt;a href=&quot;/docs/hive-storage-plugin/#setting-hive-properties&quot;&gt;Setting Hive Properties&lt;/a&gt;.   &lt;/p&gt;
 
@@ -54,19 +54,19 @@ See &lt;a href=&quot;/docs/image-metadata-format-plugin/&quot;&gt;Image Metadata
 
 &lt;p&gt;The release provides the following bug fixes and improvements:&lt;/p&gt;
 
-&lt;h2 id=&quot;ability-to-run-drill-under-yarn-(drill-1170)&quot;&gt;Ability to Run Drill Under YARN (DRILL-1170)&lt;/h2&gt;
+&lt;h2 id=&quot;ability-to-run-drill-under-yarn-drill-1170&quot;&gt;Ability to Run Drill Under YARN (DRILL-1170)&lt;/h2&gt;
 
 &lt;p&gt;You can run Drill as a YARN application (&lt;a href=&quot;/docs/drill-on-yarn/&quot;&gt;Drill-on-YARN&lt;/a&gt;) if you want Drill to work alongside other applications, such as Hadoop and Spark, in a YARN-managed cluster. YARN assigns resources, such as memory and CPU, to applications in the cluster and eliminates the manual steps associated with installation and resource allocation for stand-alone applications in a multi-tenant environment. YARN automatically deploys (localizes [...]
 
-&lt;h2 id=&quot;spnego-support-(drill-5425)&quot;&gt;SPNEGO Support (DRILL-5425)&lt;/h2&gt;
+&lt;h2 id=&quot;spnego-support-drill-5425&quot;&gt;SPNEGO Support (DRILL-5425)&lt;/h2&gt;
 
 &lt;p&gt;You can use SPNEGO to extend Kerberos authentication to Web applications through HTTP. &lt;/p&gt;
 
-&lt;h2 id=&quot;sql-syntax-support-(drill-5868)&quot;&gt;SQL Syntax Support (DRILL-5868)&lt;/h2&gt;
+&lt;h2 id=&quot;sql-syntax-support-drill-5868&quot;&gt;SQL Syntax Support (DRILL-5868)&lt;/h2&gt;
 
 &lt;p&gt;Query syntax appears highlighted in the Drill Web Console. In addition to syntax highlighting, auto-complete is supported in all SQL editors, including the Edit Query tab within an existing profile to rerun the query. For browsers like Chrome, you can type Ctrl+Space for a drop-down list and then use arrow keys to navigate through the options. Auto-complete suggests Drill keywords and functions, and snippets let you write SQL from templates. &lt;/p&gt;
 
-&lt;h2 id=&quot;user/distribution-specific-configuration-checks-during-startup-(drill-5741)&quot;&gt;User/Distribution-Specific Configuration Checks During Startup (DRILL-5741)&lt;/h2&gt;
+&lt;h2 id=&quot;user-distribution-specific-configuration-checks-during-startup-drill-5741&quot;&gt;User/Distribution-Specific Configuration Checks During Startup (DRILL-5741)&lt;/h2&gt;
 
 &lt;p&gt;You can define the maximum amount of cumulative memory allocated to the Drill process during startup through the &lt;code&gt;DRILLBIT_MAX_PROC_MEM&lt;/code&gt; environment variable. For example, if you set &lt;code&gt;DRILLBIT_MAX_PROC_MEM&lt;/code&gt; to 40G, the total amount of memory allocated to the following memory parameters cannot exceed 40G:  &lt;/p&gt;
 
@@ -180,7 +180,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 
 &lt;p&gt;The release provides the following bug fixes and improvements:&lt;/p&gt;
 
-&lt;h2 id=&quot;kafka-and-opentsdb-storage-plugins-(drill-4779,-drill-5337)&quot;&gt;Kafka and OpenTSDB Storage Plugins (DRILL-4779, DRILL-5337)&lt;/h2&gt;
+&lt;h2 id=&quot;kafka-and-opentsdb-storage-plugins-drill-4779-drill-5337&quot;&gt;Kafka and OpenTSDB Storage Plugins (DRILL-4779, DRILL-5337)&lt;/h2&gt;
 
 &lt;p&gt;You can configure Kafka and OpenTSDB as Drill data sources.  &lt;/p&gt;
 
@@ -199,7 +199,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;h2 id=&quot;queue-based-memory-assignment-for-buffering-operators-(throttling)-(drill-5716)&quot;&gt;Queue-Based Memory Assignment for Buffering Operators (Throttling) (DRILL-5716)&lt;/h2&gt;
+&lt;h2 id=&quot;queue-based-memory-assignment-for-buffering-operators-throttling-drill-5716&quot;&gt;Queue-Based Memory Assignment for Buffering Operators (Throttling) (DRILL-5716)&lt;/h2&gt;
 
 &lt;p&gt;Throttling limits the number of concurrent queries that run to prevent queries from failing with out-of-memory errors. When you enable throttling, you configure the number of concurrent queries that can run and the resource requirements for each query. Drill calculates the amount of memory to assign per query per node. See &lt;a href=&quot;/docs/throttling/&quot;&gt;Throttling&lt;/a&gt; for more information. &lt;/p&gt;
 
@@ -232,11 +232,11 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 
 &lt;p&gt;Drill 1.10 provided authentication support through Plain and Kerberos authentication mechanisms to authenticate the Drill client to Drillbit and Drillbit to Drillbit communication channels. Drill 1.11 extends that support to include encryption. Drill uses the Kerberos mechanism over the SASL framework to encrypt the communication channels. &lt;/p&gt;
 
-&lt;h2 id=&quot;access-to-paths-outside-the-current-workspace-(drill-5964)&quot;&gt;Access to Paths Outside the Current Workspace (DRILL-5964)&lt;/h2&gt;
+&lt;h2 id=&quot;access-to-paths-outside-the-current-workspace-drill-5964&quot;&gt;Access to Paths Outside the Current Workspace (DRILL-5964)&lt;/h2&gt;
 
 &lt;p&gt;A new parameter, allowAccessOutsideWorkspace, in the dfs storage plugin configuration prevents users from accessing paths outside the root of a workspace. The default value for the parameter is false. Set the parameter to true to allow users access outside of a workspace. If existing storage plugin configurations do not specify the parameter, users cannot access paths outside the configured workspaces.&lt;/p&gt;
 
-&lt;p&gt;You can find a complete list of JIRAs resolved in the 1.12.0 release &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12341087&amp;amp;styleName=Html&amp;amp;projectId=12313820&amp;amp;Create=Create&amp;amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Cd194b12b906cd370f36d15e8af60a94592b89038%7Clin&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;You can find a complete list of JIRAs resolved in the 1.12.0 release &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12341087&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Cd194b12b906cd370f36d15e8af60a94592b89038%7Clin&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
         <pubDate>Fri, 15 Dec 2017 00:00:00 -0800</pubDate>
         <link>/blog/2017/12/15/drill-1.12-released/</link>
@@ -253,7 +253,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 
 &lt;p&gt;The release provides the following bug fixes and improvements:&lt;/p&gt;
 
-&lt;h2 id=&quot;cryptography-related-functions-(drill-5634)&quot;&gt;Cryptography-Related Functions (DRILL-5634)&lt;/h2&gt;
+&lt;h2 id=&quot;cryptography-related-functions-drill-5634&quot;&gt;Cryptography-Related Functions (DRILL-5634)&lt;/h2&gt;
 
 &lt;p&gt;Drill provides the following cryptographic-related functions:&lt;/p&gt;
 
@@ -266,38 +266,38 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 &lt;li&gt;sha2()&lt;br&gt;&lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;h2 id=&quot;spill-to-disk-for-hash-aggregate-operator-(drill-5457)&quot;&gt;Spill to Disk for Hash Aggregate Operator (DRILL-5457)&lt;/h2&gt;
+&lt;h2 id=&quot;spill-to-disk-for-hash-aggregate-operator-drill-5457&quot;&gt;Spill to Disk for Hash Aggregate Operator (DRILL-5457)&lt;/h2&gt;
 
 &lt;p&gt;The Hash aggregate operator can spill data to disk in cases where the operation exceeds the set memory limit. Note that you may need to increase the default value of the &lt;code&gt;planner.memory.max_query_memory_per_node&lt;/code&gt; option if the available memory is insufficient.      &lt;/p&gt;
 
-&lt;h2 id=&quot;format-plugin-support-for-pcap-files-(drill-5432)&quot;&gt;Format Plugin Support for PCAP Files (DRILL-5432)&lt;/h2&gt;
+&lt;h2 id=&quot;format-plugin-support-for-pcap-files-drill-5432&quot;&gt;Format Plugin Support for PCAP Files (DRILL-5432)&lt;/h2&gt;
 
 &lt;p&gt;A “pcap” format plugin enables Drill to read PCAP files. You must add the “pcap” format to the dfs storage plugin configuration, as shown:  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   &amp;quot;pcap&amp;quot;: {
           &amp;quot;type&amp;quot;: &amp;quot;pcap&amp;quot;
         }   
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-&lt;h2 id=&quot;change-the-hdfs-block-size-for-parquet-files-(drill-5379)&quot;&gt;Change the HDFS Block Size for Parquet Files (DRILL-5379)&lt;/h2&gt;
+&lt;h2 id=&quot;change-the-hdfs-block-size-for-parquet-files-drill-5379&quot;&gt;Change the HDFS Block Size for Parquet Files (DRILL-5379)&lt;/h2&gt;
 
 &lt;p&gt;The &lt;code&gt;store.parquet.writer.use_single_fs_block&lt;/code&gt; option enables Drill to write a Parquet file as a single file system block without changing the file system default block size.&lt;/p&gt;
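 
 &lt;p&gt;For example, to enable the option at the session level:  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   ALTER SESSION SET store.parquet.writer.use_single_fs_block = true;  
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;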
 
-&lt;h2 id=&quot;store-query-profiles-in-memory-(drill-5481)&quot;&gt;Store Query Profiles in Memory (DRILL-5481)&lt;/h2&gt;
+&lt;h2 id=&quot;store-query-profiles-in-memory-drill-5481&quot;&gt;Store Query Profiles in Memory (DRILL-5481)&lt;/h2&gt;
 
 &lt;p&gt;The &lt;code&gt;drill.exec.profiles.store.inmemory&lt;/code&gt; option enables Drill to store query profiles in memory instead of writing the query profiles to disk. The &lt;code&gt;drill.exec.profiles.store.capacity&lt;/code&gt; option sets the maximum number of most recent profiles to retain in memory.  &lt;/p&gt;
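 
 &lt;p&gt;The sketch below assumes these are boot-time options (note the drill.exec prefix) set in drill-override.conf rather than with ALTER SESSION; the capacity value is illustrative:  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   drill.exec.profiles.store.inmemory: true
       drill.exec.profiles.store.capacity: 1000   
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;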
 
-&lt;h2 id=&quot;configurable-ctas-directory-and-file-permissions-option-(drill-5391)&quot;&gt;Configurable CTAS Directory and File Permissions Option (DRILL-5391)&lt;/h2&gt;
+&lt;h2 id=&quot;configurable-ctas-directory-and-file-permissions-option-drill-5391&quot;&gt;Configurable CTAS Directory and File Permissions Option (DRILL-5391)&lt;/h2&gt;
 
 &lt;p&gt;You can use the &lt;code&gt;exec.persistent_table.umask&lt;/code&gt; configuration option, at the system or session level, to modify permissions on directories and files that result from running the CTAS command. By default, the option is set to 002, which sets the default directory permissions to 775 and default file permissions to 664.   &lt;/p&gt;
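 
 &lt;p&gt;For example, to lift the restriction entirely (an illustrative value that yields 777 directory and 666 file permissions):  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   ALTER SESSION SET exec.persistent_table.umask = &amp;#39;000&amp;#39;;  
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;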
 
-&lt;h2 id=&quot;support-for-network-encryption-(drill-4335)&quot;&gt;Support for Network Encryption (DRILL-4335)&lt;/h2&gt;
+&lt;h2 id=&quot;support-for-network-encryption-drill-4335&quot;&gt;Support for Network Encryption (DRILL-4335)&lt;/h2&gt;
 
 &lt;p&gt;Drill can use SASL to support network encryption between the Drill client and drillbits, and also between drillbits.  &lt;/p&gt;
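 
 &lt;p&gt;A minimal sketch of enabling encryption in drill-override.conf, assuming the SASL encryption options described in the Drill security documentation:  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   drill.exec.security.user.encryption.sasl.enabled: true
       drill.exec.security.bit.encryption.sasl.enabled: true   
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;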
 
-&lt;h2 id=&quot;metadata-file-stores-relative-paths-(drill-3867)&quot;&gt;Metadata file Stores Relative Paths (DRILL-3867)&lt;/h2&gt;
+&lt;h2 id=&quot;metadata-file-stores-relative-paths-drill-3867&quot;&gt;Metadata File Stores Relative Paths (DRILL-3867)&lt;/h2&gt;
 
 &lt;p&gt;Drill now stores the relative path in the metadata file (rather than the absolute path), which enables you to move partitioned Parquet directories from one DFS location to another without rebuilding the Parquet metadata files; the metadata remains valid in the new location.  &lt;/p&gt;
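 
 &lt;p&gt;The metadata cache file is still generated with the REFRESH TABLE METADATA command (the table path is hypothetical):  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   REFRESH TABLE METADATA dfs.`/tmp/parquet_table`;  
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;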
 
-&lt;h2 id=&quot;support-for-additional-quoting-identifiers-(drill-3510)&quot;&gt;Support for Additional Quoting Identifiers (DRILL-3510)&lt;/h2&gt;
+&lt;h2 id=&quot;support-for-additional-quoting-identifiers-drill-3510&quot;&gt;Support for Additional Quoting Identifiers (DRILL-3510)&lt;/h2&gt;
 
 &lt;p&gt;In addition to backticks, the SQL parser in Drill can use double quotes and square brackets as identifier quotes. Use the &lt;code&gt;planner.parser.quoting_identifiers&lt;/code&gt; configuration option, at the system or session level, to set the type of identifier quotes that the parser uses, as shown:  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   ALTER SESSION SET planner.parser.quoting_identifiers = &amp;#39;&amp;quot;&amp;#39;;  
@@ -306,7 +306,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 &lt;p&gt;The default setting is backticks. The quoting identifier used in queries must match the setting; if you use another type of quoting identifier, Drill returns an error.  &lt;/p&gt;
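 
 &lt;p&gt;For instance, after switching to double quotes as shown above, a query would quote identifiers like this (the table and column names are hypothetical):  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   SELECT &amp;quot;col1&amp;quot; FROM &amp;quot;t1&amp;quot;;  
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;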
 
-&lt;p&gt;You can find a complete list of JIRAs resolved in the 1.11.0 release &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;amp;version=12339943&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;You can find a complete list of JIRAs resolved in the 1.11.0 release &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&amp;version=12339943&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
         <pubDate>Mon, 31 Jul 2017 00:00:00 -0700</pubDate>
         <link>/blog/2017/07/31/drill-1.11-released/</link>
@@ -343,7 +343,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 
 &lt;p&gt;Drill supports Kerberos authentication between the client and drillbit. See &lt;a href=&quot;/docs/configuring-kerberos-authentication/&quot;&gt;Configuring Kerberos Authentication&lt;/a&gt; in the &lt;a href=&quot;/docs/securing-drill/&quot;&gt;Securing Drill&lt;/a&gt; section.&lt;/p&gt;
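 
 &lt;p&gt;As an illustration, a JDBC connection string for Kerberos might look like the following sketch (host, port, and principal are hypothetical):  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   jdbc:drill:drillbit=host1:31010;auth=kerberos;principal=drill/host1@EXAMPLE.COM  
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;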
 
-&lt;p&gt;A complete list of JIRAs resolved in the 1.10.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12338769&amp;amp;styleName=Html&amp;amp;projectId=12313820&amp;amp;Create=Create&amp;amp;atl_token=A5KQ-2QAV-T4JA-FDED%7C264858c85b35c3b8ac66b0573aa7e88ffa802c9d%7Clin&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;A complete list of JIRAs resolved in the 1.10.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12338769&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7C264858c85b35c3b8ac66b0573aa7e88ffa802c9d%7Clin&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
         <pubDate>Wed, 15 Mar 2017 00:00:00 -0700</pubDate>
         <link>/blog/2017/03/15/drill-1.10-released/</link>
@@ -376,7 +376,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 
 &lt;p&gt;The new HTTPD format plugin enables Drill to query HTTP web server logs natively, and it includes the parse_url() and parse_query() UDFs. The parse_url() UDF returns the parts of a URL as a map. The parse_query() UDF returns the query string.  &lt;/p&gt;
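 
 &lt;p&gt;For example (the URL literal is arbitrary):  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   SELECT parse_url(&amp;#39;http://example.com/path?key=value&amp;#39;) FROM (VALUES(1));  
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;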
 
-&lt;p&gt;A complete list of JIRAs resolved in the 1.9.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12337861&amp;amp;styleName=Html&amp;amp;projectId=12313820&amp;amp;Create=Create&amp;amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Cedcc6294c1851bcd19a3686871e085181f755a91%7Clin&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;A complete list of JIRAs resolved in the 1.9.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12337861&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Cedcc6294c1851bcd19a3686871e085181f755a91%7Clin&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
         <pubDate>Tue, 29 Nov 2016 00:00:00 -0800</pubDate>
         <link>/blog/2016/11/29/drill-1.9-released/</link>
@@ -413,7 +413,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 
 &lt;p&gt;New parameters let you set the minimum filter selectivity estimate, which can increase the parallelization of the major fragment that performs a join. See &lt;a href=&quot;https://drill.apache.org/docs/configuration-options-introduction/#system-options&quot;&gt;System Options&lt;/a&gt;. &lt;/p&gt;
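 
 &lt;p&gt;A sketch of adjusting the estimate at the session level, assuming the planner.filter.min_selectivity_estimate_factor option (the value is illustrative):  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   ALTER SESSION SET planner.filter.min_selectivity_estimate_factor = 0.5;  
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;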
 
-&lt;p&gt;A complete list of JIRAs resolved in the 1.8.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334768&amp;amp;styleName=Html&amp;amp;projectId=12313820&amp;amp;Create=Create&amp;amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Ce8d020149d9a6082481af301e563adbe35c76a87%7Clout&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;A complete list of JIRAs resolved in the 1.8.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334768&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7Ce8d020149d9a6082481af301e563adbe35c76a87%7Clout&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
         <pubDate>Tue, 30 Aug 2016 00:00:00 -0700</pubDate>
         <link>/blog/2016/08/30/drill-1.8-released/</link>
@@ -442,7 +442,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 
 &lt;p&gt;Drill now supports HBase 1.x. &lt;/p&gt;
 
-&lt;p&gt;A complete list of JIRAs resolved in the 1.7.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334767&amp;amp;styleName=&amp;amp;projectId=12313820&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;A complete list of JIRAs resolved in the 1.7.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334767&amp;styleName=&amp;projectId=12313820&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
         <pubDate>Tue, 28 Jun 2016 00:00:00 -0700</pubDate>
         <link>/blog/2016/06/28/drill-1.7-released/</link>
@@ -467,7 +467,7 @@ cp jets3t-0.9.2/jars/jets3t-0.9.2.jar &lt;span class=&quot;nv&quot;&gt;$DRILL_HO
 
 &lt;p&gt;The window function frame clause now supports additional custom frames. See &lt;a href=&quot;/docs/sql-window-functions-introduction/#syntax&quot;&gt;Window Function Syntax&lt;/a&gt;. &lt;/p&gt;
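 
 &lt;p&gt;For example, a running total over a custom frame might look like this (the table and column names are hypothetical):  &lt;/p&gt;
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;   SELECT SUM(sales) OVER (PARTITION BY dealer ORDER BY sale_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM q1_sales;  
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;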
 
-&lt;p&gt;A complete list of JIRAs resolved in the 1.6.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334766&amp;amp;styleName=Html&amp;amp;projectId=12313820&amp;amp;Create=Create&amp;amp;atl_token=A5KQ-2QAV-T4JA-FDED%7C9ec2112379f0ae5d2b67a8cbd2626bcde62b41cd%7Clout&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;A complete list of JIRAs resolved in the 1.6.0 release can be found &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334766&amp;styleName=Html&amp;projectId=12313820&amp;Create=Create&amp;atl_token=A5KQ-2QAV-T4JA-FDED%7C9ec2112379f0ae5d2b67a8cbd2626bcde62b41cd%7Clout&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
         <pubDate>Wed, 16 Mar 2016 00:00:00 -0700</pubDate>
         <link>/blog/2016/03/16/drill-1.6-released/</link>