Posted to commits@drill.apache.org by kr...@apache.org on 2015/12/10 03:45:32 UTC

[3/3] drill-site git commit: squash 4 commits

squash 4 commits


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/0d7ffd4b
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/0d7ffd4b
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/0d7ffd4b

Branch: refs/heads/asf-site
Commit: 0d7ffd4b0185f530d35aa1c02aa1ee02198b4601
Parents: e6f79da
Author: Kris Hahn <kr...@apache.org>
Authored: Wed Dec 9 18:45:11 2015 -0800
Committer: Kris Hahn <kr...@apache.org>
Committed: Wed Dec 9 18:45:11 2015 -0800

----------------------------------------------------------------------
 blog/2014/11/19/sql-on-mongodb/index.html       |   4 +-
 .../12/02/drill-top-level-project/index.html    |   2 +-
 .../index.html                                  |  15 +-
 blog/2014/12/16/whats-coming-in-2015/index.html |   4 +-
 .../index.html                                  |   2 +-
 blog/2015/07/05/drill-1.1-released/index.html   |   2 +-
 .../drill-tutorial-at-nosql-now-2015/index.html |   5 +-
 docs/aggregate-window-functions/index.html      |  10 +-
 .../index.html                                  |  14 +-
 .../apache-drill-1-1-0-release-notes/index.html |   6 +-
 .../apache-drill-1-2-0-release-notes/index.html |   2 +-
 .../index.html                                  |   2 +-
 docs/apache-drill-contribution-ideas/index.html |   2 +-
 docs/compiling-drill-from-source/index.html     |   4 +-
 docs/configuring-jreport-with-drill/index.html  |   6 +-
 docs/configuring-odbc-on-linux/index.html       |  10 +-
 docs/configuring-odbc-on-mac-os-x/index.html    |  10 +-
 docs/configuring-odbc-on-windows/index.html     |   2 +-
 .../index.html                                  |   4 +-
 .../index.html                                  |   8 +-
 .../index.html                                  |  10 +-
 docs/configuring-user-impersonation/index.html  |   2 +-
 .../index.html                                  |   2 +-
 docs/custom-function-interfaces/index.html      |   6 +-
 docs/data-type-conversion/index.html            |   2 +-
 docs/date-time-and-timestamp/index.html         |   2 +-
 .../index.html                                  |   2 +-
 docs/drill-introduction/index.html              |   6 +-
 docs/drill-patch-review-tool/index.html         |  20 +-
 docs/drill-plan-syntax/index.html               |   2 +-
 docs/drop-table/index.html                      |  14 +-
 docs/explain/index.html                         |   2 +-
 .../index.html                                  |   2 +-
 .../index.html                                  |   6 +-
 docs/installing-the-driver-on-linux/index.html  |   6 +-
 .../index.html                                  |   6 +-
 .../installing-the-driver-on-windows/index.html |   8 +-
 docs/json-data-model/index.html                 |  18 +-
 docs/kvgen/index.html                           |   2 +-
 .../index.html                                  |  30 +--
 .../index.html                                  |  28 +--
 .../index.html                                  |  36 ++--
 docs/mongodb-storage-plugin/index.html          |   2 +-
 docs/odbc-configuration-reference/index.html    |   2 +-
 docs/parquet-format/index.html                  |   2 +-
 docs/partition-pruning/index.html               |   6 +-
 docs/plugin-configuration-basics/index.html     |  10 +-
 docs/querying-hbase/index.html                  |   2 +-
 docs/querying-json-files/index.html             |   2 +-
 docs/querying-plain-text-files/index.html       |   4 +-
 docs/querying-sequence-files/index.html         |   2 +-
 docs/querying-system-tables/index.html          |  12 +-
 docs/ranking-window-functions/index.html        |  10 +-
 docs/rdbms-storage-plugin/index.html            |   2 +-
 docs/rest-api/index.html                        |  28 +--
 docs/s3-storage-plugin/index.html               |   4 +-
 docs/sequence-files/index.html                  |   4 +-
 docs/sql-extensions/index.html                  |   2 +-
 .../index.html                                  |   4 +-
 .../index.html                                  |   2 +-
 docs/starting-drill-on-windows/index.html       |   2 +-
 docs/starting-the-web-console/index.html        |   2 +-
 docs/tableau-examples/index.html                |  26 +--
 docs/troubleshooting/index.html                 |   8 +-
 .../index.html                                  |  18 +-
 docs/useful-research/index.html                 |   4 +-
 .../index.html                                  |   8 +-
 .../index.html                                  |   6 +-
 .../index.html                                  |  12 +-
 .../index.html                                  |  12 +-
 docs/using-qlik-sense-with-drill/index.html     |  10 +-
 docs/using-the-jdbc-driver/index.html           |   8 +-
 .../index.html                                  |   4 +-
 docs/value-window-functions/index.html          |  12 +-
 docs/why-drill/index.html                       |  20 +-
 docs/workspaces/index.html                      |   2 +-
 faq/index.html                                  |  34 +--
 feed.xml                                        |  13 +-
 js/script.js                                    | 214 +++++++++----------
 79 files changed, 422 insertions(+), 419 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2014/11/19/sql-on-mongodb/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/11/19/sql-on-mongodb/index.html b/blog/2014/11/19/sql-on-mongodb/index.html
index 32301a9..5efc20b 100644
--- a/blog/2014/11/19/sql-on-mongodb/index.html
+++ b/blog/2014/11/19/sql-on-mongodb/index.html
@@ -149,7 +149,7 @@
 <li>Optimizations</li>
 </ul>
 
-<h2 id="drill-and-mongodb-setup-(standalone/replicated/sharded)">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
+<h2 id="drill-and-mongodb-setup-standalone-replicated-sharded">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
 
 <h3 id="standalone">Standalone</h3>
 
@@ -190,7 +190,7 @@
 
 <p>In replicated mode, whichever drillbit receives the query connects to the nearest <code>mongod</code> (local <code>mongod</code>) to read the data.</p>
 
-<h3 id="sharded/sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
+<h3 id="sharded-sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
 
 <ul>
 <li>Start Mongo processes in sharded mode</li>

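The anchor rewrites in this file, and in most of the files below, follow one pattern: the generated heading ids no longer carry punctuation such as parentheses, slashes, dots, commas, and colons; instead, runs of non-word characters collapse to a single hyphen and the result is trimmed and lowercased (escaped entities contribute their digits, which is why "What&#39;s Next" becomes "what-39-s-next" later in this commit). A minimal Python sketch that approximates the new scheme -- illustrative only, not the site generator's actual code:

    import re

    def anchor_id(heading_text):
        """Approximate the new id scheme: collapse each run of non-word
        characters to one hyphen, trim the ends, and lowercase."""
        return re.sub(r'[^\w]+', '-', heading_text).strip('-').lower()

    # Old id: drill-and-mongodb-setup-(standalone/replicated/sharded)
    print(anchor_id('Drill and MongoDB Setup (Standalone/Replicated/Sharded)'))
    # -> drill-and-mongodb-setup-standalone-replicated-sharded
    print(anchor_id('What&#39;s Next'))  # -> what-39-s-next
    print(anchor_id('AVG()'))           # -> avg
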
http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2014/12/02/drill-top-level-project/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/02/drill-top-level-project/index.html b/blog/2014/12/02/drill-top-level-project/index.html
index 70743fa..8eb74e2 100644
--- a/blog/2014/12/02/drill-top-level-project/index.html
+++ b/blog/2014/12/02/drill-top-level-project/index.html
@@ -160,7 +160,7 @@
 
 <p>After almost two years of research and development, we released Drill 0.4 in August, and continued with monthly releases since then.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="what-39-s-next">What&#39;s Next</h2>
 
 <p>Graduating to a top-level project is a significant milestone, but it&#39;s really just the beginning of the journey. In fact, we&#39;re currently wrapping up Drill 0.7, which includes hundreds of fixes and enhancements, and we expect to release that in the next couple weeks.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
index 2b6fe1d..4afef95 100644
--- a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
+++ b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
@@ -127,8 +127,9 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-<a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
+    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+
+<p><a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">12-17-2014 11:30:00</span>
     <span class="_end">12-17-2014 12:30:00</span>
@@ -152,23 +153,23 @@
 
 <p>Apache Drill committers Tomer Shiran, Jacques Nadeau, and Ted Dunning, as well as Tableau Product Manager Jeff Feng and Data Scientist Dr. Kirk Borne will be on hand to answer your questions.</p>
 
-<h4 id="tomer-shiran,-apache-drill-founder-(@tshiran)">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
+<h4 id="tomer-shiran-apache-drill-founder-tshiran">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
 
 <p>Tomer Shiran is the founder of Apache Drill, and a PMC member and committer on the project. He is VP Product Management at MapR, responsible for product strategy, roadmap and new feature development. Prior to MapR, Tomer held numerous product management and engineering roles at Microsoft, most recently as the product manager for Microsoft Internet Security &amp; Acceleration Server (now Microsoft Forefront). He is the founder of two websites that have served tens of millions of users, and received coverage in prestigious publications such as The New York Times, USA Today and The Times of London. Tomer is also the author of a 900-page programming book. He holds an MS in Computer Engineering from Carnegie Mellon University and a BS in Computer Science from Technion - Israel Institute of Technology.</p>
 
-<h4 id="jeff-feng,-product-manager-tableau-software-(@jtfeng)">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
+<h4 id="jeff-feng-product-manager-tableau-software-jtfeng">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
 
 <p>Jeff Feng is a Product Manager at Tableau and leads their Big Data product roadmap &amp; strategic vision.  In his role, he focuses on joint technology integration and partnership efforts with a number of Hadoop, NoSQL and web application partners in helping users see and understand their data.</p>
 
-<h4 id="ted-dunning,-apache-drill-comitter-(@ted_dunning)">Ted Dunning, Apache Drill Comitter (@Ted_Dunning)</h4>
+<h4 id="ted-dunning-apache-drill-comitter-ted_dunning">Ted Dunning, Apache Drill Comitter (@Ted_Dunning)</h4>
 
 <p>Ted Dunning is Chief Applications Architect at MapR Technologies and committer and PMC member of the Apache Mahout, Apache ZooKeeper, and Apache Drill projects and mentor for Apache Storm. He contributed to Mahout clustering, classification and matrix decomposition algorithms  and helped expand the new version of Mahout Math library. Ted was the chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems, he built fraud detection systems for ID Analytics (LifeLock) and he has issued 24 patents to date. Ted has a PhD in computing science from University of Sheffield. When he’s not doing data science, he plays guitar and mandolin.</p>
 
-<h4 id="jacques-nadeau,-vice-president,-apache-drill-(@intjesus)">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
+<h4 id="jacques-nadeau-vice-president-apache-drill-intjesus">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
 
 <p>Jacques Nadeau leads Apache Drill development efforts at MapR Technologies. He is an industry veteran with over 15 years of big data and analytics experience. Most recently, he was cofounder and CTO of search engine startup YapMap. Before that, he was director of new product engineering with Quigo (contextual advertising, acquired by AOL in 2007). He also built the Avenue A | Razorfish analytics data warehousing system and associated services practice (acquired by Microsoft).</p>
 
-<h4 id="dr.-kirk-borne,-george-mason-university-(@kirkdborne)">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
+<h4 id="dr-kirk-borne-george-mason-university-kirkdborne">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
 
 <p>Dr. Kirk Borne is a Transdisciplinary Data Scientist and an Astrophysicist. He is Professor of Astrophysics and Computational Science in the George Mason University School of Physics, Astronomy, and Computational Sciences. He has been at Mason since 2003, where he teaches and advises students in the graduate and undergraduate Computational Science, Informatics, and Data Science programs. Previously, he spent nearly 20 years in positions supporting NASA projects, including an assignment as NASA&#39;s Data Archive Project Scientist for the Hubble Space Telescope, and as Project Manager in NASA&#39;s Space Science Data Operations Office. He has extensive experience in big data and data science, including expertise in scientific data mining and data systems. He has published over 200 articles (research papers, conference papers, and book chapters), and given over 200 invited talks at conferences and universities worldwide.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2014/12/16/whats-coming-in-2015/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/16/whats-coming-in-2015/index.html b/blog/2014/12/16/whats-coming-in-2015/index.html
index bc2fad6..596ff85 100644
--- a/blog/2014/12/16/whats-coming-in-2015/index.html
+++ b/blog/2014/12/16/whats-coming-in-2015/index.html
@@ -213,7 +213,7 @@
 
 <p>If you&#39;re interested in implementing a new storage plugin, I would encourage you to reach out to the Drill developer community on <a href="mailto:dev@drill.apache.org">dev@drill.apache.org</a>. I&#39;m looking forward to publishing an example of a single-query join across 10 data sources.</p>
 
-<h2 id="drill/spark-integration">Drill/Spark Integration</h2>
+<h2 id="drill-spark-integration">Drill/Spark Integration</h2>
 
 <p>We&#39;re seeing growing interest in Spark as an execution engine for data pipelines, providing an alternative to MapReduce. The Drill community is working on integrating Drill and Spark to address a few new use cases:</p>
 
@@ -239,7 +239,7 @@
 <li><strong>Workload management</strong>: A single cluster is often shared among many users and groups, and everyone expects answers in real-time. Workload management prioritizes the allocation of resources to ensure that the most important workloads get done first so that business demands can be met. Administrators need to be able to assign priorities and quotas at a fine granularity. We&#39;re working on enhancing Drill&#39;s workload management to provide these capabilities while providing tight integration with YARN and Mesos.</li>
 </ul>
 
-<h2 id="we-would-love-to-hear-from-you!">We Would Love to Hear From You!</h2>
+<h2 id="we-would-love-to-hear-from-you">We Would Love to Hear From You!</h2>
 
 <p>Are there other features you would like to see in Drill? We would love to hear from you:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
index a297e47..58be84b 100644
--- a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
+++ b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
@@ -129,7 +129,7 @@
   <article class="post-content">
     <p>JSON has emerged in recent years as the de-facto standard data exchange format. It is being used everywhere. Front-end Web applications use JSON to maintain data and communicate with back-end applications. Web APIs are JSON-based (eg, <a href="https://dev.twitter.com/rest/public">Twitter REST APIs</a>, <a href="http://developers.marketo.com/documentation/rest/">Marketo REST APIs</a>, <a href="https://developer.github.com/v3/">GitHub API</a>). It&#39;s the format of choice for public datasets, operational log files and more.</p>
 
-<h1 id="why-is-json-a-convenient-data-exchange-format?">Why is JSON a Convenient Data Exchange Format?</h1>
+<h1 id="why-is-json-a-convenient-data-exchange-format">Why is JSON a Convenient Data Exchange Format?</h1>
 
 <p>While I won&#39;t dive into the historical roots of JSON (JavaScript Object Notation, <a href="http://en.wikipedia.org/wiki/JSON#JavaScript_eval.28.29"><code>eval()</code></a>, etc.), I do want to highlight several attributes of JSON that make it a convenient data exchange format:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2015/07/05/drill-1.1-released/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/07/05/drill-1.1-released/index.html b/blog/2015/07/05/drill-1.1-released/index.html
index 98ef6e6..64c88e9 100644
--- a/blog/2015/07/05/drill-1.1-released/index.html
+++ b/blog/2015/07/05/drill-1.1-released/index.html
@@ -167,7 +167,7 @@
   &lt;version&gt;1.1.0&lt;/version&gt;
 &lt;/dependency&gt;
 </code></pre></div>
-<h2 id="mongodb-3.0-support">MongoDB 3.0 Support</h2>
+<h2 id="mongodb-3-0-support">MongoDB 3.0 Support</h2>
 
 <p>Drill now uses MongoDB&#39;s latest Java driver and has enhanced connection pooling for better performance and resilience in large-scale deployments.  Learn more about using the <a href="https://drill.apache.org/docs/mongodb-plugin-for-apache-drill/">MongoDB plugin</a>.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
index 3973da6..486f7ed 100644
--- a/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
+++ b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
@@ -127,8 +127,9 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-<a href="/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/" title="Add to Calendar" class="addthisevent">
+    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+
+<p><a href="/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">08-20-2015 13:00:00</span>
     <span class="_end">08-20-2014 16:15:00</span>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/aggregate-window-functions/index.html
----------------------------------------------------------------------
diff --git a/docs/aggregate-window-functions/index.html b/docs/aggregate-window-functions/index.html
index 41bfc32..12dc386 100644
--- a/docs/aggregate-window-functions/index.html
+++ b/docs/aggregate-window-functions/index.html
@@ -1109,7 +1109,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
 
 <p>The following examples show queries that use each of the aggregate window functions in Drill. See <a href="/docs/sql-window-functions-examples/">SQL Window Functions Examples</a> for information about the data and setup for these examples.</p>
 
-<h3 id="avg()">AVG()</h3>
+<h3 id="avg">AVG()</h3>
 
 <p>The following query uses the AVG() window function with the PARTITION BY clause to calculate the average sales for each car dealer in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, avg(sales) over (partition by dealer_id) as avgsales from q1_sales;
@@ -1129,7 +1129,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+-----------+
    10 rows selected (0.455 seconds)
 </code></pre></div>
-<h3 id="count()">COUNT()</h3>
+<h3 id="count">COUNT()</h3>
 
 <p>The following query uses the COUNT (*) window function to count the number of sales in Q1, ordered by dealer_id. The word count is enclosed in back ticks (``) because it is a reserved keyword in Drill.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, count(*) over(order by dealer_id) as `count` from q1_sales;
@@ -1167,7 +1167,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+--------+
    10 rows selected (0.249 seconds)
 </code></pre></div>
-<h3 id="max()">MAX()</h3>
+<h3 id="max">MAX()</h3>
 
 <p>The following query uses the MAX() window function with the PARTITION BY clause to identify the employee with the maximum number of car sales in Q1 at each dealership. The word max is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, max(sales) over(partition by dealer_id) as `max` from q1_sales;
@@ -1187,7 +1187,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+--------+
    10 rows selected (0.402 seconds)
 </code></pre></div>
-<h3 id="min()">MIN()</h3>
+<h3 id="min">MIN()</h3>
 
 <p>The following query uses the MIN() window function with the PARTITION BY clause to identify the employee with the minimum number of car sales in Q1 at each dealership. The word min is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, min(sales) over(partition by dealer_id) as `min` from q1_sales;
@@ -1207,7 +1207,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+-------+
    10 rows selected (0.194 seconds)
 </code></pre></div>
-<h3 id="sum()">SUM()</h3>
+<h3 id="sum">SUM()</h3>
 
 <p>The following query uses the SUM() window function to total the amount of sales for each dealer in Q1. The word sum is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, sum(sales) over(partition by dealer_id) as `sum` from q1_sales;

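The AVG(), COUNT(), MAX(), MIN(), and SUM() examples above all rely on the same window-function property: OVER(PARTITION BY ...) attaches the partition's aggregate to every row instead of collapsing each partition to one row the way GROUP BY does. A short Python sketch of that semantics, with invented sample data (not Drill code):

    from collections import defaultdict

    def window_avg(rows, partition_key, value_key):
        """Attach each partition's average to every row in it,
        keeping all input rows (window semantics, not GROUP BY)."""
        totals = defaultdict(lambda: [0, 0])
        for r in rows:
            totals[r[partition_key]][0] += r[value_key]
            totals[r[partition_key]][1] += 1
        return [dict(r, avgsales=totals[r[partition_key]][0] / totals[r[partition_key]][1])
                for r in rows]

    sales = [{'dealer_id': 1, 'sales': 10}, {'dealer_id': 1, 'sales': 20},
             {'dealer_id': 2, 'sales': 7}]
    for row in window_avg(sales, 'dealer_id', 'sales'):
        print(row)  # dealer 1 rows get avgsales=15.0; the dealer 2 row gets 7.0
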
http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/analyzing-the-yelp-academic-dataset/index.html
----------------------------------------------------------------------
diff --git a/docs/analyzing-the-yelp-academic-dataset/index.html b/docs/analyzing-the-yelp-academic-dataset/index.html
index 1067bf3..61da50d 100644
--- a/docs/analyzing-the-yelp-academic-dataset/index.html
+++ b/docs/analyzing-the-yelp-academic-dataset/index.html
@@ -1068,7 +1068,7 @@ analysis extremely easy.</p>
 
 <h2 id="querying-data-with-drill">Querying Data with Drill</h2>
 
-<h3 id="1.-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
+<h3 id="1-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; !set maxwidth 10000
 
 0: jdbc:drill:zk=local&gt; select * from
@@ -1088,7 +1088,7 @@ analysis extremely easy.</p>
 
 <p>You can directly query self-describing files such as JSON, Parquet, and text. There is no need to create metadata definitions in the Hive metastore.</p>
 
-<h3 id="2.-explore-the-business-data-set-further">2. Explore the business data set further</h3>
+<h3 id="2-explore-the-business-data-set-further">2. Explore the business data set further</h3>
 
 <h4 id="total-reviews-in-the-data-set">Total reviews in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select sum(review_count) as totalreviews 
@@ -1139,7 +1139,7 @@ group by stars order by stars desc;
 | 1.0        | 4.0        |
 +------------+------------+
 </code></pre></div>
-<h4 id="top-businesses-with-high-review-counts-(&gt;-1000)">Top businesses with high review counts (&gt; 1000)</h4>
+<h4 id="top-businesses-with-high-review-counts-gt-1000">Top businesses with high review counts (&gt; 1000)</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select name, state, city, `review_count` from
 dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json`
 where review_count &gt; 1000 order by `review_count` desc limit 10;
@@ -1183,7 +1183,7 @@ b limit 10;
 </code></pre></div>
 <p>Note how Drill can traverse and refer through multiple levels of nesting.</p>
 
-<h3 id="3.-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
+<h3 id="3-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
 
 <p>Note that the attributes column in the Yelp business data set has a different
 element for every row, representing that businesses can have separate
@@ -1231,7 +1231,7 @@ on data.</p>
 | true  | store.json.all_text_mode updated.  |
 +-------+------------------------------------+
 </code></pre></div>
-<h3 id="4.-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
+<h3 id="4-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
 
 <h4 id="number-of-restaurants-in-the-data-set">Number of restaurants in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select count(*) as TotalRestaurants from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json` where true=repeated_contains(categories,&#39;Restaurants&#39;);
@@ -1303,9 +1303,9 @@ order by count(categories[0]) desc limit 10;
 | Hair Salons          | 901           |
 +----------------------+---------------+
 </code></pre></div>
-<h3 id="5.-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses.">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
+<h3 id="5-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
 
-<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset.">Take a look at the contents of the Yelp reviews dataset.</h4>
+<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset">Take a look at the contents of the Yelp reviews dataset.</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select * 
 from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_review.json` limit 1;
 +---------------------------------+------------------------+------------------------+-------+------------+----------------------------------------------------------------------+--------+------------------------+

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/apache-drill-1-1-0-release-notes/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-1-1-0-release-notes/index.html b/docs/apache-drill-1-1-0-release-notes/index.html
index 0ef3745..51d057d 100644
--- a/docs/apache-drill-1-1-0-release-notes/index.html
+++ b/docs/apache-drill-1-1-0-release-notes/index.html
@@ -1035,7 +1035,7 @@
 
 <p>It has been about 6 weeks since the release of Drill 1.0.0. Today we&#39;re happy to announce the availability of Drill 1.1.0, providing 119 additional enhancements and bug fixes. </p>
 
-<h2 id="noteworthy-new-features-in-drill-1.1.0">Noteworthy New Features in Drill 1.1.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1-1-0">Noteworthy New Features in Drill 1.1.0</h2>
 
 <p>Drill now supports window functions, automatic partitioning, and Hive impersonation. </p>
 
@@ -1059,13 +1059,13 @@
 <li>AVG<br></li>
 </ul>
 
-<h3 id="automatic-partitioning-in-ctas-(drill-3333)"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
+<h3 id="automatic-partitioning-in-ctas-drill-3333"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
 
 <p>When a table is created with a partition by clause, the parquet writer will create separate files for the different partition values. The data will first be sorted by the partition keys, and the parquet writer will create a new file when it encounters a new value for the partition columns. </p>
 
 <p>When queries are issued against data that was created this way, partition pruning will work if the filter contains a partition column. Unlike directory-based partitioning, no view is required, nor is it necessary to reference the dir* column names. </p>
 
-<h3 id="hive-impersonation-support-(drill-3203)"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
+<h3 id="hive-impersonation-support-drill-3203"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
 
 <p>When impersonation is enabled, Drill now supports impersonating the user who issued the query when accessing Hive metadata/data (instead of accessing Hive as the user that started the drillbit). </p>
 

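The automatic-partitioning note above spells out the writer mechanics: rows are first sorted by the partition keys, and the parquet writer starts a new file each time it encounters a new value for the partition columns. A toy Python sketch of that rollover loop, with an invented row format (this is not Drill's writer code):

    def write_partitioned(rows, key):
        """Sort rows by the partition key, then open a new chunk
        ('file') whenever a new key value appears."""
        chunks, current = [], None
        for row in sorted(rows, key=lambda r: r[key]):
            if not chunks or row[key] != current:  # new partition value
                current = row[key]
                chunks.append((current, []))
            chunks[-1][1].append(row)
        return chunks

    rows = [{'year': 2014, 'v': 'a'}, {'year': 2015, 'v': 'b'}, {'year': 2014, 'v': 'c'}]
    for value, chunk in write_partitioned(rows, 'year'):
        print(value, len(chunk))  # 2014 -> 2 rows, 2015 -> 1 row: one file per value
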
http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/apache-drill-1-2-0-release-notes/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-1-2-0-release-notes/index.html b/docs/apache-drill-1-2-0-release-notes/index.html
index fafd84d..49f6955 100644
--- a/docs/apache-drill-1-2-0-release-notes/index.html
+++ b/docs/apache-drill-1-2-0-release-notes/index.html
@@ -1040,7 +1040,7 @@
 <li><a href="/docs/apache-drill-1-2-0-release-notes/#important-unresolved-issues">Important unresolved issues</a></li>
 </ul>
 
-<h2 id="noteworthy-new-features-in-drill-1.2.0">Noteworthy New Features in Drill 1.2.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1-2-0">Noteworthy New Features in Drill 1.2.0</h2>
 
 <p>This release of Drill introduces a number of enhancements, including the following ones:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/apache-drill-contribution-guidelines/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-contribution-guidelines/index.html b/docs/apache-drill-contribution-guidelines/index.html
index c4ae837..0268b7f 100644
--- a/docs/apache-drill-contribution-guidelines/index.html
+++ b/docs/apache-drill-contribution-guidelines/index.html
@@ -1187,7 +1187,7 @@ it easy to quickly view the contents of the patch in a web browser.</p>
 <li>Once your patch is accepted, be sure to upload a final version which grants rights to the ASF.</li>
 </ul>
 
-<h2 id="where-is-a-good-place-to-start-contributing?">Where is a good place to start contributing?</h2>
+<h2 id="where-is-a-good-place-to-start-contributing">Where is a good place to start contributing?</h2>
 
 <p>After getting the source code, building and running a few simple queries, one
 of the simplest places to start is to implement a DrillFunc.<br>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/apache-drill-contribution-ideas/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-contribution-ideas/index.html b/docs/apache-drill-contribution-ideas/index.html
index ab36dc4..9f5571d 100644
--- a/docs/apache-drill-contribution-ideas/index.html
+++ b/docs/apache-drill-contribution-ideas/index.html
@@ -1089,7 +1089,7 @@ own use case). Then try to implement one.</p>
 <li>Approximate aggregate functions (such as what is available in BlinkDB)</li>
 </ul>
 
-<h2 id="support-for-new-file-format-readers/writers">Support for new file format readers/writers</h2>
+<h2 id="support-for-new-file-format-readers-writers">Support for new file format readers/writers</h2>
 
 <p>Currently Drill supports text, JSON and Parquet file formats natively when
 interacting with file system. More readers/writers can be introduced by

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/compiling-drill-from-source/index.html
----------------------------------------------------------------------
diff --git a/docs/compiling-drill-from-source/index.html b/docs/compiling-drill-from-source/index.html
index a38addd..ad4d258 100644
--- a/docs/compiling-drill-from-source/index.html
+++ b/docs/compiling-drill-from-source/index.html
@@ -1050,10 +1050,10 @@ Maven and JDK installed:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">java -version
 mvn -version
 </code></pre></div>
-<h2 id="1.-clone-the-repository">1. Clone the Repository</h2>
+<h2 id="1-clone-the-repository">1. Clone the Repository</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">git clone https://git-wip-us.apache.org/repos/asf/drill.git
 </code></pre></div>
-<h2 id="2.-compile-the-code">2. Compile the Code</h2>
+<h2 id="2-compile-the-code">2. Compile the Code</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">cd drill
 mvn clean install -DskipTests
 </code></pre></div>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-jreport-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-jreport-with-drill/index.html b/docs/configuring-jreport-with-drill/index.html
index aa268e0..12b0e26 100644
--- a/docs/configuring-jreport-with-drill/index.html
+++ b/docs/configuring-jreport-with-drill/index.html
@@ -1045,7 +1045,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
+<h3 id="step-1-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
 
 <p>Drill provides standard JDBC connectivity to integrate with JReport. JReport 13.1 requires Drill 1.0 or later.
 For general instructions on installing the Drill JDBC driver, see <a href="/docs/using-the-jdbc-driver/">Using JDBC</a>.</p>
@@ -1065,7 +1065,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 
 <hr>
 
-<h3 id="step-2:-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
+<h3 id="step-2-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
 
 <ol>
 <li> Click Create <strong>New -&gt; Catalog…</strong></li>
@@ -1080,7 +1080,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 <li>Click <strong>Done</strong> when you have added all the tables you need. </li>
 </ol>
 
-<h3 id="step-3:-use-jreport-designer">Step 3: Use JReport Designer</h3>
+<h3 id="step-3-use-jreport-designer">Step 3: Use JReport Designer</h3>
 
 <ol>
 <li> In the Catalog Browser, right-click <strong>Queries</strong> and select <strong>Add Query…</strong></li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-odbc-on-linux/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-linux/index.html b/docs/configuring-odbc-on-linux/index.html
index bcf5373..b77031d 100644
--- a/docs/configuring-odbc-on-linux/index.html
+++ b/docs/configuring-odbc-on-linux/index.html
@@ -1065,7 +1065,7 @@ on Linux, copy the following configuration files in <code>/opt/mapr/drillobdc/Se
 
 <hr>
 
-<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <ol>
 <li>Set the ODBCINI environment variable to point to the <code>.odbc.ini</code> in your home directory. For example:<br>
@@ -1085,7 +1085,7 @@ Only include the path to the shared libraries corresponding to the driver matchi
 
 <hr>
 
-<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. To use Drill in embedded mode, set the following properties:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ConnectionType=Direct
@@ -1171,7 +1171,7 @@ behavior of DSNs using the MapR Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1193,7 +1193,7 @@ Driver=/opt/mapr/drillodbc/lib/64/libmaprdrillodbc64.so
 </code></pre></div>
 <hr>
 
-<h2 id="step-4:-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-4-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
 
 <p>Configure the MapR Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1216,7 +1216,7 @@ SwapFilePath=/tmp
 ODBCInstLib=libiodbcinst.so
 . . .
 </code></pre></div>
-<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the MapR Drill ODBC Driver in the <code>mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-odbc-on-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-mac-os-x/index.html b/docs/configuring-odbc-on-mac-os-x/index.html
index af7c354..290f782 100644
--- a/docs/configuring-odbc-on-mac-os-x/index.html
+++ b/docs/configuring-odbc-on-mac-os-x/index.html
@@ -1079,7 +1079,7 @@ on Mac OS X, copy the following configuration files in <code>/opt/mapr/drillodbc
 
 <hr>
 
-<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <p>Create or modify the <code>/etc/launchd.conf</code> file to set environment variables. Set the SIMBAINI variable to point to the <code>.mapr.drillodbc.ini</code> file, the ODBCSYSINI variable to the <code>.odbcinst.ini</code> file, the ODBCINI variable to the <code>.odbc.ini</code> file, and the DYLD_LIBRARY_PATH to the location of the dynamic linker (DYLD) libraries and to the MapR Drill ODBC Driver. If you installed the iODBC driver manager using the DMG, the DYLD libraries are installed in <code>/usr/local/iODBC/lib</code>. The launchd.conf file should look something like this:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">setenv SIMBAINI /Users/joeuser/.mapr.drillodbc.ini
@@ -1091,7 +1091,7 @@ setenv DYLD_LIBRARY_PATH /usr/local/iODBC/lib:/opt/mapr/drillodbc/lib/universal
 
 <hr>
 
-<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. </p>
 
@@ -1173,7 +1173,7 @@ behavior of DSNs using the MapR Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1189,7 +1189,7 @@ Driver=/opt/mapr/drillodbc/lib/universal/libmaprdrillodbc.dylib
 </code></pre></div>
 <hr>
 
-<h2 id="step-4:-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-4-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
 
 <p>Configure the MapR Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1208,7 +1208,7 @@ SwapFilePath=/tmp
 # iODBC
 ODBCInstLib=libiodbcinst.dylib
 </code></pre></div>
-<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the MapR Drill ODBC Driver in the <code>mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-odbc-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-windows/index.html b/docs/configuring-odbc-on-windows/index.html
index ac67c5e..ddb1e5d 100644
--- a/docs/configuring-odbc-on-windows/index.html
+++ b/docs/configuring-odbc-on-windows/index.html
@@ -1041,7 +1041,7 @@ sources:</p>
 <li>Create an ODBC Connection String</li>
 </ul>
 
-<h2 id="sample-odbc-configuration-(dsn)">Sample ODBC Configuration (DSN)</h2>
+<h2 id="sample-odbc-configuration-dsn">Sample ODBC Configuration (DSN)</h2>
 
 <p>You can see how to create a DSN to connect to Drill data sources by taking a look at the preconfigured sample that the installer sets up. If
 you want to create a DSN for a 32-bit application, you must use the 32-bit

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-resources-for-a-shared-drillbit/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-resources-for-a-shared-drillbit/index.html b/docs/configuring-resources-for-a-shared-drillbit/index.html
index 489fa2f..fdc6aff 100644
--- a/docs/configuring-resources-for-a-shared-drillbit/index.html
+++ b/docs/configuring-resources-for-a-shared-drillbit/index.html
@@ -1064,7 +1064,7 @@ The maximum degree of distribution of a query across cores and cluster nodes.</l
 Same as max per node but applies to the query as executed by the entire cluster.</li>
 </ul>
 
-<h3 id="planner.width.max_per_node">planner.width.max_per_node</h3>
+<h3 id="planner-width-max_per_node">planner.width.max_per_node</h3>
 
 <p>Configure the <code>planner.width.max_per_node</code> to achieve fine grained, absolute control over parallelization. In this context <em>width</em> refers to fanout or distribution potential: the ability to run a query in parallel across the cores on a node and the nodes on a cluster. A physical plan consists of intermediate operations, known as query &quot;fragments,&quot; that run concurrently, yielding opportunities for parallelism above and below each exchange operator in the plan. An exchange operator represents a breakpoint in the execution flow where processing can be distributed. For example, a single-process scan of a file may flow into an exchange operator, followed by a multi-process aggregation fragment.</p>
 
@@ -1074,7 +1074,7 @@ Same as max per node but applies to the query as executed by the entire cluster.
 
 <p>When you modify the default setting, you can supply any meaningful number. The system does not automatically scale down your setting.</p>
 
-<h3 id="planner.width.max_per_query">planner.width.max_per_query</h3>
+<h3 id="planner-width-max_per_query">planner.width.max_per_query</h3>
 
 <p>The max_per_query value also sets the maximum degree of parallelism for any given stage of a query, but the setting applies to the query as executed by the whole cluster (multiple nodes). In effect, the actual maximum width per query is the <em>minimum of two values</em>: min((number of nodes * width.max_per_node), width.max_per_query)</p>
 

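The max_per_query paragraph above reduces to a two-level cap. A one-function Python rendering of that arithmetic (only the two planner options are from the docs; everything else is invented for illustration):

    def effective_max_width(num_nodes, max_per_node, max_per_query):
        """Per-stage parallelism cap: the per-node limit scaled by
        cluster size, clamped by the per-query limit."""
        return min(num_nodes * max_per_node, max_per_query)

    # 4 nodes, planner.width.max_per_node=6, planner.width.max_per_query=1000
    print(effective_max_width(4, 6, 1000))  # -> 24: the node-level limit dominates
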
http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-tibco-spotfire-server-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-tibco-spotfire-server-with-drill/index.html b/docs/configuring-tibco-spotfire-server-with-drill/index.html
index 05e06a1..c34fc68 100644
--- a/docs/configuring-tibco-spotfire-server-with-drill/index.html
+++ b/docs/configuring-tibco-spotfire-server-with-drill/index.html
@@ -1046,7 +1046,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
 
 <p>Drill provides standard JDBC connectivity, making it easy to integrate data exploration capabilities on complex, schema-less data sets. Tibco Spotfire Server (TSS) requires Drill 1.0 or later, which includes the JDBC driver. The JDBC driver is bundled with the Drill configuration files, and it is recommended that you use the JDBC driver that is shipped with the specific Drill version.</p>
 
@@ -1074,7 +1074,7 @@ For Windows systems, the hosts file is located here:
 
 <hr>
 
-<h3 id="step-2:-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
+<h3 id="step-2-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
 
 <p>The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.</p>
 
@@ -1129,7 +1129,7 @@ For Windows systems, the hosts file is located here:
 </code></pre></div>
 <hr>
 
-<h3 id="step-3:-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
+<h3 id="step-3-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
 
 <p>To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.</p>
 
@@ -1146,7 +1146,7 @@ For Windows systems, the hosts file is located here:
 
 <hr>
 
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h3 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
 
 <p>After the Drill data source has been configured in the Information Designer, the information elements can be defined. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-user-impersonation-with-hive-authorization/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation-with-hive-authorization/index.html b/docs/configuring-user-impersonation-with-hive-authorization/index.html
index 0cdad6e..dfe1efa 100644
--- a/docs/configuring-user-impersonation-with-hive-authorization/index.html
+++ b/docs/configuring-user-impersonation-with-hive-authorization/index.html
@@ -1063,7 +1063,7 @@
 <li>Hive remote metastore repository configured<br></li>
 </ul>
 
-<h2 id="step-1:-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
+<h2 id="step-1-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
 
 <p>Modify <code>&lt;DRILL_HOME&gt;/conf/drill-override.conf</code> on each Drill node to include the required properties, set the <a href="/docs/configuring-user-impersonation/#chained-impersonation">maximum number of chained user hops</a>, and restart the Drillbit process.</p>
 
@@ -1082,7 +1082,7 @@
 <code>&lt;DRILLINSTALL_HOME&gt;/bin/drillbit.sh restart</code>  </p></li>
 </ol>
 
-<h2 id="step-2:-updating-hive-site.xml">Step 2:  Updating hive-site.xml</h2>
+<h2 id="step-2-updating-hive-site-xml">Step 2:  Updating hive-site.xml</h2>
 
 <p>Update hive-site.xml with the parameters specific to the type of authorization that you are configuring and then restart Hive.  </p>
 
@@ -1114,7 +1114,7 @@
 <strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user submitting the query. Must be set to true for the storage based model.<br>
 <strong>Value:</strong> true</p>
 
-<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
+<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1190,7 +1190,7 @@
 <strong>Description:</strong> In unsecure mode, setting this property to true causes the metastore to execute DFS operations using the client&#39;s reported user and group permissions. Note: This property must be set on both the client and server sides. This is a best effort property. If the client is set to true and the server is set to false, the client setting is ignored.<br>
 <strong>Value:</strong> false  </p>
 
-<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
+<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1238,7 +1238,7 @@
      &lt;/property&gt;    
     &lt;/configuration&gt;
 </code></pre></div>
-<h2 id="step-3:-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
+<h2 id="step-3-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
 
 <p>Modify the Hive storage plugin configuration in the Drill Web Console to include specific authorization settings. The Drillbit that you use to access the Web Console must be running.  </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-user-impersonation/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation/index.html b/docs/configuring-user-impersonation/index.html
index 8adcfaa..1869b2c 100644
--- a/docs/configuring-user-impersonation/index.html
+++ b/docs/configuring-user-impersonation/index.html
@@ -1096,7 +1096,7 @@ hadoop fs –chown &lt;user&gt;:&lt;group&gt; &lt;file_name&gt;
 </code></pre></div>
 <p>Example: <code>hadoop fs -chmod 750 employees.drill.view</code></p>
 
-<h3 id="modifying-system|session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
+<h3 id="modifying-system-session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
 
 <p>Use the <code>ALTER SESSION|SYSTEM</code> command with the <code>new_view_default_permissions</code> parameter and the appropriate octal code to set view permissions at the system or session level prior to creating a view.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `new_view_default_permissions` = &#39;&lt;octal_code&gt;&#39;;

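The octal codes used with new_view_default_permissions above are ordinary POSIX permission triples (owner/group/other), the same notation as the chmod 750 example earlier in this file. A quick Python decode, for illustration only:

    def decode_octal(code):
        """Expand an octal permission string like '750' into rwx triples."""
        return ' '.join(
            ''.join(bit if (int(digit, 8) >> shift) & 1 else '-'
                    for shift, bit in ((2, 'r'), (1, 'w'), (0, 'x')))
            for digit in code)

    print(decode_octal('750'))  # -> rwx r-x ---  (owner, group, other)
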
http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-web-console-and-rest-api-security/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-web-console-and-rest-api-security/index.html b/docs/configuring-web-console-and-rest-api-security/index.html
index 61b1291..da396d0 100644
--- a/docs/configuring-web-console-and-rest-api-security/index.html
+++ b/docs/configuring-web-console-and-rest-api-security/index.html
@@ -1038,7 +1038,7 @@ With Web Console security in place, users who do not have administrator privileg
 
 <h2 id="https-support">HTTPS Support</h2>
 
-<p>Drill 1.2 uses the Linux Pluggable Authentication Module (PAM) and code-level support for transport layer security (TLS) to secure the Web Console and REST API. By default, the Web Console and REST API support the HTTP protocol. You set the following start-up option to TRUE to enable HTTPS support:</p>
+<p>Drill 1.2 uses code-level support for transport layer security (TLS) to secure the Web Console and REST API. By default, the Web Console and REST API support the HTTP protocol. You set the following start-up option to TRUE to enable HTTPS support:</p>
 
 <p><code>drill.exec.http.ssl_enabled</code></p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/custom-function-interfaces/index.html
----------------------------------------------------------------------
diff --git a/docs/custom-function-interfaces/index.html b/docs/custom-function-interfaces/index.html
index 57006cd..3f28e17 100644
--- a/docs/custom-function-interfaces/index.html
+++ b/docs/custom-function-interfaces/index.html
@@ -1046,13 +1046,13 @@ public static class Add1 implements DrillSimpleFunc{
 
 <p>The simple function interface includes the <code>@Param</code> and <code>@Output</code> holders where you indicate the data types that your function can process.</p>
 
-<h3 id="@param-holder">@Param Holder</h3>
+<h3 id="param-holder">@Param Holder</h3>
 
 <p>This holder indicates the data type that the function processes as input and determines the number of parameters that your function accepts within the query. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Param BigIntHolder input1;
 @Param BigIntHolder input2;
 </code></pre></div>
-<h3 id="@output-holder">@Output Holder</h3>
+<h3 id="output-holder">@Output Holder</h3>
 
 <p>This holder indicates the data type that the processing returns. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Output BigIntHolder out;
@@ -1108,7 +1108,7 @@ public static class MySecondMin implements DrillAggFunc {
 </code></pre></div>
 <p>The aggregate function interface includes holders where you indicate the data types that your function can process. This interface includes the @Param and @Output holders previously described and also includes the @Workspace holder. </p>
 
-<h3 id="@workspace-holder">@Workspace holder</h3>
+<h3 id="workspace-holder">@Workspace holder</h3>
 
 <p>This holder indicates the data type used to store intermediate data during processing. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Workspace BigIntHolder min;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/data-type-conversion/index.html
----------------------------------------------------------------------
diff --git a/docs/data-type-conversion/index.html b/docs/data-type-conversion/index.html
index a69817f..438e381 100644
--- a/docs/data-type-conversion/index.html
+++ b/docs/data-type-conversion/index.html
@@ -1630,7 +1630,7 @@ use in your Drill queries as described in this section:</p>
 </tr>
 </tbody></table>
 
-<h3 id="format-specifiers-for-date/time-conversions">Format Specifiers for Date/Time Conversions</h3>
+<h3 id="format-specifiers-for-date-time-conversions">Format Specifiers for Date/Time Conversions</h3>
 
 <p>Use the following Joda format specifiers for date/time conversions:</p>
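
For example, a quick illustration converting a string with TO_TIMESTAMP and a Joda pattern (the timestamp value here is made up):

    SELECT TO_TIMESTAMP('2015-12-09 18:45:11', 'yyyy-MM-dd HH:mm:ss') FROM (VALUES(1));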
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/date-time-and-timestamp/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-and-timestamp/index.html b/docs/date-time-and-timestamp/index.html
index cf4193b..5f2606c 100644
--- a/docs/date-time-and-timestamp/index.html
+++ b/docs/date-time-and-timestamp/index.html
@@ -1140,7 +1140,7 @@ SELECT INTERVAL &#39;13&#39; month FROM (VALUES(1));
 +------------+
 1 row selected (0.076 seconds)
 </code></pre></div>
-<h2 id="date,-time,-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
+<h2 id="date-time-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
 
 <p>Drill supports DATE, TIME, and TIMESTAMP literals. Drill stores these values in Coordinated Universal Time (UTC) and supports time functions in the range 1971 to 2037.</p>
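
For example, each literal form can be selected directly (a simple sketch; the dummy FROM (VALUES(1)) clause follows the convention used elsewhere in these docs):

    SELECT DATE '2015-12-09',
           TIME '18:45:11',
           TIMESTAMP '2015-12-09 18:45:11'
    FROM (VALUES(1));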
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/date-time-functions-and-arithmetic/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-functions-and-arithmetic/index.html b/docs/date-time-functions-and-arithmetic/index.html
index 2f3dbf2..6e2897e 100644
--- a/docs/date-time-functions-and-arithmetic/index.html
+++ b/docs/date-time-functions-and-arithmetic/index.html
@@ -1537,7 +1537,7 @@ SELECT NOW() FROM (VALUES(1));
 +------------+
 1 row selected (0.062 seconds)
 </code></pre></div>
-<h2 id="date,-time,-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
+<h2 id="date-time-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
 
 <p>Is the day returned from the NOW function the same as the day returned from the CURRENT_DATE function?</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT EXTRACT(day FROM NOW()) = EXTRACT(day FROM CURRENT_DATE) FROM (VALUES(1));

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/drill-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-introduction/index.html b/docs/drill-introduction/index.html
index 3d06113..941e8fa 100644
--- a/docs/drill-introduction/index.html
+++ b/docs/drill-introduction/index.html
@@ -1038,7 +1038,7 @@ applications, while still providing the familiarity and ecosystem of ANSI SQL,
 the industry-standard query language. Drill provides plug-and-play integration
 with existing Apache Hive and Apache HBase deployments. </p>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.2">What&#39;s New in Apache Drill 1.2</h2>
+<h2 id="what-39-s-new-in-apache-drill-1-2">What&#39;s New in Apache Drill 1.2</h2>
 
 <p>This release of Drill fixes <a href="/docs/apache-drill-1-2-0-release-notes/">many issues</a> and introduces a number of enhancements, including the following:</p>
 
@@ -1071,7 +1071,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Improved LIMIT processing</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.1">What&#39;s New in Apache Drill 1.1</h2>
+<h2 id="what-39-s-new-in-apache-drill-1-1">What&#39;s New in Apache Drill 1.1</h2>
 
 <p>Apache Drill 1.1 includes many enhancements, among them the following key features:</p>
 
@@ -1082,7 +1082,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Support for UNION and UNION ALL and better optimized plans that include UNION.</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.0">What&#39;s New in Apache Drill 1.0</h2>
+<h2 id="what-39-s-new-in-apache-drill-1-0">What&#39;s New in Apache Drill 1.0</h2>
 
 <p>Apache Drill 1.0 offers the following new features:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/drill-patch-review-tool/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-patch-review-tool/index.html b/docs/drill-patch-review-tool/index.html
index 710051f..3320147 100644
--- a/docs/drill-patch-review-tool/index.html
+++ b/docs/drill-patch-review-tool/index.html
@@ -1064,7 +1064,7 @@
 
 <h3 id="drill-jira-and-reviewboard-script">Drill JIRA and Reviewboard script</h3>
 
-<h4 id="1.-setup">1. Setup</h4>
+<h4 id="1-setup">1. Setup</h4>
 
 <ol>
 <li>Follow the instructions <a href="/docs/drill-patch-review-tool/#jira-command-line-tool">here</a> to set up the jira-python package</li>
@@ -1075,7 +1075,7 @@ On Mac -&gt; sudo easy_install argparse
 </code></pre></div></li>
 </ol>
 
-<h4 id="2.-usage">2. Usage</h4>
+<h4 id="2-usage">2. Usage</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">nnarkhed-mn: nnarkhed$ python drill-patch-review.py --help
 usage: drill-patch-review.py [-h] -b BRANCH -j JIRA [-s SUMMARY]
                              [-d DESCRIPTION] [-r REVIEWBOARD] [-t TESTING]
@@ -1102,7 +1102,7 @@ optional arguments:
   -rbu, --reviewboard-user Reviewboard user name
   -rbp, --reviewboard-password Reviewboard password
 </code></pre></div>
-<h4 id="3.-upload-patch">3. Upload patch</h4>
+<h4 id="3-upload-patch">3. Upload patch</h4>
 
 <ol>
 <li>Specify the branch against which the patch should be created (-b)</li>
@@ -1113,7 +1113,7 @@ optional arguments:
 <p>Example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">python drill-patch-review.py -b origin/master -j DRILL-241 -rbu tnachen -rbp password
 </code></pre></div>
-<h4 id="4.-update-patch">4. Update patch</h4>
+<h4 id="4-update-patch">4. Update patch</h4>
 
 <ol>
 <li>Specify the branch against which the patch should be created (-b)</li>
@@ -1128,12 +1128,12 @@ optional arguments:
 </code></pre></div>
 <h3 id="jira-command-line-tool">JIRA command line tool</h3>
 
-<h4 id="1.-download-the-jira-command-line-package">1. Download the JIRA command line package</h4>
+<h4 id="1-download-the-jira-command-line-package">1. Download the JIRA command line package</h4>
 
 <p>Install the jira-python package.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">sudo easy_install jira-python
 </code></pre></div>
-<h4 id="2.-configure-jira-username-and-password">2. Configure JIRA username and password</h4>
+<h4 id="2-configure-jira-username-and-password">2. Configure JIRA username and password</h4>
 
 <p>Include a jira.ini file in your $HOME directory that contains your Apache JIRA
 username and password.</p>
@@ -1146,7 +1146,7 @@ password=***********
 <p>This is a quick tutorial on using <a href="https://reviews.apache.org">Review Board</a>
 with Drill.</p>
 
-<h4 id="1.-install-the-post-review-tool">1. Install the post-review tool</h4>
+<h4 id="1-install-the-post-review-tool">1. Install the post-review tool</h4>
 
 <p>If you are on RHEL, Fedora or CentOS, follow these steps:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">sudo yum install python-setuptools
@@ -1159,7 +1159,7 @@ sudo easy_install -U RBTools
 <p>For other platforms, follow the <a href="http://www.reviewboard.org/docs/manual/dev/users/tools/post-review/">instructions</a> to
 set up the post-review tool.</p>
 
-<h4 id="2.-configure-stuff">2. Configure Stuff</h4>
+<h4 id="2-configure-stuff">2. Configure Stuff</h4>
 
 <p>You then need to configure a few settings to make the tool work.</p>
 
@@ -1177,7 +1177,7 @@ TARGET_GROUPS = &#39;drill-git&#39;
 
 <h3 id="faq">FAQ</h3>
 
-<h4 id="when-i-run-the-script,-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
+<h4 id="when-i-run-the-script-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">nnarkhed$python drill-patch-review.py -b trunk -j DRILL-241
 There don&#39;t seem to be any diffs
 </code></pre></div>
@@ -1188,7 +1188,7 @@ There don&#39;t seem to be any diffs
 <li>The -b branch is not pointing to the remote branch. In the example above, &quot;trunk&quot; is specified as the branch, which is the local branch. The correct value for the -b (--branch) option is the remote branch. &quot;git branch -r&quot; gives the list of the remote branch names.</li>
 </ul>
 
-<h4 id="when-i-run-the-script,-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
+<h4 id="when-i-run-the-script-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
 
 <p>Error uploading diff</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/drill-plan-syntax/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-plan-syntax/index.html b/docs/drill-plan-syntax/index.html
index 3a4e910..9bb54c1 100644
--- a/docs/drill-plan-syntax/index.html
+++ b/docs/drill-plan-syntax/index.html
@@ -1033,7 +1033,7 @@
 
     <div class="int_text" align="left">
       
-        <h3 id="whats-the-plan?">Whats the plan?</h3>
+        <h3 id="whats-the-plan">Whats the plan?</h3>
 
 <p>This section is about the end-to-end plan flow for Drill. The incoming query
 to Drill can be a SQL 2003 query/DrQL or MongoQL. The query is converted to a

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/drop-table/index.html
----------------------------------------------------------------------
diff --git a/docs/drop-table/index.html b/docs/drop-table/index.html
index 27ede7f..370dd98 100644
--- a/docs/drop-table/index.html
+++ b/docs/drop-table/index.html
@@ -1088,7 +1088,7 @@
 
 <p>The following examples show results for several DROP TABLE scenarios.  </p>
 
-<h3 id="example-1:-identifying-a-schema">Example 1:  Identifying a schema</h3>
+<h3 id="example-1-identifying-a-schema">Example 1:  Identifying a schema</h3>
 
 <p>This example shows you how to identify a schema with the USE and DROP TABLE commands and successfully drop a table named <code>donuts_json</code> in the <code>&quot;donuts&quot;</code> workspace configured within the DFS storage plugin configuration.  </p>
 
@@ -1142,7 +1142,7 @@
    Error: PARSE ERROR: Root schema is immutable. Creating or dropping tables/views is not allowed in root schema.Select a schema using &#39;USE schema&#39; command.
    [Error Id: 8c42cb6a-27eb-48fd-b42a-671a6fb58c14 on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-2:-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
+<h3 id="example-2-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
 
 <p>In the following example, the <code>donuts_json</code> table is removed from the <code>/tmp</code> workspace using the DROP TABLE command. This example assumes that the steps in the <a href="/docs/create-table-as-ctas/#complete-ctas-example">Complete CTAS Example</a> were already completed. </p>
 
@@ -1178,7 +1178,7 @@
    +-------+------------------------------+
    1 row selected (0.107 seconds)  
 </code></pre></div>
-<h3 id="example-3:-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
+<h3 id="example-3-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
 
 <p>When you create a table that writes files to a directory, you can issue the <code>DROP TABLE</code> command against the table to remove the directory. All files and subdirectories are deleted. For example, the following CTAS command writes Parquet data from the <code>nation.parquet</code> file, installed with Drill, to the <code>/tmp/name_key</code> directory.  </p>
 
@@ -1235,7 +1235,7 @@
    +-------+---------------------------+
    1 row selected (0.086 seconds)
 </code></pre></div>
-<h3 id="example-4:-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
+<h3 id="example-4-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
 
 <p>The following example shows the result of dropping a table that does not exist because it was either already dropped or never existed. </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use use dfs.tmp;
@@ -1251,7 +1251,7 @@
    Error: VALIDATION ERROR: Table [name_key] not found
    [Error Id: fc6bfe17-d009-421c-8063-d759d7ea2f4e on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-5:-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
+<h3 id="example-5-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
 
 <p>The following example shows the result of dropping a table without the appropriate permissions in the file system.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table name_key;
@@ -1259,7 +1259,7 @@
    Error: PERMISSION ERROR: Unauthorized to drop table
    [Error Id: 36f6b51a-786d-4950-a4a7-44250f153c55 on 10.10.30.167:31010] (state=,code=0)  
 </code></pre></div>
-<h3 id="example-6:-dropping-and-querying-a-table-concurrently">Example 6: Dropping and querying a table concurrently</h3>
+<h3 id="example-6-dropping-and-querying-a-table-concurrently">Example 6: Dropping and querying a table concurrently</h3>
 
 <p>The result of this scenario depends on the timing between one user dropping a table and another user querying it, and results can vary: in some instances the drop succeeds and the query fails completely; in others the query completes partially and then the table is dropped, returning an exception in the middle of the query results.</p>
 
@@ -1281,7 +1281,7 @@
    Fragment 1:0
    [Error Id: 6e3c6a8d-8cfd-4033-90c4-61230af80573 on 10.10.30.167:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-7:-dropping-a-table-with-different-file-formats">Example 7: Dropping a table with different file formats</h3>
+<h3 id="example-7-dropping-a-table-with-different-file-formats">Example 7: Dropping a table with different file formats</h3>
 
 <p>The following example shows the result of dropping a table when multiple file formats exist in the directory. In this scenario, the <code>sales_dir</code> table resides in the <code>dfs.sales</code> workspace and contains Parquet, CSV, and JSON files.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/explain/index.html
----------------------------------------------------------------------
diff --git a/docs/explain/index.html b/docs/explain/index.html
index b0ad10e..cb63fee 100644
--- a/docs/explain/index.html
+++ b/docs/explain/index.html
@@ -1069,7 +1069,7 @@ you are selecting from, you are likely to see plan changes.</p>
 <p>This option returns costing information. You can use this option for both
 physical and logical plans.</p>
 
-<h4 id="with-implementation-|-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
+<h4 id="with-implementation-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
 
 <p>These options return the physical and logical plan information, respectively.
 The default is physical (WITH IMPLEMENTATION).</p>
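
For instance, a hedged sketch of both forms against a hypothetical JSON file (the path is illustrative only):

    EXPLAIN PLAN WITH IMPLEMENTATION FOR SELECT * FROM dfs.`/tmp/donuts.json`;
    EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR SELECT * FROM dfs.`/tmp/donuts.json`;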

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/getting-to-know-the-drill-sandbox/index.html
----------------------------------------------------------------------
diff --git a/docs/getting-to-know-the-drill-sandbox/index.html b/docs/getting-to-know-the-drill-sandbox/index.html
index 540a17f..3737c55 100644
--- a/docs/getting-to-know-the-drill-sandbox/index.html
+++ b/docs/getting-to-know-the-drill-sandbox/index.html
@@ -1142,7 +1142,7 @@ URI. Metadata for Hive tables is automatically available for users to query.</p>
 </code></pre></div>
 <p>Do not use this storage plugin configuration outside the sandbox. Use either the <a href="/docs/hive-storage-plugin/">remote or embedded metastore configuration</a> instead.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="what-39-s-next">What&#39;s Next</h2>
 
 <p>Start running queries by going to <a href="/docs/lesson-1-learn-about-the-data-set">Lesson 1: Learn About the Data
 Set</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/installing-the-apache-drill-sandbox/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-apache-drill-sandbox/index.html b/docs/installing-the-apache-drill-sandbox/index.html
index e2b39ae..d360cc2 100644
--- a/docs/installing-the-apache-drill-sandbox/index.html
+++ b/docs/installing-the-apache-drill-sandbox/index.html
@@ -1068,7 +1068,7 @@ instructions:</p>
 <li>To install VirtualBox, see the <a href="http://dlc.sun.com.edgesuite.net/virtualbox/4.3.4/UserManual.pdf">Oracle VM VirtualBox User Manual</a>. By downloading VirtualBox, you agree to the terms and conditions of the respective license.</li>
 </ul>
 
-<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player/vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
+<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player-vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
 
 <p>Complete the following steps to install the MapR Sandbox with Apache Drill on
 VMware Player or VMware Fusion:</p>
@@ -1110,7 +1110,7 @@ The Import Virtual Machine dialog appears.</p></li>
 <li>Alternatively, access the command line on the VM: Press Alt+F2 on Windows or Option+F5 on Mac.<br></li>
 </ul>
 
-<h3 id="what&#39;s-next">What&#39;s Next</h3>
+<h3 id="what-39-s-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill
@@ -1160,7 +1160,7 @@ VirtualBox:</p>
 </ul></li>
 </ol>
 
-<h3 id="what&#39;s-next">What&#39;s Next</h3>
+<h3 id="what-39-s-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill Sandbox</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/installing-the-driver-on-linux/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-linux/index.html b/docs/installing-the-driver-on-linux/index.html
index 1729f66..c322f86 100644
--- a/docs/installing-the-driver-on-linux/index.html
+++ b/docs/installing-the-driver-on-linux/index.html
@@ -1077,7 +1077,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <p>To install the driver, you need Administrator privileges on the computer.</p>
 
-<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Download either the 32- or 64-bit driver:</p>
 
@@ -1086,7 +1086,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 <li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.0.1000/MapRDrillODBC-1.2.0.x86_64.rpm">MapR Drill ODBC Driver (64-bit)</a></li>
 </ul>
 
-<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1141,7 +1141,7 @@ locations and descriptions:</p>
 </tr>
 </tbody></table>
 
-<h2 id="step-3:-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
+<h2 id="step-3-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
 
 <p>To check the version of the driver you installed, use the following case-sensitive command on the terminal command line:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/installing-the-driver-on-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-mac-os-x/index.html b/docs/installing-the-driver-on-mac-os-x/index.html
index 592533b..9179d3e 100644
--- a/docs/installing-the-driver-on-mac-os-x/index.html
+++ b/docs/installing-the-driver-on-mac-os-x/index.html
@@ -1062,7 +1062,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Click the following link to download the driver:  </p>
 
@@ -1070,7 +1070,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1092,7 +1092,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 <li><code>/opt/mapr/drillodbc/lib/universal</code> – Binaries directory</li>
 </ul>
 
-<h2 id="step-3:-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
+<h2 id="step-3-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
 
 <p>To check the version of the driver you installed, use the following command on the terminal command line:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">$ pkgutil --info mapr.drillodbc

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/installing-the-driver-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-windows/index.html b/docs/installing-the-driver-on-windows/index.html
index 0db91dd..67484b2 100644
--- a/docs/installing-the-driver-on-windows/index.html
+++ b/docs/installing-the-driver-on-windows/index.html
@@ -1071,7 +1071,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Download the installer that corresponds to the bitness of the client application from which you want to create an ODBC connection:</p>
 
@@ -1082,7 +1082,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <ol>
 <li>Double-click the installer from the location where you downloaded it.</li>
@@ -1095,7 +1095,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-3:-verify-the-installation">Step 3: Verify the installation</h2>
+<h2 id="step-3-verify-the-installation">Step 3: Verify the installation</h2>
 
 <p>To verify the installation, perform the following steps:</p>
 
@@ -1112,7 +1112,7 @@ The ODBC Data Source Administrator dialog appears.
 
 <p>You need to configure and start Drill before <a href="/docs/testing-the-odbc-connection/">testing</a> the ODBC Data Source Administrator.</p>
 
-<h2 id="the-tableau-data-connection-customization-(tdc)-file">The Tableau Data-connection Customization (TDC) File</h2>
+<h2 id="the-tableau-data-connection-customization-tdc-file">The Tableau Data-connection Customization (TDC) File</h2>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance
 when using Tableau.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/json-data-model/index.html
----------------------------------------------------------------------
diff --git a/docs/json-data-model/index.html b/docs/json-data-model/index.html
index 10d9faf..c79a7e5 100644
--- a/docs/json-data-model/index.html
+++ b/docs/json-data-model/index.html
@@ -1120,7 +1120,7 @@ Reads all data from JSON files as VARCHAR. You need to cast numbers from VARCHAR
 
 <p>Drill uses these types internally for reading complex and nested data structures from data sources such as JSON.</p>
 
-<h3 id="experimental-feature:-heterogeneous-types">Experimental Feature: Heterogeneous types</h3>
+<h3 id="experimental-feature-heterogeneous-types">Experimental Feature: Heterogeneous types</h3>
 
 <p>The Union type allows storing different types in the same field. This new feature is still considered experimental and must be explicitly enabled by setting the <code>exec.enable_union_type</code> option to true.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `exec.enable_union_type` = true;
@@ -1216,11 +1216,11 @@ y[z].x because these references are not ambiguous. Observe the following guideli
 <li>Generate key/value pairs for loosely structured data</li>
 </ul>
 
-<h2 id="example:-flatten-and-generate-key-values-for-complex-json">Example: Flatten and Generate Key Values for Complex JSON</h2>
+<h2 id="example-flatten-and-generate-key-values-for-complex-json">Example: Flatten and Generate Key Values for Complex JSON</h2>
 
 <p>This example uses the following data that represents unit sales of tickets to events that were sold over a period of several days in December:</p>
 
-<h3 id="ticket_sales.json-contents">ticket_sales.json Contents</h3>
+<h3 id="ticket_sales-json-contents">ticket_sales.json Contents</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   &quot;type&quot;: &quot;ticket&quot;,
   &quot;venue&quot;: 123455,
@@ -1251,7 +1251,7 @@ y[z].x because these references are not ambiguous. Observe the following guideli
 +---------+---------+---------------------------------------------------------------+
 2 rows selected (1.343 seconds)
 </code></pre></div>
-<h3 id="generate-key/value-pairs">Generate Key/Value Pairs</h3>
+<h3 id="generate-key-value-pairs">Generate Key/Value Pairs</h3>
 
 <p>Continuing with the data from the <a href="/docs/json-data-model/#example-flatten-and-generate-key-values-for-complex-json">previous example</a>, use the KVGEN (Key Value Generator) function to generate key/value pairs from complex data. Generating key/value pairs is often helpful when working with data that contains arbitrary maps consisting of dynamic and unknown element names, such as the ticket sales data in this example. For example purposes, take a look at how kvgen breaks the sales data into keys and values representing the key dates and number of tickets sold:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT KVGEN(tkt.sales) AS `key dates:tickets sold` FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
@@ -1285,7 +1285,7 @@ FROM dfs.`/Users/drilluser/drill/ticket_sales.json`;
 +--------------------------------+
 8 rows selected (0.171 seconds)
 </code></pre></div>
-<h3 id="example:-aggregate-loosely-structured-data">Example: Aggregate Loosely Structured Data</h3>
+<h3 id="example-aggregate-loosely-structured-data">Example: Aggregate Loosely Structured Data</h3>
 
 <p>Use flatten and kvgen together to aggregate the data from the <a href="/docs/json-data-model/#example-flatten-and-generate-key-values-for-complex-json">previous example</a>. Make sure all text mode is set to false to sum numbers. Drill returns an error if you attempt to sum data in all text mode.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SYSTEM SET `store.json.all_text_mode` = false;
@@ -1300,7 +1300,7 @@ FROM dfs.`/Users/drilluser/drill/ticket_sales.json`;
 +--------------+
 1 row selected (0.244 seconds)
 </code></pre></div>
-<h3 id="example:-aggregate-and-sort-data">Example: Aggregate and Sort Data</h3>
+<h3 id="example-aggregate-and-sort-data">Example: Aggregate and Sort Data</h3>
 
 <p>Sum and group the ticket sales by date and sort in ascending order of total tickets sold.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT `right`(tkt.tot_sales.key,2) `December Date`,
@@ -1321,7 +1321,7 @@ ORDER BY TotalSales;
 +----------------+-------------+
 5 rows selected (0.252 seconds)
 </code></pre></div>
-<h3 id="example:-access-a-map-field-in-an-array">Example: Access a Map Field in an Array</h3>
+<h3 id="example-access-a-map-field-in-an-array">Example: Access a Map Field in an Array</h3>
 
 <p>To access a map field in an array, use dot notation to drill down through the hierarchy of the JSON data to the field. Examples are based on the following <a href="https://github.com/zemirco/sf-city-lots-json">City Lots San Francisco in .json</a>.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
@@ -1385,7 +1385,7 @@ FROM dfs.`/Users/drilluser/citylots.json`;
 
 <p>More examples of drilling down into an array are shown in <a href="/docs/selecting-nested-data-for-a-column">&quot;Selecting Nested Data for a Column&quot;</a>.</p>
 
-<h3 id="example:-flatten-an-array-of-maps-using-a-subquery">Example: Flatten an Array of Maps using a Subquery</h3>
+<h3 id="example-flatten-an-array-of-maps-using-a-subquery">Example: Flatten an Array of Maps using a Subquery</h3>
 
 <p>By flattening the following JSON file, which contains an array of maps, you can evaluate the records of the flattened data.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{&quot;name&quot;:&quot;classic&quot;,&quot;fillings&quot;:[ {&quot;name&quot;:&quot;sugar&quot;,&quot;cal&quot;:500} , {&quot;name&quot;:&quot;flour&quot;,&quot;cal&quot;:300} ] }
@@ -1401,7 +1401,7 @@ SELECT flat.fill FROM (SELECT FLATTEN(t.fillings) AS fill FROM dfs.flatten.`test
 </code></pre></div>
 <p>Use a table alias for column fields and functions when working with complex data sets. Currently, you must use a subquery when operating on a flattened column. Eliminating the subquery and table alias in the WHERE clause, for example <code>flat.fillings[0].cal &gt; 300</code>, does not evaluate all records of the flattened data against the predicate and produces the wrong results.</p>
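
To make the distinction concrete, a hedged sketch that reuses the test.json query above: pushing the predicate through the subquery alias evaluates every flattened record.

    SELECT flat.fill
    FROM (SELECT FLATTEN(t.fillings) AS fill FROM dfs.flatten.`test.json` t) flat
    WHERE flat.fill.cal > 300;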
 
-<h3 id="example:-access-map-fields-in-a-map">Example: Access Map Fields in a Map</h3>
+<h3 id="example-access-map-fields-in-a-map">Example: Access Map Fields in a Map</h3>
 
 <p>This example uses a WHERE clause to drill down to a third level of the following JSON hierarchy to get the max_hdl greater than 160:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/kvgen/index.html
----------------------------------------------------------------------
diff --git a/docs/kvgen/index.html b/docs/kvgen/index.html
index 7356b7c..0bd4e4b 100644
--- a/docs/kvgen/index.html
+++ b/docs/kvgen/index.html
@@ -1122,7 +1122,7 @@ array down into multiple distinct rows and further query those rows.</p>
 {&quot;key&quot;: &quot;c&quot;, &quot;value&quot;: &quot;valC&quot;}
 {&quot;key&quot;: &quot;d&quot;, &quot;value&quot;: &quot;valD&quot;}
 </code></pre></div>
-<h2 id="example:-different-data-type-values">Example: Different Data Type Values</h2>
+<h2 id="example-different-data-type-values">Example: Different Data Type Values</h2>
 
 <p>Assume that a JSON file called <code>kvgendata.json</code> includes multiple records that
 look like this one:</p>