Posted to commits@drill.apache.org by br...@apache.org on 2016/01/20 23:26:56 UTC

[3/3] drill-site git commit: updated links to new odbc drivers for 1.4

updated links to new odbc drivers for 1.4


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/d9464074
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/d9464074
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/d9464074

Branch: refs/heads/asf-site
Commit: d946407492ef3a1fd480179e109afea67b3b4b82
Parents: 7804722
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Wed Jan 20 14:26:33 2016 -0800
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Wed Jan 20 14:26:33 2016 -0800

----------------------------------------------------------------------
 blog/2014/11/19/sql-on-mongodb/index.html       |   4 +-
 .../12/02/drill-top-level-project/index.html    |   2 +-
 .../index.html                                  |  15 +-
 blog/2014/12/16/whats-coming-in-2015/index.html |   4 +-
 .../index.html                                  |   2 +-
 blog/2015/07/05/drill-1.1-released/index.html   |   2 +-
 .../drill-tutorial-at-nosql-now-2015/index.html |   5 +-
 docs/aggregate-window-functions/index.html      |  10 +-
 .../index.html                                  |  14 +-
 .../apache-drill-1-1-0-release-notes/index.html |   6 +-
 .../apache-drill-1-2-0-release-notes/index.html |   2 +-
 .../index.html                                  |   2 +-
 docs/apache-drill-contribution-ideas/index.html |   2 +-
 docs/compiling-drill-from-source/index.html     |   4 +-
 docs/configuring-jreport-with-drill/index.html  |   6 +-
 docs/configuring-odbc-on-linux/index.html       |  10 +-
 docs/configuring-odbc-on-mac-os-x/index.html    |  10 +-
 docs/configuring-odbc-on-windows/index.html     |   2 +-
 .../index.html                                  |   4 +-
 .../index.html                                  |   8 +-
 .../index.html                                  |  10 +-
 docs/configuring-user-impersonation/index.html  |   2 +-
 docs/custom-function-interfaces/index.html      |   6 +-
 docs/data-type-conversion/index.html            |   2 +-
 docs/date-time-and-timestamp/index.html         |   2 +-
 .../index.html                                  |   2 +-
 docs/drill-introduction/index.html              |  10 +-
 docs/drill-patch-review-tool/index.html         |  20 +-
 docs/drill-plan-syntax/index.html               |   2 +-
 docs/drop-table/index.html                      |  14 +-
 docs/explain/index.html                         |   2 +-
 .../index.html                                  |   2 +-
 docs/how-to-partition-data/index.html           |   4 +-
 .../index.html                                  |   6 +-
 docs/installing-the-driver-on-linux/index.html  |  12 +-
 .../index.html                                  |  10 +-
 .../installing-the-driver-on-windows/index.html |  14 +-
 docs/json-data-model/index.html                 |  18 +-
 docs/kvgen/index.html                           |   2 +-
 .../index.html                                  |  30 +--
 .../index.html                                  |  28 +--
 .../index.html                                  |  36 ++--
 docs/mongodb-storage-plugin/index.html          |   2 +-
 docs/odbc-configuration-reference/index.html    |   2 +-
 docs/parquet-format/index.html                  |   2 +-
 docs/querying-hbase/index.html                  |   2 +-
 docs/querying-json-files/index.html             |   2 +-
 docs/querying-plain-text-files/index.html       |   4 +-
 docs/querying-sequence-files/index.html         |   2 +-
 docs/querying-system-tables/index.html          |  12 +-
 docs/ranking-window-functions/index.html        |  10 +-
 docs/rdbms-storage-plugin/index.html            |   2 +-
 docs/rest-api/index.html                        |  28 +--
 docs/s3-storage-plugin/index.html               |   4 +-
 docs/sequence-files/index.html                  |   4 +-
 docs/sql-extensions/index.html                  |   2 +-
 .../index.html                                  |   4 +-
 docs/tableau-examples/index.html                |  26 +--
 docs/troubleshooting/index.html                 |   8 +-
 .../index.html                                  |  16 +-
 docs/useful-research/index.html                 |   4 +-
 .../index.html                                  |   8 +-
 .../index.html                                  |   6 +-
 .../index.html                                  |  12 +-
 .../index.html                                  |  12 +-
 docs/using-qlik-sense-with-drill/index.html     |  10 +-
 .../index.html                                  |   4 +-
 docs/value-window-functions/index.html          |  12 +-
 docs/why-drill/index.html                       |  20 +-
 docs/workspaces/index.html                      |   2 +-
 faq/index.html                                  |  34 +--
 feed.xml                                        |  13 +-
 js/script.js                                    | 214 +++++++++----------
 73 files changed, 414 insertions(+), 417 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/blog/2014/11/19/sql-on-mongodb/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/11/19/sql-on-mongodb/index.html b/blog/2014/11/19/sql-on-mongodb/index.html
index 5efc20b..32301a9 100644
--- a/blog/2014/11/19/sql-on-mongodb/index.html
+++ b/blog/2014/11/19/sql-on-mongodb/index.html
@@ -149,7 +149,7 @@
 <li>Optimizations</li>
 </ul>
 
-<h2 id="drill-and-mongodb-setup-standalone-replicated-sharded">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
+<h2 id="drill-and-mongodb-setup-(standalone/replicated/sharded)">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
 
 <h3 id="standalone">Standalone</h3>
 
@@ -190,7 +190,7 @@
 
 <p>In replicated mode, whichever drillbit receives the query connects to the nearest <code>mongod</code> (local <code>mongod</code>) to read the data.</p>
 
-<h3 id="sharded-sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
+<h3 id="sharded/sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
 
 <ul>
 <li>Start Mongo processes in sharded mode</li>

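Once Drill's mongo storage plugin points at any of these topologies, collections are queryable directly with SQL. A minimal sketch, assuming the plugin is registered under the name `mongo` and a hypothetical employee.empinfo collection:

    -- `mongo` = storage plugin name, `employee` = database, `empinfo` = collection
    SELECT first_name, last_name
    FROM mongo.employee.`empinfo`
    WHERE emp_id = 1107;
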
http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/blog/2014/12/02/drill-top-level-project/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/02/drill-top-level-project/index.html b/blog/2014/12/02/drill-top-level-project/index.html
index 8eb74e2..70743fa 100644
--- a/blog/2014/12/02/drill-top-level-project/index.html
+++ b/blog/2014/12/02/drill-top-level-project/index.html
@@ -160,7 +160,7 @@
 
 <p>After almost two years of research and development, we released Drill 0.4 in August, and have continued with monthly releases since then.</p>
 
-<h2 id="what-39-s-next">What&#39;s Next</h2>
+<h2 id="what&#39;s-next">What&#39;s Next</h2>
 
 <p>Graduating to a top-level project is a significant milestone, but it&#39;s really just the beginning of the journey. In fact, we&#39;re currently wrapping up Drill 0.7, which includes hundreds of fixes and enhancements, and we expect to release that in the next couple weeks.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
index 4afef95..2b6fe1d 100644
--- a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
+++ b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
@@ -127,9 +127,8 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-
-<p><a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
+    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+<a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">12-17-2014 11:30:00</span>
     <span class="_end">12-17-2014 12:30:00</span>
@@ -153,23 +152,23 @@
 
 <p>Apache Drill committers Tomer Shiran, Jacques Nadeau, and Ted Dunning, as well as Tableau Product Manager Jeff Feng and Data Scientist Dr. Kirk Borne will be on hand to answer your questions.</p>
 
-<h4 id="tomer-shiran-apache-drill-founder-tshiran">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
+<h4 id="tomer-shiran,-apache-drill-founder-(@tshiran)">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
 
 <p>Tomer Shiran is the founder of Apache Drill, and a PMC member and committer on the project. He is VP Product Management at MapR, responsible for product strategy, roadmap and new feature development. Prior to MapR, Tomer held numerous product management and engineering roles at Microsoft, most recently as the product manager for Microsoft Internet Security &amp; Acceleration Server (now Microsoft Forefront). He is the founder of two websites that have served tens of millions of users, and received coverage in prestigious publications such as The New York Times, USA Today and The Times of London. Tomer is also the author of a 900-page programming book. He holds an MS in Computer Engineering from Carnegie Mellon University and a BS in Computer Science from Technion - Israel Institute of Technology.</p>
 
-<h4 id="jeff-feng-product-manager-tableau-software-jtfeng">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
+<h4 id="jeff-feng,-product-manager-tableau-software-(@jtfeng)">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
 
 <p>Jeff Feng is a Product Manager at Tableau and leads their Big Data product roadmap &amp; strategic vision.  In his role, he focuses on joint technology integration and partnership efforts with a number of Hadoop, NoSQL and web application partners in helping users see and understand their data.</p>
 
-<h4 id="ted-dunning-apache-drill-comitter-ted_dunning">Ted Dunning, Apache Drill Committer (@Ted_Dunning)</h4>
+<h4 id="ted-dunning,-apache-drill-comitter-(@ted_dunning)">Ted Dunning, Apache Drill Committer (@Ted_Dunning)</h4>
 
 <p>Ted Dunning is Chief Applications Architect at MapR Technologies and committer and PMC member of the Apache Mahout, Apache ZooKeeper, and Apache Drill projects and mentor for Apache Storm. He contributed to Mahout clustering, classification and matrix decomposition algorithms and helped expand the new version of the Mahout Math library. Ted was the chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems, he built fraud detection systems for ID Analytics (LifeLock) and he has been issued 24 patents to date. Ted has a PhD in computing science from the University of Sheffield. When he’s not doing data science, he plays guitar and mandolin.</p>
 
-<h4 id="jacques-nadeau-vice-president-apache-drill-intjesus">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
+<h4 id="jacques-nadeau,-vice-president,-apache-drill-(@intjesus)">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
 
 <p>Jacques Nadeau leads Apache Drill development efforts at MapR Technologies. He is an industry veteran with over 15 years of big data and analytics experience. Most recently, he was cofounder and CTO of search engine startup YapMap. Before that, he was director of new product engineering with Quigo (contextual advertising, acquired by AOL in 2007). He also built the Avenue A | Razorfish analytics data warehousing system and associated services practice (acquired by Microsoft).</p>
 
-<h4 id="dr-kirk-borne-george-mason-university-kirkdborne">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
+<h4 id="dr.-kirk-borne,-george-mason-university-(@kirkdborne)">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
 
 <p>Dr. Kirk Borne is a Transdisciplinary Data Scientist and an Astrophysicist. He is Professor of Astrophysics and Computational Science in the George Mason University School of Physics, Astronomy, and Computational Sciences. He has been at Mason since 2003, where he teaches and advises students in the graduate and undergraduate Computational Science, Informatics, and Data Science programs. Previously, he spent nearly 20 years in positions supporting NASA projects, including an assignment as NASA&#39;s Data Archive Project Scientist for the Hubble Space Telescope, and as Project Manager in NASA&#39;s Space Science Data Operations Office. He has extensive experience in big data and data science, including expertise in scientific data mining and data systems. He has published over 200 articles (research papers, conference papers, and book chapters), and given over 200 invited talks at conferences and universities worldwide.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/blog/2014/12/16/whats-coming-in-2015/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/16/whats-coming-in-2015/index.html b/blog/2014/12/16/whats-coming-in-2015/index.html
index 596ff85..bc2fad6 100644
--- a/blog/2014/12/16/whats-coming-in-2015/index.html
+++ b/blog/2014/12/16/whats-coming-in-2015/index.html
@@ -213,7 +213,7 @@
 
 <p>If you&#39;re interested in implementing a new storage plugin, I would encourage you to reach out to the Drill developer community on <a href="mailto:dev@drill.apache.org">dev@drill.apache.org</a>. I&#39;m looking forward to publishing an example of a single-query join across 10 data sources.</p>
 
-<h2 id="drill-spark-integration">Drill/Spark Integration</h2>
+<h2 id="drill/spark-integration">Drill/Spark Integration</h2>
 
 <p>We&#39;re seeing growing interest in Spark as an execution engine for data pipelines, providing an alternative to MapReduce. The Drill community is working on integrating Drill and Spark to address a few new use cases:</p>
 
@@ -239,7 +239,7 @@
 <li><strong>Workload management</strong>: A single cluster is often shared among many users and groups, and everyone expects answers in real-time. Workload management prioritizes the allocation of resources to ensure that the most important workloads get done first so that business demands can be met. Administrators need to be able to assign priorities and quotas at a fine granularity. We&#39;re working on enhancing Drill&#39;s workload management to provide these capabilities while providing tight integration with YARN and Mesos.</li>
 </ul>
 
-<h2 id="we-would-love-to-hear-from-you">We Would Love to Hear From You!</h2>
+<h2 id="we-would-love-to-hear-from-you!">We Would Love to Hear From You!</h2>
 
 <p>Are there other features you would like to see in Drill? We would love to hear from you:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
index 58be84b..a297e47 100644
--- a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
+++ b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
@@ -129,7 +129,7 @@
   <article class="post-content">
     <p>JSON has emerged in recent years as the de-facto standard data exchange format. It is being used everywhere. Front-end Web applications use JSON to maintain data and communicate with back-end applications. Web APIs are JSON-based (eg, <a href="https://dev.twitter.com/rest/public">Twitter REST APIs</a>, <a href="http://developers.marketo.com/documentation/rest/">Marketo REST APIs</a>, <a href="https://developer.github.com/v3/">GitHub API</a>). It&#39;s the format of choice for public datasets, operational log files and more.</p>
 
-<h1 id="why-is-json-a-convenient-data-exchange-format">Why is JSON a Convenient Data Exchange Format?</h1>
+<h1 id="why-is-json-a-convenient-data-exchange-format?">Why is JSON a Convenient Data Exchange Format?</h1>
 
 <p>While I won&#39;t dive into the historical roots of JSON (JavaScript Object Notation, <a href="http://en.wikipedia.org/wiki/JSON#JavaScript_eval.28.29"><code>eval()</code></a>, etc.), I do want to highlight several attributes of JSON that make it a convenient data exchange format:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/blog/2015/07/05/drill-1.1-released/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/07/05/drill-1.1-released/index.html b/blog/2015/07/05/drill-1.1-released/index.html
index 64c88e9..98ef6e6 100644
--- a/blog/2015/07/05/drill-1.1-released/index.html
+++ b/blog/2015/07/05/drill-1.1-released/index.html
@@ -167,7 +167,7 @@
   &lt;version&gt;1.1.0&lt;/version&gt;
 &lt;/dependency&gt;
 </code></pre></div>
-<h2 id="mongodb-3-0-support">MongoDB 3.0 Support</h2>
+<h2 id="mongodb-3.0-support">MongoDB 3.0 Support</h2>
 
 <p>Drill now uses MongoDB&#39;s latest Java driver and has enhanced connection pooling for better performance and resilience in large-scale deployments.  Learn more about using the <a href="https://drill.apache.org/docs/mongodb-plugin-for-apache-drill/">MongoDB plugin</a>.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
index 486f7ed..3973da6 100644
--- a/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
+++ b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
@@ -127,9 +127,8 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-
-<p><a href="/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/" title="Add to Calendar" class="addthisevent">
+    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+<a href="/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">08-20-2015 13:00:00</span>
    <span class="_end">08-20-2015 16:15:00</span>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/aggregate-window-functions/index.html
----------------------------------------------------------------------
diff --git a/docs/aggregate-window-functions/index.html b/docs/aggregate-window-functions/index.html
index ea4afa9..6d08e0b 100644
--- a/docs/aggregate-window-functions/index.html
+++ b/docs/aggregate-window-functions/index.html
@@ -1122,7 +1122,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
 
 <p>The following examples show queries that use each of the aggregate window functions in Drill. See <a href="/docs/sql-window-functions-examples/">SQL Window Functions Examples</a> for information about the data and setup for these examples.</p>
 
-<h3 id="avg">AVG()</h3>
+<h3 id="avg()">AVG()</h3>
 
 <p>The following query uses the AVG() window function with the PARTITION BY clause to calculate the average sales for each car dealer in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, avg(sales) over (partition by dealer_id) as avgsales from q1_sales;
@@ -1142,7 +1142,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+-----------+
    10 rows selected (0.455 seconds)
 </code></pre></div>
-<h3 id="count">COUNT()</h3>
+<h3 id="count()">COUNT()</h3>
 
 <p>The following query uses the COUNT (*) window function to count the number of sales in Q1, ordered by dealer_id. The word count is enclosed in back ticks (``) because it is a reserved keyword in Drill.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, count(*) over(order by dealer_id) as `count` from q1_sales;
@@ -1180,7 +1180,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+--------+
    10 rows selected (0.249 seconds)
 </code></pre></div>
-<h3 id="max">MAX()</h3>
+<h3 id="max()">MAX()</h3>
 
 <p>The following query uses the MAX() window function with the PARTITION BY clause to identify the employee with the maximum number of car sales in Q1 at each dealership. The word max is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, max(sales) over(partition by dealer_id) as `max` from q1_sales;
@@ -1200,7 +1200,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+--------+
    10 rows selected (0.402 seconds)
 </code></pre></div>
-<h3 id="min">MIN()</h3>
+<h3 id="min()">MIN()</h3>
 
 <p>The following query uses the MIN() window function with the PARTITION BY clause to identify the employee with the minimum number of car sales in Q1 at each dealership. The word min is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, min(sales) over(partition by dealer_id) as `min` from q1_sales;
@@ -1220,7 +1220,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+-------+
    10 rows selected (0.194 seconds)
 </code></pre></div>
-<h3 id="sum">SUM()</h3>
+<h3 id="sum()">SUM()</h3>
 
 <p>The following query uses the SUM() window function to total the amount of sales for each dealer in Q1. The word sum is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, sum(sales) over(partition by dealer_id) as `sum` from q1_sales;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/analyzing-the-yelp-academic-dataset/index.html
----------------------------------------------------------------------
diff --git a/docs/analyzing-the-yelp-academic-dataset/index.html b/docs/analyzing-the-yelp-academic-dataset/index.html
index aca57b5..579d287 100644
--- a/docs/analyzing-the-yelp-academic-dataset/index.html
+++ b/docs/analyzing-the-yelp-academic-dataset/index.html
@@ -1081,7 +1081,7 @@ analysis extremely easy.</p>
 
 <h2 id="querying-data-with-drill">Querying Data with Drill</h2>
 
-<h3 id="1-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
+<h3 id="1.-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; !set maxwidth 10000
 
 0: jdbc:drill:zk=local&gt; select * from
@@ -1101,7 +1101,7 @@ analysis extremely easy.</p>
 
 <p>You can directly query self-describing files such as JSON, Parquet, and text. There is no need to create metadata definitions in the Hive metastore.</p>
 
-<h3 id="2-explore-the-business-data-set-further">2. Explore the business data set further</h3>
+<h3 id="2.-explore-the-business-data-set-further">2. Explore the business data set further</h3>
 
 <h4 id="total-reviews-in-the-data-set">Total reviews in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select sum(review_count) as totalreviews 
@@ -1152,7 +1152,7 @@ group by stars order by stars desc;
 | 1.0        | 4.0        |
 +------------+------------+
 </code></pre></div>
-<h4 id="top-businesses-with-high-review-counts-gt-1000">Top businesses with high review counts (&gt; 1000)</h4>
+<h4 id="top-businesses-with-high-review-counts-(&gt;-1000)">Top businesses with high review counts (&gt; 1000)</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select name, state, city, `review_count` from
 dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json`
 where review_count &gt; 1000 order by `review_count` desc limit 10;
@@ -1196,7 +1196,7 @@ b limit 10;
 </code></pre></div>
 <p>Note how Drill can traverse and refer through multiple levels of nesting.</p>
 
-<h3 id="3-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
+<h3 id="3.-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
 
 <p>Note that the attributes column in the Yelp business data set has a different
 element for every row, representing that businesses can have separate
@@ -1244,7 +1244,7 @@ on data.</p>
 | true  | store.json.all_text_mode updated.  |
 +-------+------------------------------------+
 </code></pre></div>
-<h3 id="4-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
+<h3 id="4.-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
 
 <h4 id="number-of-restaurants-in-the-data-set">Number of restaurants in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select count(*) as TotalRestaurants from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json` where true=repeated_contains(categories,&#39;Restaurants&#39;);
@@ -1316,9 +1316,9 @@ order by count(categories[0]) desc limit 10;
 | Hair Salons          | 901           |
 +----------------------+---------------+
 </code></pre></div>
-<h3 id="5-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
+<h3 id="5.-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses.">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
 
-<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset">Take a look at the contents of the Yelp reviews dataset.</h4>
+<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset.">Take a look at the contents of the Yelp reviews dataset.</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select * 
 from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_review.json` limit 1;
 +---------------------------------+------------------------+------------------------+-------+------------+----------------------------------------------------------------------+--------+------------------------+

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/apache-drill-1-1-0-release-notes/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-1-1-0-release-notes/index.html b/docs/apache-drill-1-1-0-release-notes/index.html
index cdd223b..e006487 100644
--- a/docs/apache-drill-1-1-0-release-notes/index.html
+++ b/docs/apache-drill-1-1-0-release-notes/index.html
@@ -1048,7 +1048,7 @@
 
 <p>It has been about 6 weeks since the release of Drill 1.0.0. Today we&#39;re happy to announce the availability of Drill 1.1.0, providing 119 additional enhancements and bug fixes. </p>
 
-<h2 id="noteworthy-new-features-in-drill-1-1-0">Noteworthy New Features in Drill 1.1.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1.1.0">Noteworthy New Features in Drill 1.1.0</h2>
 
 <p>Drill now supports window functions, automatic partitioning, and Hive impersonation. </p>
 
@@ -1072,13 +1072,13 @@
 <li>AVG<br></li>
 </ul>
 
-<h3 id="automatic-partitioning-in-ctas-drill-3333"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
+<h3 id="automatic-partitioning-in-ctas-(drill-3333)"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
 
 <p>When a table is created with a partition by clause, the parquet writer will create separate files for the different partition values. The data will first be sorted by the partition keys, and the parquet writer will create a new file when it encounters a new value for the partition columns. </p>
 
 <p>When queries are issued against data that was created this way, partition pruning will work if the filter contains a partition column. Unlike directory-based partitioning, no view is required, nor is it necessary to reference the dir* column names. </p>
 
-<h3 id="hive-impersonation-support-drill-3203"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
+<h3 id="hive-impersonation-support-(drill-3203)"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
 
 <p>When impersonation is enabled, Drill now supports impersonating the user who issued the query when accessing Hive metadata/data (instead of accessing Hive as the user that started the drillbit). </p>
 
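The automatic partitioning described in the context above (DRILL-3333) can be exercised with a CTAS statement along these lines; a minimal sketch with hypothetical table and column names:

    -- The Parquet writer starts a new file whenever it encounters a new
    -- value of the partition column (data is first sorted on that column).
    CREATE TABLE dfs.tmp.`sales_by_year`
    PARTITION BY (yr)
    AS SELECT yr, prod, amount FROM dfs.`/data/sales`;

    -- Filtering on the partition column lets the planner prune files,
    -- with no view or dir* column references required:
    SELECT SUM(amount) FROM dfs.tmp.`sales_by_year` WHERE yr = 2015;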

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/apache-drill-1-2-0-release-notes/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-1-2-0-release-notes/index.html b/docs/apache-drill-1-2-0-release-notes/index.html
index 0bbe4e2..ef16779 100644
--- a/docs/apache-drill-1-2-0-release-notes/index.html
+++ b/docs/apache-drill-1-2-0-release-notes/index.html
@@ -1053,7 +1053,7 @@
 <li><a href="/docs/apache-drill-1-2-0-release-notes/#important-unresolved-issues">Important unresolved issues</a></li>
 </ul>
 
-<h2 id="noteworthy-new-features-in-drill-1-2-0">Noteworthy New Features in Drill 1.2.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1.2.0">Noteworthy New Features in Drill 1.2.0</h2>
 
 <p>This release of Drill introduces a number of enhancements, including the following ones:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/apache-drill-contribution-guidelines/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-contribution-guidelines/index.html b/docs/apache-drill-contribution-guidelines/index.html
index d2debbb..b87ae06 100644
--- a/docs/apache-drill-contribution-guidelines/index.html
+++ b/docs/apache-drill-contribution-guidelines/index.html
@@ -1200,7 +1200,7 @@ it easy to quickly view the contents of the patch in a web browser.</p>
 <li>Once your patch is accepted, be sure to upload a final version which grants rights to the ASF.</li>
 </ul>
 
-<h2 id="where-is-a-good-place-to-start-contributing">Where is a good place to start contributing?</h2>
+<h2 id="where-is-a-good-place-to-start-contributing?">Where is a good place to start contributing?</h2>
 
 <p>After getting the source code, building and running a few simple queries, one
 of the simplest places to start is to implement a DrillFunc.<br>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/apache-drill-contribution-ideas/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-contribution-ideas/index.html b/docs/apache-drill-contribution-ideas/index.html
index 2111552..2c19852 100644
--- a/docs/apache-drill-contribution-ideas/index.html
+++ b/docs/apache-drill-contribution-ideas/index.html
@@ -1102,7 +1102,7 @@ own use case). Then try to implement one.</p>
 <li>Approximate aggregate functions (such as what is available in BlinkDB)</li>
 </ul>
 
-<h2 id="support-for-new-file-format-readers-writers">Support for new file format readers/writers</h2>
+<h2 id="support-for-new-file-format-readers/writers">Support for new file format readers/writers</h2>
 
 <p>Currently Drill supports text, JSON and Parquet file formats natively when
 interacting with the file system. More readers/writers can be introduced by

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/compiling-drill-from-source/index.html
----------------------------------------------------------------------
diff --git a/docs/compiling-drill-from-source/index.html b/docs/compiling-drill-from-source/index.html
index 9255e02..1631a38 100644
--- a/docs/compiling-drill-from-source/index.html
+++ b/docs/compiling-drill-from-source/index.html
@@ -1063,10 +1063,10 @@ Maven and JDK installed:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">java -version
 mvn -version
 </code></pre></div>
-<h2 id="1-clone-the-repository">1. Clone the Repository</h2>
+<h2 id="1.-clone-the-repository">1. Clone the Repository</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">git clone https://git-wip-us.apache.org/repos/asf/drill.git
 </code></pre></div>
-<h2 id="2-compile-the-code">2. Compile the Code</h2>
+<h2 id="2.-compile-the-code">2. Compile the Code</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">cd drill
 mvn clean install -DskipTests
 </code></pre></div>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/configuring-jreport-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-jreport-with-drill/index.html b/docs/configuring-jreport-with-drill/index.html
index 524f545..fdfe5fd 100644
--- a/docs/configuring-jreport-with-drill/index.html
+++ b/docs/configuring-jreport-with-drill/index.html
@@ -1058,7 +1058,7 @@
 
 <hr>
 
-<h3 id="step-1-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
+<h3 id="step-1:-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
 
 <p>Drill provides standard JDBC connectivity to integrate with JReport. JReport 13.1 requires Drill 1.0 or later.
 For general instructions on installing the Drill JDBC driver, see <a href="/docs/using-the-jdbc-driver/">Using JDBC</a>.</p>
@@ -1078,7 +1078,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 
 <hr>
 
-<h3 id="step-2-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
+<h3 id="step-2:-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
 
 <ol>
 <li> Click Create <strong>New -&gt; Catalog…</strong></li>
@@ -1093,7 +1093,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 <li>Click <strong>Done</strong> when you have added all the tables you need. </li>
 </ol>
 
-<h3 id="step-3-use-jreport-designer">Step 3: Use JReport Designer</h3>
+<h3 id="step-3:-use-jreport-designer">Step 3: Use JReport Designer</h3>
 
 <ol>
 <li> In the Catalog Browser, right-click <strong>Queries</strong> and select <strong>Add Query…</strong></li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/configuring-odbc-on-linux/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-linux/index.html b/docs/configuring-odbc-on-linux/index.html
index 28a278b..796ab7d 100644
--- a/docs/configuring-odbc-on-linux/index.html
+++ b/docs/configuring-odbc-on-linux/index.html
@@ -1078,7 +1078,7 @@ on Linux, copy the following configuration files in <code>/opt/mapr/drillodbc/Se
 
 <hr>
 
-<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <ol>
 <li>Set the ODBCINI environment variable to point to the <code>.odbc.ini</code> in your home directory. For example:<br>
@@ -1098,7 +1098,7 @@ Only include the path to the shared libraries corresponding to the driver matchi
 
 <hr>
 
-<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. To use Drill in embedded mode, set the following properties:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ConnectionType=Direct
@@ -1184,7 +1184,7 @@ behavior of DSNs using the MapR Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1206,7 +1206,7 @@ Driver=/opt/mapr/drillodbc/lib/64/libmaprdrillodbc64.so
 </code></pre></div>
 <hr>
 
-<h2 id="step-4-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-4:-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
 
 <p>Configure the MapR Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1229,7 +1229,7 @@ SwapFilePath=/tmp
 ODBCInstLib=libiodbcinst.so
 . . .
 </code></pre></div>
-<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the MapR Drill ODBC Driver in the <code>mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/configuring-odbc-on-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-mac-os-x/index.html b/docs/configuring-odbc-on-mac-os-x/index.html
index 2f73e42..3ac9cce 100644
--- a/docs/configuring-odbc-on-mac-os-x/index.html
+++ b/docs/configuring-odbc-on-mac-os-x/index.html
@@ -1092,7 +1092,7 @@ on Mac OS X, copy the following configuration files in <code>/opt/mapr/drillodbc
 
 <hr>
 
-<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <p>Create or modify the <code>/etc/launchd.conf</code> file to set environment variables. Set the SIMBAINI variable to point to the <code>.mapr.drillodbc.ini</code> file, the ODBCSYSINI variable to the <code>.odbcinst.ini</code> file, the ODBCINI variable to the <code>.odbc.ini</code> file, and the DYLD_LIBRARY_PATH variable to the location of the dynamic linker (DYLD) libraries and to the MapR Drill ODBC Driver. If you installed the iODBC driver manager using the DMG, the DYLD libraries are installed in <code>/usr/local/iODBC/lib</code>. The launchd.conf file should look something like this:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">setenv SIMBAINI /Users/joeuser/.mapr.drillodbc.ini
@@ -1104,7 +1104,7 @@ setenv DYLD_LIBRARY_PATH /usr/local/iODBC/lib:/opt/mapr/drillodbc/lib/universal
 
 <hr>
 
-<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. </p>
 
@@ -1186,7 +1186,7 @@ behavior of DSNs using the MapR Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1202,7 +1202,7 @@ Driver=/opt/mapr/drillodbc/lib/universal/libmaprdrillodbc.dylib
 </code></pre></div>
 <hr>
 
-<h2 id="step-4-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-4:-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
 
 <p>Configure the MapR Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1221,7 +1221,7 @@ SwapFilePath=/tmp
 # iODBC
 ODBCInstLib=libiodbcinst.dylib
 </code></pre></div>
-<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the MapR Drill ODBC Driver in the <code>mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/configuring-odbc-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-windows/index.html b/docs/configuring-odbc-on-windows/index.html
index f92a5a8..1dcf527 100644
--- a/docs/configuring-odbc-on-windows/index.html
+++ b/docs/configuring-odbc-on-windows/index.html
@@ -1054,7 +1054,7 @@ sources:</p>
 <li>Create an ODBC Connection String</li>
 </ul>
 
-<h2 id="sample-odbc-configuration-dsn">Sample ODBC Configuration (DSN)</h2>
+<h2 id="sample-odbc-configuration-(dsn)">Sample ODBC Configuration (DSN)</h2>
 
 <p>You can see how to create a DSN to connect to Drill data sources by taking a look at the preconfigured sample that the installer sets up. If
 you want to create a DSN for a 32-bit application, you must use the 32-bit

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/configuring-resources-for-a-shared-drillbit/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-resources-for-a-shared-drillbit/index.html b/docs/configuring-resources-for-a-shared-drillbit/index.html
index 3ad260c..19974aa 100644
--- a/docs/configuring-resources-for-a-shared-drillbit/index.html
+++ b/docs/configuring-resources-for-a-shared-drillbit/index.html
@@ -1077,7 +1077,7 @@ The maximum degree of distribution of a query across cores and cluster nodes.</l
 Same as max per node but applies to the query as executed by the entire cluster.</li>
 </ul>
 
-<h3 id="planner-width-max_per_node">planner.width.max_per_node</h3>
+<h3 id="planner.width.max_per_node">planner.width.max_per_node</h3>
 
 <p>Configure the <code>planner.width.max_per_node</code> to achieve fine grained, absolute control over parallelization. In this context <em>width</em> refers to fanout or distribution potential: the ability to run a query in parallel across the cores on a node and the nodes on a cluster. A physical plan consists of intermediate operations, known as query &quot;fragments,&quot; that run concurrently, yielding opportunities for parallelism above and below each exchange operator in the plan. An exchange operator represents a breakpoint in the execution flow where processing can be distributed. For example, a single-process scan of a file may flow into an exchange operator, followed by a multi-process aggregation fragment.</p>
 
@@ -1087,7 +1087,7 @@ Same as max per node but applies to the query as executed by the entire cluster.
 
 <p>When you modify the default setting, you can supply any meaningful number. The system does not automatically scale down your setting.</p>
 
-<h3 id="planner-width-max_per_query">planner.width.max_per_query</h3>
+<h3 id="planner.width.max_per_query">planner.width.max_per_query</h3>
 
 <p>The max_per_query value also sets the maximum degree of parallelism for any given stage of a query, but the setting applies to the query as executed by the whole cluster (multiple nodes). In effect, the actual maximum width per query is the <em>minimum of two values</em>: min((number of nodes * width.max_per_node), width.max_per_query)</p>
 
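A worked example of that minimum, with hypothetical settings on a 4-node cluster:

    -- Hypothetical values; tune to your cluster.
    ALTER SYSTEM SET `planner.width.max_per_node` = 8;
    ALTER SYSTEM SET `planner.width.max_per_query` = 1000;
    -- Effective per-stage parallelism: min(4 nodes * 8, 1000) = 32 fragments.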

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/configuring-tibco-spotfire-server-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-tibco-spotfire-server-with-drill/index.html b/docs/configuring-tibco-spotfire-server-with-drill/index.html
index b0b2629..32eefbb 100644
--- a/docs/configuring-tibco-spotfire-server-with-drill/index.html
+++ b/docs/configuring-tibco-spotfire-server-with-drill/index.html
@@ -1059,7 +1059,7 @@
 
 <hr>
 
-<h3 id="step-1-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
+<h3 id="step-1:-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
 
 <p>Drill provides standard JDBC connectivity, making it easy to integrate data exploration capabilities on complex, schema-less data sets. Tibco Spotfire Server (TSS) requires Drill 1.0 or later, which includes the JDBC driver. The JDBC driver is bundled with the Drill configuration files, and it is recommended that you use the JDBC driver that is shipped with the specific Drill version.</p>
 
@@ -1087,7 +1087,7 @@ For Windows systems, the hosts file is located here:
 
 <hr>
 
-<h3 id="step-2-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
+<h3 id="step-2:-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
 
 <p>The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.</p>
 
@@ -1142,7 +1142,7 @@ For Windows systems, the hosts file is located here:
 </code></pre></div>
 <hr>
 
-<h3 id="step-3-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
+<h3 id="step-3:-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
 
 <p>To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.</p>
 
@@ -1159,7 +1159,7 @@ For Windows systems, the hosts file is located here:
 
 <hr>
 
-<h3 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
 
 <p>After the Drill data source has been configured in the Information Designer, the information elements can be defined. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/configuring-user-impersonation-with-hive-authorization/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation-with-hive-authorization/index.html b/docs/configuring-user-impersonation-with-hive-authorization/index.html
index 3f4c5d0..a89a4c7 100644
--- a/docs/configuring-user-impersonation-with-hive-authorization/index.html
+++ b/docs/configuring-user-impersonation-with-hive-authorization/index.html
@@ -1076,7 +1076,7 @@
 <li>Hive remote metastore repository configured<br></li>
 </ul>
 
-<h2 id="step-1-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
+<h2 id="step-1:-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
 
 <p>Modify <code>&lt;DRILL_HOME&gt;/conf/drill-override.conf</code> on each Drill node to include the required properties, set the <a href="/docs/configuring-user-impersonation/#chained-impersonation">maximum number of chained user hops</a>, and restart the Drillbit process.</p>
 
@@ -1095,7 +1095,7 @@
 <code>&lt;DRILLINSTALL_HOME&gt;/bin/drillbit.sh restart</code>  </p></li>
 </ol>
 
-<h2 id="step-2-updating-hive-site-xml">Step 2:  Updating hive-site.xml</h2>
+<h2 id="step-2:-updating-hive-site.xml">Step 2:  Updating hive-site.xml</h2>
 
 <p>Update hive-site.xml with the parameters specific to the type of authorization that you are configuring and then restart Hive.  </p>
 
@@ -1127,7 +1127,7 @@
 <strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user submitting the query. Must be set to true for the storage based model.<br>
 <strong>Value:</strong> true</p>
 
-<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
+<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1203,7 +1203,7 @@
 <strong>Description:</strong> In unsecure mode, setting this property to true causes the metastore to execute DFS operations using the client&#39;s reported user and group permissions. Note: This property must be set on both the client and server sides. This is a best effort property. If the client is set to true and the server is set to false, the client setting is ignored.<br>
 <strong>Value:</strong> false  </p>
 
-<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
+<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1251,7 +1251,7 @@
      &lt;/property&gt;    
     &lt;/configuration&gt;
 </code></pre></div>
-<h2 id="step-3-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
+<h2 id="step-3:-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
 
 <p>Modify the Hive storage plugin configuration in the Drill Web Console to include specific authorization settings. The Drillbit that you use to access the Web Console must be running.  </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/configuring-user-impersonation/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation/index.html b/docs/configuring-user-impersonation/index.html
index 1900bf1..64ec021 100644
--- a/docs/configuring-user-impersonation/index.html
+++ b/docs/configuring-user-impersonation/index.html
@@ -1109,7 +1109,7 @@ hadoop fs -chown &lt;user&gt;:&lt;group&gt; &lt;file_name&gt;
 </code></pre></div>
 <p>Example: <code>hadoop fs -chmod 750 employees.drill.view</code></p>
 
-<h3 id="modifying-system-session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
+<h3 id="modifying-system|session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
 
 <p>Use the <code>ALTER SESSION|SYSTEM</code> command with the <code>new_view_default_permissions</code> parameter and the appropriate octal code to set view permissions at the system or session level prior to creating a view.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `new_view_default_permissions` = &#39;&lt;octal_code&gt;&#39;;

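Continuing the truncated example above, a concrete session-level invocation that mirrors the chmod 750 example from the same page might be:

    ALTER SESSION SET `new_view_default_permissions` = '750';
    -- Views created in this session default to 750: full access for the
    -- owner, read/execute for the group, nothing for others.
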
http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/custom-function-interfaces/index.html
----------------------------------------------------------------------
diff --git a/docs/custom-function-interfaces/index.html b/docs/custom-function-interfaces/index.html
index 6ef4255..c992833 100644
--- a/docs/custom-function-interfaces/index.html
+++ b/docs/custom-function-interfaces/index.html
@@ -1059,13 +1059,13 @@ public static class Add1 implements DrillSimpleFunc{
 
 <p>The simple function interface includes the <code>@Param</code> and <code>@Output</code> holders where you indicate the data types that your function can process.</p>
 
-<h3 id="param-holder">@Param Holder</h3>
+<h3 id="@param-holder">@Param Holder</h3>
 
 <p>This holder indicates the data type that the function processes as input and determines the number of parameters that your function accepts within the query. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Param BigIntHolder input1;
 @Param BigIntHolder input2;
 </code></pre></div>
-<h3 id="output-holder">@Output Holder</h3>
+<h3 id="@output-holder">@Output Holder</h3>
 
 <p>This holder indicates the data type that the processing returns. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Output BigIntHolder out;
@@ -1121,7 +1121,7 @@ public static class MySecondMin implements DrillAggFunc {
 </code></pre></div>
 <p>The aggregate function interface includes holders where you indicate the data types that your function can process. This interface includes the @Param and @Output holders previously described and also includes the @Workspace holder. </p>
 
-<h3 id="workspace-holder">@Workspace holder</h3>
+<h3 id="@workspace-holder">@Workspace holder</h3>
 
 <p>This holder indicates the data type used to store intermediate data during processing. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Workspace BigIntHolder min;

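Once compiled, packaged, and registered, a function built on these interfaces is called from SQL like any built-in. A hypothetical invocation, assuming the two-parameter simple function above is registered under the name `add1`:

    -- Each argument maps to one @Param holder (input1, input2).
    SELECT add1(CAST(2 AS BIGINT), CAST(3 AS BIGINT)) FROM (VALUES(1));
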
http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/data-type-conversion/index.html
----------------------------------------------------------------------
diff --git a/docs/data-type-conversion/index.html b/docs/data-type-conversion/index.html
index 904dae4..1a9b725 100644
--- a/docs/data-type-conversion/index.html
+++ b/docs/data-type-conversion/index.html
@@ -1643,7 +1643,7 @@ use in your Drill queries as described in this section:</p>
 </tr>
 </tbody></table>
 
-<h3 id="format-specifiers-for-date-time-conversions">Format Specifiers for Date/Time Conversions</h3>
+<h3 id="format-specifiers-for-date/time-conversions">Format Specifiers for Date/Time Conversions</h3>
 
 <p>Use the following Joda format specifiers for date/time conversions:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/date-time-and-timestamp/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-and-timestamp/index.html b/docs/date-time-and-timestamp/index.html
index e082287..e0b7733 100644
--- a/docs/date-time-and-timestamp/index.html
+++ b/docs/date-time-and-timestamp/index.html
@@ -1153,7 +1153,7 @@ SELECT INTERVAL &#39;13&#39; month FROM (VALUES(1));
 +------------+
 1 row selected (0.076 seconds)
 </code></pre></div>
-<h2 id="date-time-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
+<h2 id="date,-time,-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
 
 <p>Drill supports DATE, TIME, and TIMESTAMP literals. Drill stores values in Coordinated Universal Time (UTC) and supports time functions in the range 1971 to 2037.</p>
 
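A minimal sketch of the three literal forms, in the style of the page's other examples:

    SELECT DATE '2015-01-30',
           TIME '22:55:55.23',
           TIMESTAMP '2015-01-30 22:55:55.23'
    FROM (VALUES(1));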

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/date-time-functions-and-arithmetic/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-functions-and-arithmetic/index.html b/docs/date-time-functions-and-arithmetic/index.html
index ad7dd5a..a3deb34 100644
--- a/docs/date-time-functions-and-arithmetic/index.html
+++ b/docs/date-time-functions-and-arithmetic/index.html
@@ -1550,7 +1550,7 @@ SELECT NOW() FROM (VALUES(1));
 +------------+
 1 row selected (0.062 seconds)
 </code></pre></div>
-<h2 id="date-time-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
+<h2 id="date,-time,-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
 
 <p>Is the day returned from the NOW function the same as the day returned from the CURRENT_DATE function?</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT EXTRACT(day FROM NOW()) = EXTRACT(day FROM CURRENT_DATE) FROM (VALUES(1));
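These functions and literals also combine with interval arithmetic; a minimal sketch, using an arbitrary date:

    SELECT DATE '2016-01-20' + INTERVAL '1' month FROM (VALUES(1));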

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/drill-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-introduction/index.html b/docs/drill-introduction/index.html
index b6cc901..3cc1d3b 100644
--- a/docs/drill-introduction/index.html
+++ b/docs/drill-introduction/index.html
@@ -1051,7 +1051,7 @@ applications, while still providing the familiarity and ecosystem of ANSI SQL,
 the industry-standard query language. Drill provides plug-and-play integration
 with existing Apache Hive and Apache HBase deployments.  </p>
 
-<h2 id="what-39-s-new-in-apache-drill-1-4">What&#39;s New in Apache Drill 1.4</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.4">What&#39;s New in Apache Drill 1.4</h2>
 
 <p>Drill 1.4 introduces the following improvements:</p>
 
@@ -1064,7 +1064,7 @@ with existing Apache Hive and Apache HBase deployments.  </p>
 
 <p>Drill 1.4 fixes an error that occurred when querying a Hive table using the HBaseStorageHandler (<a href="https://issues.apache.org/jira/browse/DRILL-3739">DRILL-3739</a>). To successfully query a Hive table using the HBaseStorageHandler, you need to configure the Hive storage plugin as described in the <a href="/docs/hive-storage-plugin/#connect-drill-to-the-hive-remote-metastore">Hive storage plugin documentation</a>.</p>
 
-<h2 id="what-39-s-new-in-apache-drill-1-3">What&#39;s New in Apache Drill 1.3</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.3">What&#39;s New in Apache Drill 1.3</h2>
 
 <p>This release fixes issues and adds a number of enhancements, including the following:</p>
 
@@ -1077,7 +1077,7 @@ Support for columns that evolve from one data type to another over time. </li>
 <li>Enhancements related to querying Hive tables, MongoDB collections, and Avro files</li>
 </ul>
 
-<h2 id="what-39-s-new-in-apache-drill-1-2">What&#39;s New in Apache Drill 1.2</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.2">What&#39;s New in Apache Drill 1.2</h2>
 
 <p>This release of Drill fixes <a href="/docs/apache-drill-1-2-0-release-notes/">many issues</a> and introduces a number of enhancements, including the following:</p>
 
@@ -1110,7 +1110,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Improved LIMIT processing</li>
 </ul>
 
-<h2 id="what-39-s-new-in-apache-drill-1-1">What&#39;s New in Apache Drill 1.1</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.1">What&#39;s New in Apache Drill 1.1</h2>
 
 <p>Apache Drill 1.1 includes many enhancements, including the following key features:</p>
 
@@ -1121,7 +1121,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Support for UNION and UNION ALL and better optimized plans that include UNION.</li>
 </ul>
 
-<h2 id="what-39-s-new-in-apache-drill-1-0">What&#39;s New in Apache Drill 1.0</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.0">What&#39;s New in Apache Drill 1.0</h2>
 
 <p>Apache Drill 1.0 offers the following new features:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/drill-patch-review-tool/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-patch-review-tool/index.html b/docs/drill-patch-review-tool/index.html
index d891876..62c875c 100644
--- a/docs/drill-patch-review-tool/index.html
+++ b/docs/drill-patch-review-tool/index.html
@@ -1077,7 +1077,7 @@
 
 <h3 id="drill-jira-and-reviewboard-script">Drill JIRA and Reviewboard script</h3>
 
-<h4 id="1-setup">1. Setup</h4>
+<h4 id="1.-setup">1. Setup</h4>
 
 <ol>
 <li>Follow instructions <a href="/docs/drill-patch-review-tool/#jira-command-line-tool">here</a> to setup the jira-python package</li>
@@ -1088,7 +1088,7 @@ On Mac -&gt; sudo easy_install argparse
 </code></pre></div></li>
 </ol>
 
-<h4 id="2-usage">2. Usage</h4>
+<h4 id="2.-usage">2. Usage</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">nnarkhed-mn: nnarkhed$ python drill-patch-review.py --help
 usage: drill-patch-review.py [-h] -b BRANCH -j JIRA [-s SUMMARY]
                              [-d DESCRIPTION] [-r REVIEWBOARD] [-t TESTING]
@@ -1115,7 +1115,7 @@ optional arguments:
   -rbu, --reviewboard-user Reviewboard user name
   -rbp, --reviewboard-password Reviewboard password
 </code></pre></div>
-<h4 id="3-upload-patch">3. Upload patch</h4>
+<h4 id="3.-upload-patch">3. Upload patch</h4>
 
 <ol>
 <li>Specify the branch against which the patch should be created (-b)</li>
@@ -1126,7 +1126,7 @@ optional arguments:
 <p>Example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">python drill-patch-review.py -b origin/master -j DRILL-241 -rbu tnachen -rbp password
 </code></pre></div>
-<h4 id="4-update-patch">4. Update patch</h4>
+<h4 id="4.-update-patch">4. Update patch</h4>
 
 <ol>
 <li>Specify the branch against which the patch should be created (-b)</li>
@@ -1141,12 +1141,12 @@ optional arguments:
 </code></pre></div>
 <h3 id="jira-command-line-tool">JIRA command line tool</h3>
 
-<h4 id="1-download-the-jira-command-line-package">1. Download the JIRA command line package</h4>
+<h4 id="1.-download-the-jira-command-line-package">1. Download the JIRA command line package</h4>
 
 <p>Install the jira-python package.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">sudo easy_install jira-python
 </code></pre></div>
-<h4 id="2-configure-jira-username-and-password">2. Configure JIRA username and password</h4>
+<h4 id="2.-configure-jira-username-and-password">2. Configure JIRA username and password</h4>
 
 <p>Include a jira.ini file in your $HOME directory that contains your Apache JIRA
 username and password.</p>
@@ -1159,7 +1159,7 @@ password=***********
 <p>This is a quick tutorial on using <a href="https://reviews.apache.org">Review Board</a>
 with Drill.</p>
 
-<h4 id="1-install-the-post-review-tool">1. Install the post-review tool</h4>
+<h4 id="1.-install-the-post-review-tool">1. Install the post-review tool</h4>
 
 <p>If you are on RHEL, Fedora or CentOS, follow these steps:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">sudo yum install python-setuptools
@@ -1172,7 +1172,7 @@ sudo easy_install -U RBTools
 <p>For other platforms, follow the <a href="http://www.reviewboard.org/docs/manual/dev/users/tools/post-review/">instructions</a> to
 setup the post-review tool.</p>
 
-<h4 id="2-configure-stuff">2. Configure Stuff</h4>
+<h4 id="2.-configure-stuff">2. Configure Stuff</h4>
 
 <p>Then you need to configure a few things to make it work.</p>
 
@@ -1190,7 +1190,7 @@ TARGET_GROUPS = &#39;drill-git&#39;
 
 <h3 id="faq">FAQ</h3>
 
-<h4 id="when-i-run-the-script-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
+<h4 id="when-i-run-the-script,-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">nnarkhed$python drill-patch-review.py -b trunk -j DRILL-241
 There don&#39;t seem to be any diffs
 </code></pre></div>
@@ -1201,7 +1201,7 @@ There don&#39;t seem to be any diffs
 <li>The -b branch is not pointing to the remote branch. In the example above, &quot;trunk&quot; is specified as the branch, which is the local branch. The correct value for the -b (--branch) option is the remote branch. &quot;git branch -r&quot; gives the list of the remote branch names.</li>
 </ul>
 
-<h4 id="when-i-run-the-script-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
+<h4 id="when-i-run-the-script,-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
 
 <p>Error uploading diff</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/drill-plan-syntax/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-plan-syntax/index.html b/docs/drill-plan-syntax/index.html
index 24dfbf9..e23e81f 100644
--- a/docs/drill-plan-syntax/index.html
+++ b/docs/drill-plan-syntax/index.html
@@ -1046,7 +1046,7 @@
 
     <div class="int_text" align="left">
       
-        <h3 id="whats-the-plan">What&#39;s the plan?</h3>
+        <h3 id="whats-the-plan?">What&#39;s the plan?</h3>
 
 <p>This section is about the end-to-end plan flow for Drill. The incoming query
 to Drill can be a SQL 2003 query/DrQL or MongoQL. The query is converted to a

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/drop-table/index.html
----------------------------------------------------------------------
diff --git a/docs/drop-table/index.html b/docs/drop-table/index.html
index 480f283..1954905 100644
--- a/docs/drop-table/index.html
+++ b/docs/drop-table/index.html
@@ -1101,7 +1101,7 @@
 
 <p>The following examples show results for several DROP TABLE scenarios.  </p>
 
-<h3 id="example-1-identifying-a-schema">Example 1:  Identifying a schema</h3>
+<h3 id="example-1:-identifying-a-schema">Example 1:  Identifying a schema</h3>
 
 <p>This example shows you how to identify a schema with the USE and DROP TABLE commands and successfully drop a table named <code>donuts_json</code> in the <code>&quot;donuts&quot;</code> workspace configured within the DFS storage plugin configuration.  </p>
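A minimal sketch of that sequence, assuming the dfs storage plugin defines the donuts workspace:

    USE dfs.donuts;
    DROP TABLE donuts_json;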
 
@@ -1155,7 +1155,7 @@
    Error: PARSE ERROR: Root schema is immutable. Creating or dropping tables/views is not allowed in root schema.Select a schema using &#39;USE schema&#39; command.
    [Error Id: 8c42cb6a-27eb-48fd-b42a-671a6fb58c14 on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-2-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
+<h3 id="example-2:-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
 
 <p>In the following example, the <code>donuts_json</code> table is removed from the <code>/tmp</code> workspace using the DROP TABLE command. This example assumes that the steps in the <a href="/docs/create-table-as-ctas/#complete-ctas-example">Complete CTAS Example</a> were already completed. </p>
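The drop also works without a prior USE when the table name is fully qualified; a minimal sketch:

    DROP TABLE dfs.tmp.donuts_json;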
 
@@ -1191,7 +1191,7 @@
    +-------+------------------------------+
    1 row selected (0.107 seconds)  
 </code></pre></div>
-<h3 id="example-3-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
+<h3 id="example-3:-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
 
 <p>When you create a table that writes files to a directory, you can issue the <code>DROP TABLE</code> command against the table to remove the directory. All files and subdirectories are deleted. For example, the following CTAS command writes Parquet data from the <code>nation.parquet</code> file, installed with Drill, to the <code>/tmp/name_key</code> directory.  </p>
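A sketch of a CTAS of that shape (the file path here is a placeholder for wherever the Drill sample data is installed):

    USE dfs.tmp;
    CREATE TABLE name_key AS
    SELECT N_NATIONKEY, N_NAME FROM dfs.`/path/to/sample-data/nation.parquet`;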
 
@@ -1248,7 +1248,7 @@
    +-------+---------------------------+
    1 row selected (0.086 seconds)
 </code></pre></div>
-<h3 id="example-4-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
+<h3 id="example-4:-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
 
 <p>The following example shows the result of dropping a table that does not exist because it was either already dropped or never existed. </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use dfs.tmp;
@@ -1264,7 +1264,7 @@
    Error: VALIDATION ERROR: Table [name_key] not found
    [Error Id: fc6bfe17-d009-421c-8063-d759d7ea2f4e on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-5-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
+<h3 id="example-5:-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
 
 <p>The following example shows the result of dropping a table without the appropriate permissions in the file system.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table name_key;
@@ -1272,7 +1272,7 @@
    Error: PERMISSION ERROR: Unauthorized to drop table
    [Error Id: 36f6b51a-786d-4950-a4a7-44250f153c55 on 10.10.30.167:31010] (state=,code=0)  
 </code></pre></div>
-<h3 id="example-6-dropping-and-querying-a-table-concurrently">Example 6: Dropping and querying a table concurrently</h3>
+<h3 id="example-6:-dropping-and-querying-a-table-concurrently">Example 6: Dropping and querying a table concurrently</h3>
 
 <p>The result of this scenario depends on the delta in time between one user dropping a table and another user issuing a query against the table, and results can vary. In some instances the drop succeeds and the query fails completely; in others the query completes partially and then the table is dropped, returning an exception in the middle of the query results.</p>
 
@@ -1294,7 +1294,7 @@
    Fragment 1:0
    [Error Id: 6e3c6a8d-8cfd-4033-90c4-61230af80573 on 10.10.30.167:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-7-dropping-a-table-with-different-file-formats">Example 7: Dropping a table with different file formats</h3>
+<h3 id="example-7:-dropping-a-table-with-different-file-formats">Example 7: Dropping a table with different file formats</h3>
 
 <p>The following example shows the result of dropping a table when multiple file formats exist in the directory. In this scenario, the <code>sales_dir</code> table resides in the <code>dfs.sales</code> workspace and contains Parquet, CSV, and JSON files.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/explain/index.html
----------------------------------------------------------------------
diff --git a/docs/explain/index.html b/docs/explain/index.html
index 544f387..0bfc148 100644
--- a/docs/explain/index.html
+++ b/docs/explain/index.html
@@ -1082,7 +1082,7 @@ you are selecting from, you are likely to see plan changes.</p>
 <p>This option returns costing information. You can use this option for both
 physical and logical plans.</p>
 
-<h4 id="with-implementation-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
+<h4 id="with-implementation-|-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
 
 <p>These options return the physical and logical plan information, respectively.
 The default is physical (WITH IMPLEMENTATION).</p>
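For example, to return only the logical plan for a query (the file path is a placeholder):

    EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR
    SELECT * FROM dfs.`/tmp/donuts.json`;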

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/getting-to-know-the-drill-sandbox/index.html
----------------------------------------------------------------------
diff --git a/docs/getting-to-know-the-drill-sandbox/index.html b/docs/getting-to-know-the-drill-sandbox/index.html
index f05c5e2..fc6551e 100644
--- a/docs/getting-to-know-the-drill-sandbox/index.html
+++ b/docs/getting-to-know-the-drill-sandbox/index.html
@@ -1155,7 +1155,7 @@ URI. Metadata for Hive tables is automatically available for users to query.</p>
 </code></pre></div>
 <p>Do not use this storage plugin configuration outside the sandbox. Use either the <a href="/docs/hive-storage-plugin/">remote or embedded metastore configuration</a>.</p>
 
-<h2 id="what-39-s-next">What&#39;s Next</h2>
+<h2 id="what&#39;s-next">What&#39;s Next</h2>
 
 <p>Start running queries by going to <a href="/docs/lesson-1-learn-about-the-data-set">Lesson 1: Learn About the Data
 Set</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/how-to-partition-data/index.html
----------------------------------------------------------------------
diff --git a/docs/how-to-partition-data/index.html b/docs/how-to-partition-data/index.html
index bdd246f..1a38725 100644
--- a/docs/how-to-partition-data/index.html
+++ b/docs/how-to-partition-data/index.html
@@ -1054,7 +1054,7 @@
 
 <p>Unlike with Drill 1.0 partitioning, once you use the PARTITION BY clause in a CTAS statement, no view query is subsequently required, nor is it necessary to use the <a href="/docs/querying-directories">dir* variables</a>.</p>
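A minimal sketch of such a CTAS, with hypothetical table and column names:

    CREATE TABLE dfs.tmp.sales_by_yr PARTITION BY (yr) AS
    SELECT yr, total_sales FROM dfs.tmp.`sales`;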
 
-<h2 id="drill-1-0-partitioning">Drill 1.0 Partitioning</h2>
+<h2 id="drill-1.0-partitioning">Drill 1.0 Partitioning</h2>
 
 <p>Drill 1.0 does not support the PARTITION BY clause of the CTAS command supported by later versions. Partitioning Drill 1.0-generated data involves performing the following steps.   </p>
 
@@ -1066,7 +1066,7 @@
 
 <p>After partitioning the data, you need to create a view of the partitioned data to query the data. You can use the <a href="/docs/querying-directories">dir* variables</a> in queries to refer to subdirectories in your workspace path.</p>
 
-<h3 id="drill-1-0-partitioning-example">Drill 1.0 Partitioning Example</h3>
+<h3 id="drill-1.0-partitioning-example">Drill 1.0 Partitioning Example</h3>
 
 <p>Suppose you have text files containing several years of log data. To partition the data by year and quarter, create the following hierarchy of directories:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   …/logs/1994/Q1  
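Once the data is laid out this way, the dir* variables map to the directory levels (dir0 = year, dir1 = quarter here); a minimal sketch, assuming a workspace rooted just above logs:

    SELECT dir0 AS yr, dir1 AS qtr, COUNT(*) AS recs
    FROM dfs.`/logs`
    GROUP BY dir0, dir1;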

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/installing-the-apache-drill-sandbox/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-apache-drill-sandbox/index.html b/docs/installing-the-apache-drill-sandbox/index.html
index d9a5b9e..9a95158 100644
--- a/docs/installing-the-apache-drill-sandbox/index.html
+++ b/docs/installing-the-apache-drill-sandbox/index.html
@@ -1081,7 +1081,7 @@ instructions:</p>
 <li>To install VirtualBox, see the <a href="http://dlc.sun.com.edgesuite.net/virtualbox/4.3.4/UserManual.pdf">Oracle VM VirtualBox User Manual</a>. By downloading VirtualBox, you agree to the terms and conditions of the respective license.</li>
 </ul>
 
-<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player-vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
+<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player/vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
 
 <p>Complete the following steps to install the MapR Sandbox with Apache Drill on
 VMware Player or VMware Fusion:</p>
@@ -1123,7 +1123,7 @@ The Import Virtual Machine dialog appears.</p></li>
 <li>Alternatively, access the command line on the VM: Press Alt+F2 on Windows or Option+F5 on Mac.<br></li>
 </ul>
 
-<h3 id="what-39-s-next">What&#39;s Next</h3>
+<h3 id="what&#39;s-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill
@@ -1173,7 +1173,7 @@ VirtualBox:</p>
 </ul></li>
 </ol>
 
-<h3 id="what-39-s-next">What&#39;s Next</h3>
+<h3 id="what&#39;s-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill Sandbox</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/installing-the-driver-on-linux/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-linux/index.html b/docs/installing-the-driver-on-linux/index.html
index f1318c2..6dc65a2 100644
--- a/docs/installing-the-driver-on-linux/index.html
+++ b/docs/installing-the-driver-on-linux/index.html
@@ -1040,7 +1040,7 @@
 
     </div>
 
-     
+     Jan 20, 2016
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1090,16 +1090,16 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <p>To install the driver, you need Administrator privileges on the computer.</p>
 
-<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Download either the 32- or 64-bit driver:</p>
 
 <ul>
-<li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.0.1000/MapRDrillODBC-32bit-1.2.0.i686.rpm">MapR Drill ODBC Driver (32-bit)</a></li>
-<li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.0.1000/MapRDrillODBC-1.2.0.x86_64.rpm">MapR Drill ODBC Driver (64-bit)</a></li>
+<li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.1.1000/MapRDrillODBC-32bit-1.2.1.i686.rpm">MapR Drill ODBC Driver (32-bit)</a></li>
+<li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.1.1000/MapRDrillODBC-1.2.1.x86_64.rpm">MapR Drill ODBC Driver (64-bit)</a></li>
 </ul>
 
-<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1154,7 +1154,7 @@ locations and descriptions:</p>
 </tr>
 </tbody></table>
 
-<h2 id="step-3-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
+<h2 id="step-3:-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
 
 <p>To check the version of the driver you installed, use the following case-sensitive command on the terminal command line:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/installing-the-driver-on-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-mac-os-x/index.html b/docs/installing-the-driver-on-mac-os-x/index.html
index 3f46877..deb40d5 100644
--- a/docs/installing-the-driver-on-mac-os-x/index.html
+++ b/docs/installing-the-driver-on-mac-os-x/index.html
@@ -1040,7 +1040,7 @@
 
     </div>
 
-     
+     Jan 20, 2016
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1075,15 +1075,15 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Click the following link to download the driver:  </p>
 
-<p><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.0.1000/MapRDrillODBC.dmg">MapR Drill ODBC Driver for Mac</a></p>
+<p><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.1.1000/MapRDrillODBC.dmg">MapR Drill ODBC Driver for Mac</a></p>
 
 <hr>
 
-<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1105,7 +1105,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 <li><code>/opt/mapr/drillodbc/lib/universal</code> – Binaries directory</li>
 </ul>
 
-<h2 id="step-3-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
+<h2 id="step-3:-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
 
 <p>To check the version of the driver you installed, use the following command on the terminal command line:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">$ pkgutil --info mapr.drillodbc

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d9464074/docs/installing-the-driver-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-windows/index.html b/docs/installing-the-driver-on-windows/index.html
index fe29921..5f62198 100644
--- a/docs/installing-the-driver-on-windows/index.html
+++ b/docs/installing-the-driver-on-windows/index.html
@@ -1040,7 +1040,7 @@
 
     </div>
 
-     Jan 13, 2016
+     Jan 20, 2016
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1083,18 +1083,18 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Download the installer that corresponds to the bitness of the client application from which you want to create an ODBC connection:</p>
 
 <ul>
-<li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc/MapRDrillODBC32.msi">MapR Drill ODBC Driver (32-bit)</a><br></li>
-<li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc/MapRDrillODBC64.msi">MapR Drill ODBC Driver (64-bit)</a></li>
+<li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.1.1000/MapRDrillODBC32.msi">MapR Drill ODBC Driver (32-bit)</a><br></li>
+<li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.1.1000/MapRDrillODBC64.msi">MapR Drill ODBC Driver (64-bit)</a></li>
 </ul>
 
 <hr>
 
-<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <ol>
 <li>Double-click the installer from the location where you downloaded it.</li>
@@ -1107,7 +1107,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-3-verify-the-installation">Step 3: Verify the installation</h2>
+<h2 id="step-3:-verify-the-installation">Step 3: Verify the installation</h2>
 
 <p>To verify the installation, perform the following steps:</p>
 
@@ -1124,7 +1124,7 @@ The ODBC Data Source Administrator dialog appears.
 
 <p>You need to configure and start Drill before <a href="/docs/testing-the-odbc-connection/">testing</a> the ODBC Data Source Administrator.</p>
 
-<h2 id="the-tableau-data-connection-customization-tdc-file">The Tableau Data-connection Customization (TDC) File</h2>
+<h2 id="the-tableau-data-connection-customization-(tdc)-file">The Tableau Data-connection Customization (TDC) File</h2>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance
 when using Tableau.</p>