Posted to commits@drill.apache.org by ts...@apache.org on 2015/12/16 06:14:57 UTC

[3/3] drill-site git commit: Update website

Update website


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/90078fe1
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/90078fe1
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/90078fe1

Branch: refs/heads/asf-site
Commit: 90078fe1593054afd230f1ce643f7314d3c49857
Parents: debb5d3
Author: Tomer Shiran <ts...@gmail.com>
Authored: Tue Dec 15 21:14:49 2015 -0800
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Tue Dec 15 21:14:49 2015 -0800

----------------------------------------------------------------------
 blog/2014/11/19/sql-on-mongodb/index.html       |   4 +-
 .../12/02/drill-top-level-project/index.html    |   2 +-
 .../index.html                                  |  10 +-
 blog/2014/12/16/whats-coming-in-2015/index.html |   4 +-
 .../index.html                                  |   2 +-
 blog/2015/07/05/drill-1.1-released/index.html   |   2 +-
 blog/2015/12/14/drill-1.4-released/index.html   | 205 +++++++++++++++++++
 blog/index.html                                 |   5 +
 docs/aggregate-window-functions/index.html      |  10 +-
 .../index.html                                  |  14 +-
 .../apache-drill-1-1-0-release-notes/index.html |   6 +-
 .../apache-drill-1-2-0-release-notes/index.html |   2 +-
 .../index.html                                  |   2 +-
 docs/apache-drill-contribution-ideas/index.html |   2 +-
 docs/compiling-drill-from-source/index.html     |   4 +-
 docs/configuring-jreport-with-drill/index.html  |   6 +-
 docs/configuring-odbc-on-linux/index.html       |  10 +-
 docs/configuring-odbc-on-mac-os-x/index.html    |  10 +-
 docs/configuring-odbc-on-windows/index.html     |   2 +-
 .../index.html                                  |   4 +-
 .../index.html                                  |   8 +-
 .../index.html                                  |  10 +-
 docs/configuring-user-impersonation/index.html  |   2 +-
 docs/custom-function-interfaces/index.html      |   6 +-
 docs/data-type-conversion/index.html            |   2 +-
 docs/date-time-and-timestamp/index.html         |   2 +-
 .../index.html                                  |   2 +-
 docs/drill-introduction/index.html              |   8 +-
 docs/drill-patch-review-tool/index.html         |  20 +-
 docs/drill-plan-syntax/index.html               |   2 +-
 docs/drop-table/index.html                      |  14 +-
 docs/explain/index.html                         |   2 +-
 .../index.html                                  |   2 +-
 docs/how-to-partition-data/index.html           |   4 +-
 .../index.html                                  |   6 +-
 docs/installing-the-driver-on-linux/index.html  |   6 +-
 .../index.html                                  |   6 +-
 .../installing-the-driver-on-windows/index.html |   8 +-
 docs/json-data-model/index.html                 |  18 +-
 docs/kvgen/index.html                           |   2 +-
 .../index.html                                  |  30 +--
 .../index.html                                  |  28 +--
 .../index.html                                  |  36 ++--
 docs/mongodb-storage-plugin/index.html          |   2 +-
 docs/odbc-configuration-reference/index.html    |   2 +-
 docs/parquet-format/index.html                  |   2 +-
 docs/querying-hbase/index.html                  |   2 +-
 docs/querying-json-files/index.html             |   2 +-
 docs/querying-plain-text-files/index.html       |   4 +-
 docs/querying-sequence-files/index.html         |   2 +-
 docs/querying-system-tables/index.html          |  12 +-
 docs/ranking-window-functions/index.html        |  10 +-
 docs/rdbms-storage-plugin/index.html            |   2 +-
 docs/rest-api/index.html                        |  28 +--
 docs/s3-storage-plugin/index.html               |   4 +-
 docs/sequence-files/index.html                  |   4 +-
 docs/sql-extensions/index.html                  |   2 +-
 .../index.html                                  |   4 +-
 docs/tableau-examples/index.html                |  26 +--
 docs/troubleshooting/index.html                 |   8 +-
 .../index.html                                  |  12 +-
 docs/useful-research/index.html                 |   4 +-
 .../index.html                                  |   8 +-
 .../index.html                                  |   6 +-
 .../index.html                                  |  12 +-
 .../index.html                                  |  12 +-
 docs/using-qlik-sense-with-drill/index.html     |  10 +-
 .../index.html                                  |   4 +-
 docs/value-window-functions/index.html          |  12 +-
 docs/why-drill/index.html                       |  20 +-
 docs/workspaces/index.html                      |   2 +-
 faq/index.html                                  |  34 +--
 feed.xml                                        |  76 ++++---
 index.html                                      |   2 +-
 74 files changed, 549 insertions(+), 311 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/blog/2014/11/19/sql-on-mongodb/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/11/19/sql-on-mongodb/index.html b/blog/2014/11/19/sql-on-mongodb/index.html
index 5efc20b..32301a9 100644
--- a/blog/2014/11/19/sql-on-mongodb/index.html
+++ b/blog/2014/11/19/sql-on-mongodb/index.html
@@ -149,7 +149,7 @@
 <li>Optimizations</li>
 </ul>
 
-<h2 id="drill-and-mongodb-setup-standalone-replicated-sharded">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
+<h2 id="drill-and-mongodb-setup-(standalone/replicated/sharded)">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
 
 <h3 id="standalone">Standalone</h3>
 
@@ -190,7 +190,7 @@
 
 <p>In replicated mode, whichever drillbit receives the query connects to the nearest <code>mongod</code> (local <code>mongod</code>) to read the data.</p>
 
-<h3 id="sharded-sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
+<h3 id="sharded/sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
 
 <ul>
 <li>Start Mongo processes in sharded mode</li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/blog/2014/12/02/drill-top-level-project/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/02/drill-top-level-project/index.html b/blog/2014/12/02/drill-top-level-project/index.html
index 8eb74e2..70743fa 100644
--- a/blog/2014/12/02/drill-top-level-project/index.html
+++ b/blog/2014/12/02/drill-top-level-project/index.html
@@ -160,7 +160,7 @@
 
 <p>After almost two years of research and development, we released Drill 0.4 in August, and continued with monthly releases since then.</p>
 
-<h2 id="what-39-s-next">What&#39;s Next</h2>
+<h2 id="what&#39;s-next">What&#39;s Next</h2>
 
 <p>Graduating to a top-level project is a significant milestone, but it&#39;s really just the beginning of the journey. In fact, we&#39;re currently wrapping up Drill 0.7, which includes hundreds of fixes and enhancements, and we expect to release that in the next couple weeks.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
index 4afef95..5d762fd 100644
--- a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
+++ b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
@@ -153,23 +153,23 @@
 
 <p>Apache Drill committers Tomer Shiran, Jacques Nadeau, and Ted Dunning, as well as Tableau Product Manager Jeff Feng and Data Scientist Dr. Kirk Borne will be on hand to answer your questions.</p>
 
-<h4 id="tomer-shiran-apache-drill-founder-tshiran">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
+<h4 id="tomer-shiran,-apache-drill-founder-(@tshiran)">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
 
 <p>Tomer Shiran is the founder of Apache Drill, and a PMC member and committer on the project. He is VP Product Management at MapR, responsible for product strategy, roadmap and new feature development. Prior to MapR, Tomer held numerous product management and engineering roles at Microsoft, most recently as the product manager for Microsoft Internet Security &amp; Acceleration Server (now Microsoft Forefront). He is the founder of two websites that have served tens of millions of users, and received coverage in prestigious publications such as The New York Times, USA Today and The Times of London. Tomer is also the author of a 900-page programming book. He holds an MS in Computer Engineering from Carnegie Mellon University and a BS in Computer Science from Technion - Israel Institute of Technology.</p>
 
-<h4 id="jeff-feng-product-manager-tableau-software-jtfeng">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
+<h4 id="jeff-feng,-product-manager-tableau-software-(@jtfeng)">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
 
 <p>Jeff Feng is a Product Manager at Tableau and leads their Big Data product roadmap &amp; strategic vision.  In his role, he focuses on joint technology integration and partnership efforts with a number of Hadoop, NoSQL and web application partners in helping users see and understand their data.</p>
 
-<h4 id="ted-dunning-apache-drill-comitter-ted_dunning">Ted Dunning, Apache Drill Comitter (@Ted_Dunning)</h4>
+<h4 id="ted-dunning,-apache-drill-comitter-(@ted_dunning)">Ted Dunning, Apache Drill Comitter (@Ted_Dunning)</h4>
 
 <p>Ted Dunning is Chief Applications Architect at MapR Technologies and committer and PMC member of the Apache Mahout, Apache ZooKeeper, and Apache Drill projects and mentor for Apache Storm. He contributed to Mahout clustering, classification and matrix decomposition algorithms  and helped expand the new version of Mahout Math library. Ted was the chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems, he built fraud detection systems for ID Analytics (LifeLock) and he has issued 24 patents to date. Ted has a PhD in computing science from University of Sheffield. When he’s not doing data science, he plays guitar and mandolin.</p>
 
-<h4 id="jacques-nadeau-vice-president-apache-drill-intjesus">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
+<h4 id="jacques-nadeau,-vice-president,-apache-drill-(@intjesus)">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
 
 <p>Jacques Nadeau leads Apache Drill development efforts at MapR Technologies. He is an industry veteran with over 15 years of big data and analytics experience. Most recently, he was cofounder and CTO of search engine startup YapMap. Before that, he was director of new product engineering with Quigo (contextual advertising, acquired by AOL in 2007). He also built the Avenue A | Razorfish analytics data warehousing system and associated services practice (acquired by Microsoft).</p>
 
-<h4 id="dr-kirk-borne-george-mason-university-kirkdborne">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
+<h4 id="dr.-kirk-borne,-george-mason-university-(@kirkdborne)">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
 
 <p>Dr. Kirk Borne is a Transdisciplinary Data Scientist and an Astrophysicist. He is Professor of Astrophysics and Computational Science in the George Mason University School of Physics, Astronomy, and Computational Sciences. He has been at Mason since 2003, where he teaches and advises students in the graduate and undergraduate Computational Science, Informatics, and Data Science programs. Previously, he spent nearly 20 years in positions supporting NASA projects, including an assignment as NASA&#39;s Data Archive Project Scientist for the Hubble Space Telescope, and as Project Manager in NASA&#39;s Space Science Data Operations Office. He has extensive experience in big data and data science, including expertise in scientific data mining and data systems. He has published over 200 articles (research papers, conference papers, and book chapters), and given over 200 invited talks at conferences and universities worldwide.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/blog/2014/12/16/whats-coming-in-2015/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/16/whats-coming-in-2015/index.html b/blog/2014/12/16/whats-coming-in-2015/index.html
index 596ff85..bc2fad6 100644
--- a/blog/2014/12/16/whats-coming-in-2015/index.html
+++ b/blog/2014/12/16/whats-coming-in-2015/index.html
@@ -213,7 +213,7 @@
 
 <p>If you&#39;re interested in implementing a new storage plugin, I would encourage you to reach out to the Drill developer community on <a href="mailto:dev@drill.apache.org">dev@drill.apache.org</a>. I&#39;m looking forward to publishing an example of a single-query join across 10 data sources.</p>
 
-<h2 id="drill-spark-integration">Drill/Spark Integration</h2>
+<h2 id="drill/spark-integration">Drill/Spark Integration</h2>
 
 <p>We&#39;re seeing growing interest in Spark as an execution engine for data pipelines, providing an alternative to MapReduce. The Drill community is working on integrating Drill and Spark to address a few new use cases:</p>
 
@@ -239,7 +239,7 @@
 <li><strong>Workload management</strong>: A single cluster is often shared among many users and groups, and everyone expects answers in real-time. Workload management prioritizes the allocation of resources to ensure that the most important workloads get done first so that business demands can be met. Administrators need to be able to assign priorities and quotas at a fine granularity. We&#39;re working on enhancing Drill&#39;s workload management to provide these capabilities while providing tight integration with YARN and Mesos.</li>
 </ul>
 
-<h2 id="we-would-love-to-hear-from-you">We Would Love to Hear From You!</h2>
+<h2 id="we-would-love-to-hear-from-you!">We Would Love to Hear From You!</h2>
 
 <p>Are there other features you would like to see in Drill? We would love to hear from you:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
index 58be84b..a297e47 100644
--- a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
+++ b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
@@ -129,7 +129,7 @@
   <article class="post-content">
     <p>JSON has emerged in recent years as the de-facto standard data exchange format. It is being used everywhere. Front-end Web applications use JSON to maintain data and communicate with back-end applications. Web APIs are JSON-based (eg, <a href="https://dev.twitter.com/rest/public">Twitter REST APIs</a>, <a href="http://developers.marketo.com/documentation/rest/">Marketo REST APIs</a>, <a href="https://developer.github.com/v3/">GitHub API</a>). It&#39;s the format of choice for public datasets, operational log files and more.</p>
 
-<h1 id="why-is-json-a-convenient-data-exchange-format">Why is JSON a Convenient Data Exchange Format?</h1>
+<h1 id="why-is-json-a-convenient-data-exchange-format?">Why is JSON a Convenient Data Exchange Format?</h1>
 
 <p>While I won&#39;t dive into the historical roots of JSON (JavaScript Object Notation, <a href="http://en.wikipedia.org/wiki/JSON#JavaScript_eval.28.29"><code>eval()</code></a>, etc.), I do want to highlight several attributes of JSON that make it a convenient data exchange format:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/blog/2015/07/05/drill-1.1-released/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/07/05/drill-1.1-released/index.html b/blog/2015/07/05/drill-1.1-released/index.html
index 64c88e9..98ef6e6 100644
--- a/blog/2015/07/05/drill-1.1-released/index.html
+++ b/blog/2015/07/05/drill-1.1-released/index.html
@@ -167,7 +167,7 @@
   &lt;version&gt;1.1.0&lt;/version&gt;
 &lt;/dependency&gt;
 </code></pre></div>
-<h2 id="mongodb-3-0-support">MongoDB 3.0 Support</h2>
+<h2 id="mongodb-3.0-support">MongoDB 3.0 Support</h2>
 
 <p>Drill now uses MongoDB&#39;s latest Java driver and has enhanced connection pooling for better performance and resilience in large-scale deployments.  Learn more about using the <a href="https://drill.apache.org/docs/mongodb-plugin-for-apache-drill/">MongoDB plugin</a>.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/blog/2015/12/14/drill-1.4-released/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/12/14/drill-1.4-released/index.html b/blog/2015/12/14/drill-1.4-released/index.html
new file mode 100644
index 0000000..232fc44
--- /dev/null
+++ b/blog/2015/12/14/drill-1.4-released/index.html
@@ -0,0 +1,205 @@
+<!DOCTYPE html>
+<html>
+
+<head>
+
+<meta charset="UTF-8">
+<meta name=viewport content="width=device-width, initial-scale=1">
+
+
+<title>Drill 1.4 Released - Apache Drill</title>
+
+<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css" rel="stylesheet" type="text/css"/>
+<link href='//fonts.googleapis.com/css?family=PT+Sans' rel='stylesheet' type='text/css'/>
+<link href="/css/site.css" rel="stylesheet" type="text/css"/>
+
+<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon"/>
+<link rel="icon" href="/favicon.ico" type="image/x-icon"/>
+
+<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js" language="javascript" type="text/javascript"></script>
+<script src="//cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.easing.min.js" language="javascript" type="text/javascript"></script>
+<script language="javascript" type="text/javascript" src="/js/modernizr.custom.js"></script>
+<script language="javascript" type="text/javascript" src="/js/script.js"></script>
+<script language="javascript" type="text/javascript" src="/js/drill.js"></script>
+
+</head>
+
+
+<body onResize="resized();">
+  <div class="page-wrap">
+    <div class="bui"></div>
+
+<div id="menu" class="mw">
+<ul>
+  <li class='toc-categories'>
+  <a class="expand-toc-icon" href="javascript:void(0);"><i class="fa fa-bars"></i></a>
+  </li>
+  <li class="logo"><a href="/"></a></li>
+  <li class='expand-menu'>
+  <a href="javascript:void(0);"><span class='menu-text'>Menu</span><span class='expand-icon'><i class="fa fa-bars"></i></span></a>
+  </li>
+  <li class='clear-float'></li>
+  <li class="documentation-menu">
+    <a href="/docs/">Documentation</a>
+    <ul>
+      
+        <li><a href="/docs/getting-started/">Getting Started</a></li>
+      
+        <li><a href="/docs/architecture/">Architecture</a></li>
+      
+        <li><a href="/docs/tutorials/">Tutorials</a></li>
+      
+        <li><a href="/docs/install-drill/">Install Drill</a></li>
+      
+        <li><a href="/docs/configure-drill/">Configure Drill</a></li>
+      
+        <li><a href="/docs/connect-a-data-source/">Connect a Data Source</a></li>
+      
+        <li><a href="/docs/odbc-jdbc-interfaces/">ODBC/JDBC Interfaces</a></li>
+      
+        <li><a href="/docs/query-data/">Query Data</a></li>
+      
+        <li><a href="/docs/performance-tuning/">Performance Tuning</a></li>
+      
+        <li><a href="/docs/log-and-debug/">Log and Debug</a></li>
+      
+        <li><a href="/docs/sql-reference/">SQL Reference</a></li>
+      
+        <li><a href="/docs/data-sources-and-file-formats/">Data Sources and File Formats</a></li>
+      
+        <li><a href="/docs/develop-custom-functions/">Develop Custom Functions</a></li>
+      
+        <li><a href="/docs/troubleshooting/">Troubleshooting</a></li>
+      
+        <li><a href="/docs/developer-information/">Developer Information</a></li>
+      
+        <li><a href="/docs/release-notes/">Release Notes</a></li>
+      
+        <li><a href="/docs/sample-datasets/">Sample Datasets</a></li>
+      
+        <li><a href="/docs/project-bylaws/">Project Bylaws</a></li>
+      
+    </ul>
+  </li>
+  <li class='nav'>
+    <a href="/community-resources/">Community</a>
+    <ul>
+      <li><a href="/team/">Team</a></li>
+      <li><a href="/mailinglists/">Mailing Lists</a></li>
+      <li><a href="/community-resources/">Community Resources</a></li>
+    </ul>
+  </li>
+  <li class='nav'><a href="/faq/">FAQ</a></li>
+  <li class='nav'><a href="/blog/">Blog</a></li>
+  <li id="twitter-menu-item"><a href="https://twitter.com/apachedrill" title="apachedrill on twitter" target="_blank"><img src="/images/twitter_32_26_white.png" alt="twitter logo" align="center"></a> </li>
+  <li class='search-bar'>
+    <form id="drill-search-form">
+      <input type="text" placeholder="Search Apache Drill" id="drill-search-term" />
+      <button type="submit">
+        <i class="fa fa-search"></i>
+      </button>
+    </form>
+  </li>
+  <li class="d">
+    <a href="/download/">
+      <i class="fa fa-cloud-download"></i> Download
+    </a>
+  </li>
+</ul>
+</div>
+
+    <link href="/css/content.css" rel="stylesheet" type="text/css">
+
+<div class="post int_text">
+  <header class="post-header">
+    <div class="int_title">
+      <h1 class="post-title">Drill 1.4 Released</h1>
+    </div>
+    <p class="post-meta">
+    
+      
+      
+      <strong>Author:</strong> Jacques Nadeau (PMC Chair and Committer, Apache Drill)<br />
+    
+<strong>Date:</strong> Dec 14, 2015
+</p>
+  </header>
+  <div class="addthis_sharing_toolbox"></div>
+
+  <article class="post-content">
+    <p>Apache Drill 1.4 (<a href="https://drill.apache.org/download/">available here</a>) includes bug fixes and enhancements from <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332947&amp;projectId=12313820">32 
+JIRAs</a>.</p>
+
+<p>Here&#39;s a list of highlights from this newest version of Drill:</p>
+
+<h2 id="select-with-options">Select With Options</h2>
+
+<p>Queries that change storage plugin configuration options can now be written. For instance, to query the file <code>CO.dat</code>, the following can be used:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM TABLE(dfs.`/path/to/CO.dat`(type =&gt; &#39;text&#39;));
+</code></pre></div>
+<p>If a version of <code>CO.dat</code> with a header is available, the first entries of the file can be parsed as column names by 
+passing an <code>extractHeader =&gt; true</code> argument. We can also use a pipe symbol, &#39;|&#39;, as the delimiter by passing 
+<code>fieldDelimiter</code>:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM TABLE(dfs.`/path/to/CO.dat`(type =&gt; &#39;text&#39;, fieldDelimiter =&gt; &#39;|&#39;, extractHeader =&gt; true));
+</code></pre></div>
+<p>Additionally, <code>lineDelimiter</code> can be used to indicate a delimiter for new lines, such as the double-pipe symbol, &#39;||&#39;, in this example:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM TABLE(dfs.`/path/to/CO.dat`(type =&gt; &#39;text&#39;, lineDelimiter =&gt; &#39;||&#39;, fieldDelimiter =&gt; &#39;|&#39;));
+</code></pre></div>
+<h2 id="improved-behavior-for-csv-header-parsing">Improved Behavior For CSV Header Parsing</h2>
+
+<p>When header parsing is enabled, queries to CSV files no longer raise an exception if the indicated column does not 
+exist. Instead, Drill now returns <code>null</code> values for that column.</p>
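+
+<p>For instance, assuming <code>CO.dat</code> carries a header line, a query that references a column name missing from that header now simply returns <code>null</code> values instead of failing (the column name below is illustrative):</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT `missing_column` FROM TABLE(dfs.`/path/to/CO.dat`(type =&gt; &#39;text&#39;, extractHeader =&gt; true));
+</code></pre></div>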
+
+<h2 id="json-formatting">JSON Formatting</h2>
+
+<p>For more compact results, Drill&#39;s default behavior of pretty-printing JSON can now be changed by setting the variable 
+<code>store.json.writer.uglify</code> to <code>true</code>. As in:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET store.json.writer.uglify = true;
+</code></pre></div>
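+<p>With this option enabled, JSON output is written in compact, single-line form rather than the default pretty-printed layout, for example (illustrative record):</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">{"dealer_id":1,"emp_name":"Ferris Brown","sales":19745}
+</code></pre></div>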
+<h2 id="better-logging">Better Logging</h2>
+
+<p>SQL query text is now logged to the <code>drillbit.log</code> file.</p>
+
+<h2 id="other-improvements">Other Improvements</h2>
+
+<p>This version also features schema change compatible sorting, better Apache Hive support, and more efficient caching for Parquet file metadata.</p>
+
+  </article>
+ <div id="disqus_thread"></div>
+    <script type="text/javascript">
+        /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
+        var disqus_shortname = 'drill'; // required: replace example with your forum shortname
+
+        /* * * DON'T EDIT BELOW THIS LINE * * */
+        (function() {
+            var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
+            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
+            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
+        })();
+    </script>
+    <noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
+    
+</div>
+<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=ra-548b2caa33765e8d" async="async"></script>
+
+  </div>
+  <p class="push"></p>
+<div id="footer" class="mw">
+<div class="wrapper">
+Copyright © 2012-2014 The Apache Software Foundation, licensed under the Apache License, Version 2.0.<br>
+Apache and the Apache feather logo are trademarks of The Apache Software Foundation. Other names appearing on the site may be trademarks of their respective owners.<br/><br/>
+</div>
+</div>
+
+  <script>
+(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+
+ga('create', 'UA-53379651-1', 'auto');
+ga('send', 'pageview');
+</script>
+<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=ra-548b2caa33765e8d" async="async"></script>
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/blog/index.html
----------------------------------------------------------------------
diff --git a/blog/index.html b/blog/index.html
index 8aae181..7592f6d 100644
--- a/blog/index.html
+++ b/blog/index.html
@@ -114,6 +114,11 @@
 </div>
 
 <div class="int_text" align="left"><!-- previously: site.posts -->
+<p><a class="post-link" href="/blog/2015/12/14/drill-1.4-released/">Drill 1.4 Released</a><br/>
+<span class="post-date">Posted on Dec 14, 2015
+by Jacques Nadeau</span>
+<br/>Apache Drill 1.4's highlights are&#58; "select with options" queries that can change storage plugin settings, improved behavior when parsing CSV file header names, a variable to set non-pretty (i.e., compact) printing of JSON, and better drillbit.log files that include query text.</p>
+<!-- previously: site.posts -->
 <p><a class="post-link" href="/blog/2015/11/23/drill-1.3-released/">Drill 1.3 Released</a><br/>
 <span class="post-date">Posted on Nov 23, 2015
 by Jacques Nadeau</span>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/aggregate-window-functions/index.html
----------------------------------------------------------------------
diff --git a/docs/aggregate-window-functions/index.html b/docs/aggregate-window-functions/index.html
index cefd8cf..9c49da2 100644
--- a/docs/aggregate-window-functions/index.html
+++ b/docs/aggregate-window-functions/index.html
@@ -1124,7 +1124,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
 
 <p>The following examples show queries that use each of the aggregate window functions in Drill. See <a href="/docs/sql-window-functions-examples/">SQL Window Functions Examples</a> for information about the data and setup for these examples.</p>
 
-<h3 id="avg">AVG()</h3>
+<h3 id="avg()">AVG()</h3>
 
 <p>The following query uses the AVG() window function with the PARTITION BY clause to calculate the average sales for each car dealer in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, avg(sales) over (partition by dealer_id) as avgsales from q1_sales;
@@ -1144,7 +1144,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+-----------+
    10 rows selected (0.455 seconds)
 </code></pre></div>
-<h3 id="count">COUNT()</h3>
+<h3 id="count()">COUNT()</h3>
 
 <p>The following query uses the COUNT (*) window function to count the number of sales in Q1, ordered by dealer_id. The word count is enclosed in back ticks (``) because it is a reserved keyword in Drill.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, count(*) over(order by dealer_id) as `count` from q1_sales;
@@ -1182,7 +1182,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+--------+
    10 rows selected (0.249 seconds)
 </code></pre></div>
-<h3 id="max">MAX()</h3>
+<h3 id="max()">MAX()</h3>
 
 <p>The following query uses the MAX() window function with the PARTITION BY clause to identify the employee with the maximum number of car sales in Q1 at each dealership. The word max is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, max(sales) over(partition by dealer_id) as `max` from q1_sales;
@@ -1202,7 +1202,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+--------+
    10 rows selected (0.402 seconds)
 </code></pre></div>
-<h3 id="min">MIN()</h3>
+<h3 id="min()">MIN()</h3>
 
 <p>The following query uses the MIN() window function with the PARTITION BY clause to identify the employee with the minimum number of car sales in Q1 at each dealership. The word min is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, min(sales) over(partition by dealer_id) as `min` from q1_sales;
@@ -1222,7 +1222,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+-------+
    10 rows selected (0.194 seconds)
 </code></pre></div>
-<h3 id="sum">SUM()</h3>
+<h3 id="sum()">SUM()</h3>
 
 <p>The following query uses the SUM() window function to total the amount of sales for each dealer in Q1. The word sum is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, sum(sales) over(partition by dealer_id) as `sum` from q1_sales;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/analyzing-the-yelp-academic-dataset/index.html
----------------------------------------------------------------------
diff --git a/docs/analyzing-the-yelp-academic-dataset/index.html b/docs/analyzing-the-yelp-academic-dataset/index.html
index 58adf37..c7cbb12 100644
--- a/docs/analyzing-the-yelp-academic-dataset/index.html
+++ b/docs/analyzing-the-yelp-academic-dataset/index.html
@@ -1083,7 +1083,7 @@ analysis extremely easy.</p>
 
 <h2 id="querying-data-with-drill">Querying Data with Drill</h2>
 
-<h3 id="1-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
+<h3 id="1.-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; !set maxwidth 10000
 
 0: jdbc:drill:zk=local&gt; select * from
@@ -1103,7 +1103,7 @@ analysis extremely easy.</p>
 
 <p>You can directly query self-describing files such as JSON, Parquet, and text. There is no need to create metadata definitions in the Hive metastore.</p>
 
-<h3 id="2-explore-the-business-data-set-further">2. Explore the business data set further</h3>
+<h3 id="2.-explore-the-business-data-set-further">2. Explore the business data set further</h3>
 
 <h4 id="total-reviews-in-the-data-set">Total reviews in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select sum(review_count) as totalreviews 
@@ -1154,7 +1154,7 @@ group by stars order by stars desc;
 | 1.0        | 4.0        |
 +------------+------------+
 </code></pre></div>
-<h4 id="top-businesses-with-high-review-counts-gt-1000">Top businesses with high review counts (&gt; 1000)</h4>
+<h4 id="top-businesses-with-high-review-counts-(&gt;-1000)">Top businesses with high review counts (&gt; 1000)</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select name, state, city, `review_count` from
 dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json`
 where review_count &gt; 1000 order by `review_count` desc limit 10;
@@ -1198,7 +1198,7 @@ b limit 10;
 </code></pre></div>
 <p>Note how Drill can traverse and refer through multiple levels of nesting.</p>
 
-<h3 id="3-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
+<h3 id="3.-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
 
 <p>Note that the attributes column in the Yelp business data set has a different
 element for every row, representing that businesses can have separate
@@ -1246,7 +1246,7 @@ on data.</p>
 | true  | store.json.all_text_mode updated.  |
 +-------+------------------------------------+
 </code></pre></div>
-<h3 id="4-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
+<h3 id="4.-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
 
 <h4 id="number-of-restaurants-in-the-data-set">Number of restaurants in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select count(*) as TotalRestaurants from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json` where true=repeated_contains(categories,&#39;Restaurants&#39;);
@@ -1318,9 +1318,9 @@ order by count(categories[0]) desc limit 10;
 | Hair Salons          | 901           |
 +----------------------+---------------+
 </code></pre></div>
-<h3 id="5-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
+<h3 id="5.-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses.">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
 
-<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset">Take a look at the contents of the Yelp reviews dataset.</h4>
+<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset.">Take a look at the contents of the Yelp reviews dataset.</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select * 
 from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_review.json` limit 1;
 +---------------------------------+------------------------+------------------------+-------+------------+----------------------------------------------------------------------+--------+------------------------+

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/apache-drill-1-1-0-release-notes/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-1-1-0-release-notes/index.html b/docs/apache-drill-1-1-0-release-notes/index.html
index 7392e22..8d11155 100644
--- a/docs/apache-drill-1-1-0-release-notes/index.html
+++ b/docs/apache-drill-1-1-0-release-notes/index.html
@@ -1050,7 +1050,7 @@
 
 <p>It has been about 6 weeks since the release of Drill 1.0.0. Today we&#39;re happy to announce the availability of Drill 1.1.0, providing 119 additional enhancements and bug fixes. </p>
 
-<h2 id="noteworthy-new-features-in-drill-1-1-0">Noteworthy New Features in Drill 1.1.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1.1.0">Noteworthy New Features in Drill 1.1.0</h2>
 
 <p>Drill now supports window functions, automatic partitioning, and Hive impersonation. </p>
 
@@ -1074,13 +1074,13 @@
 <li>AVG<br></li>
 </ul>
 
-<h3 id="automatic-partitioning-in-ctas-drill-3333"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
+<h3 id="automatic-partitioning-in-ctas-(drill-3333)"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
 
 <p>When a table is created with a partition by clause, the parquet writer will create separate files for the different partition values. The data will first be sorted by the partition keys, and the parquet writer will create a new file when it encounters a new value for the partition columns. </p>
 
 <p>When queries are issued against data that was created this way, partition pruning will work if the filter contains a partition column. Unlike directory-based partitioning, no view is required, nor is it necessary to reference the dir* column names. </p>
 
-<h3 id="hive-impersonation-support-drill-3203"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
+<h3 id="hive-impersonation-support-(drill-3203)"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
 
 <p>When impersonation is enabled, Drill now supports impersonating the user who issued the query when accessing Hive metadata/data (instead of accessing Hive as the user that started the drillbit). </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/apache-drill-1-2-0-release-notes/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-1-2-0-release-notes/index.html b/docs/apache-drill-1-2-0-release-notes/index.html
index b7718e9..ab1d503 100644
--- a/docs/apache-drill-1-2-0-release-notes/index.html
+++ b/docs/apache-drill-1-2-0-release-notes/index.html
@@ -1055,7 +1055,7 @@
 <li><a href="/docs/apache-drill-1-2-0-release-notes/#important-unresolved-issues">Important unresolved issues</a></li>
 </ul>
 
-<h2 id="noteworthy-new-features-in-drill-1-2-0">Noteworthy New Features in Drill 1.2.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1.2.0">Noteworthy New Features in Drill 1.2.0</h2>
 
 <p>This release of Drill introduces a number of enhancements, including the following ones:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/apache-drill-contribution-guidelines/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-contribution-guidelines/index.html b/docs/apache-drill-contribution-guidelines/index.html
index 4276419..3f192e1 100644
--- a/docs/apache-drill-contribution-guidelines/index.html
+++ b/docs/apache-drill-contribution-guidelines/index.html
@@ -1202,7 +1202,7 @@ it easy to quickly view the contents of the patch in a web browser.</p>
 <li>Once your patch is accepted, be sure to upload a final version which grants rights to the ASF.</li>
 </ul>
 
-<h2 id="where-is-a-good-place-to-start-contributing">Where is a good place to start contributing?</h2>
+<h2 id="where-is-a-good-place-to-start-contributing?">Where is a good place to start contributing?</h2>
 
 <p>After getting the source code, building and running a few simple queries, one
 of the simplest places to start is to implement a DrillFunc.<br>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/apache-drill-contribution-ideas/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-contribution-ideas/index.html b/docs/apache-drill-contribution-ideas/index.html
index b4390af..72bde1c 100644
--- a/docs/apache-drill-contribution-ideas/index.html
+++ b/docs/apache-drill-contribution-ideas/index.html
@@ -1104,7 +1104,7 @@ own use case). Then try to implement one.</p>
 <li>Approximate aggregate functions (such as what is available in BlinkDB)</li>
 </ul>
 
-<h2 id="support-for-new-file-format-readers-writers">Support for new file format readers/writers</h2>
+<h2 id="support-for-new-file-format-readers/writers">Support for new file format readers/writers</h2>
 
 <p>Currently Drill supports text, JSON and Parquet file formats natively when
 interacting with file system. More readers/writers can be introduced by

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/compiling-drill-from-source/index.html
----------------------------------------------------------------------
diff --git a/docs/compiling-drill-from-source/index.html b/docs/compiling-drill-from-source/index.html
index 68c07b5..22f47f8 100644
--- a/docs/compiling-drill-from-source/index.html
+++ b/docs/compiling-drill-from-source/index.html
@@ -1065,10 +1065,10 @@ Maven and JDK installed:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">java -version
 mvn -version
 </code></pre></div>
-<h2 id="1-clone-the-repository">1. Clone the Repository</h2>
+<h2 id="1.-clone-the-repository">1. Clone the Repository</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">git clone https://git-wip-us.apache.org/repos/asf/drill.git
 </code></pre></div>
-<h2 id="2-compile-the-code">2. Compile the Code</h2>
+<h2 id="2.-compile-the-code">2. Compile the Code</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">cd drill
 mvn clean install -DskipTests
 </code></pre></div>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/configuring-jreport-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-jreport-with-drill/index.html b/docs/configuring-jreport-with-drill/index.html
index 4645b62..24a6e96 100644
--- a/docs/configuring-jreport-with-drill/index.html
+++ b/docs/configuring-jreport-with-drill/index.html
@@ -1060,7 +1060,7 @@
 
 <hr>
 
-<h3 id="step-1-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
+<h3 id="step-1:-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
 
 <p>Drill provides standard JDBC connectivity to integrate with JReport. JReport 13.1 requires Drill 1.0 or later.
 For general instructions on installing the Drill JDBC driver, see <a href="/docs/using-the-jdbc-driver/">Using JDBC</a>.</p>
@@ -1080,7 +1080,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 
 <hr>
 
-<h3 id="step-2-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
+<h3 id="step-2:-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
 
 <ol>
 <li> Click Create <strong>New -&gt; Catalog…</strong></li>
@@ -1095,7 +1095,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 <li>Click <strong>Done</strong> when you have added all the tables you need. </li>
 </ol>
 
-<h3 id="step-3-use-jreport-designer">Step 3: Use JReport Designer</h3>
+<h3 id="step-3:-use-jreport-designer">Step 3: Use JReport Designer</h3>
 
 <ol>
 <li> In the Catalog Browser, right-click <strong>Queries</strong> and select <strong>Add Query…</strong></li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/configuring-odbc-on-linux/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-linux/index.html b/docs/configuring-odbc-on-linux/index.html
index 1ef9337..564a739 100644
--- a/docs/configuring-odbc-on-linux/index.html
+++ b/docs/configuring-odbc-on-linux/index.html
@@ -1080,7 +1080,7 @@ on Linux, copy the following configuration files in <code>/opt/mapr/drillobdc/Se
 
 <hr>
 
-<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <ol>
 <li>Set the ODBCINI environment variable to point to the <code>.odbc.ini</code> in your home directory. For example:<br>
@@ -1100,7 +1100,7 @@ Only include the path to the shared libraries corresponding to the driver matchi
 
 <hr>
 
-<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. To use Drill in embedded mode, set the following properties:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ConnectionType=Direct
@@ -1186,7 +1186,7 @@ behavior of DSNs using the MapR Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1208,7 +1208,7 @@ Driver=/opt/mapr/drillodbc/lib/64/libmaprdrillodbc64.so
 </code></pre></div>
 <hr>
 
-<h2 id="step-4-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-4:-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
 
 <p>Configure the MapR Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1231,7 +1231,7 @@ SwapFilePath=/tmp
 ODBCInstLib=libiodbcinst.so
 . . .
 </code></pre></div>
-<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the MapR Drill ODBC Driver in the <code>mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/configuring-odbc-on-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-mac-os-x/index.html b/docs/configuring-odbc-on-mac-os-x/index.html
index 8d4aff3..abf6ae8 100644
--- a/docs/configuring-odbc-on-mac-os-x/index.html
+++ b/docs/configuring-odbc-on-mac-os-x/index.html
@@ -1094,7 +1094,7 @@ on Mac OS X, copy the following configuration files in <code>/opt/mapr/drillodbc
 
 <hr>
 
-<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <p>Create or modify the <code>/etc/launchd.conf</code> file to set environment variables. Set the SIMBAINI variable to point to the <code>.mapr.drillodbc.ini</code> file, the ODBCSYSINI varialbe to the <code>.odbcinst.ini</code> file, the ODBCINI variable to the <code>.odbc.ini</code> file, and the DYLD_LIBRARY_PATH to the location of the dynamic linker (DYLD) libraries and to the MapR Drill ODBC Driver. If you installed the iODBC driver manager using the DMG, the DYLD libraries are installed in <code>/usr/local/iODBC/lib</code>. The launchd.conf file should look something like this:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">setenv SIMBAINI /Users/joeuser/.mapr.drillodbc.ini
@@ -1106,7 +1106,7 @@ setenv DYLD_LIBRARY_PATH /usr/local/iODBC/lib:/opt/mapr/drillodbc/lib/universal
 
 <hr>
 
-<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. </p>
 
@@ -1188,7 +1188,7 @@ behavior of DSNs using the MapR Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1204,7 +1204,7 @@ Driver=/opt/mapr/drillodbc/lib/universal/libmaprdrillodbc.dylib
 </code></pre></div>
 <hr>
 
-<h2 id="step-4-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-4:-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
 
 <p>Configure the MapR Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1223,7 +1223,7 @@ SwapFilePath=/tmp
 # iODBC
 ODBCInstLib=libiodbcinst.dylib
 </code></pre></div>
-<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the MapR Drill ODBC Driver in the <code>mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/configuring-odbc-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-windows/index.html b/docs/configuring-odbc-on-windows/index.html
index abb3ee8..585517f 100644
--- a/docs/configuring-odbc-on-windows/index.html
+++ b/docs/configuring-odbc-on-windows/index.html
@@ -1056,7 +1056,7 @@ sources:</p>
 <li>Create an ODBC Connection String</li>
 </ul>
 
-<h2 id="sample-odbc-configuration-dsn">Sample ODBC Configuration (DSN)</h2>
+<h2 id="sample-odbc-configuration-(dsn)">Sample ODBC Configuration (DSN)</h2>
 
 <p>You can see how to create a DSN to connect to Drill data sources by taking a look at the preconfigured sample that the installer sets up. If
 you want to create a DSN for a 32-bit application, you must use the 32-bit

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/configuring-resources-for-a-shared-drillbit/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-resources-for-a-shared-drillbit/index.html b/docs/configuring-resources-for-a-shared-drillbit/index.html
index ab33dc9..43b649a 100644
--- a/docs/configuring-resources-for-a-shared-drillbit/index.html
+++ b/docs/configuring-resources-for-a-shared-drillbit/index.html
@@ -1079,7 +1079,7 @@ The maximum degree of distribution of a query across cores and cluster nodes.</l
 Same as max per node but applies to the query as executed by the entire cluster.</li>
 </ul>
 
-<h3 id="planner-width-max_per_node">planner.width.max_per_node</h3>
+<h3 id="planner.width.max_per_node">planner.width.max_per_node</h3>
 
 <p>Configure the <code>planner.width.max_per_node</code> to achieve fine grained, absolute control over parallelization. In this context <em>width</em> refers to fanout or distribution potential: the ability to run a query in parallel across the cores on a node and the nodes on a cluster. A physical plan consists of intermediate operations, known as query &quot;fragments,&quot; that run concurrently, yielding opportunities for parallelism above and below each exchange operator in the plan. An exchange operator represents a breakpoint in the execution flow where processing can be distributed. For example, a single-process scan of a file may flow into an exchange operator, followed by a multi-process aggregation fragment.</p>
 
@@ -1089,7 +1089,7 @@ Same as max per node but applies to the query as executed by the entire cluster.
 
 <p>When you modify the default setting, you can supply any meaningful number. The system does not automatically scale down your setting.</p>
 
-<h3 id="planner-width-max_per_query">planner.width.max_per_query</h3>
+<h3 id="planner.width.max_per_query">planner.width.max_per_query</h3>
 
 <p>The max_per_query value also sets the maximum degree of parallelism for any given stage of a query, but the setting applies to the query as executed by the whole cluster (multiple nodes). In effect, the actual maximum width per query is the <em>minimum of two values</em>: min((number of nodes * width.max_per_node), width.max_per_query)</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/configuring-tibco-spotfire-server-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-tibco-spotfire-server-with-drill/index.html b/docs/configuring-tibco-spotfire-server-with-drill/index.html
index 2345ece..8e295e5 100644
--- a/docs/configuring-tibco-spotfire-server-with-drill/index.html
+++ b/docs/configuring-tibco-spotfire-server-with-drill/index.html
@@ -1061,7 +1061,7 @@
 
 <hr>
 
-<h3 id="step-1-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
+<h3 id="step-1:-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
 
 <p>Drill provides standard JDBC connectivity, making it easy to integrate data exploration capabilities on complex, schema-less data sets. Tibco Spotfire Server (TSS) requires Drill 1.0 or later, which incudes the JDBC driver. The JDBC driver is bundled with the Drill configuration files, and it is recommended that you use the JDBC driver that is shipped with the specific Drill version.</p>
 
@@ -1089,7 +1089,7 @@ For Windows systems, the hosts file is located here:
 
 <hr>
 
-<h3 id="step-2-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
+<h3 id="step-2:-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
 
 <p>The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.</p>
 
@@ -1144,7 +1144,7 @@ For Windows systems, the hosts file is located here:
 </code></pre></div>
 <hr>
 
-<h3 id="step-3-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
+<h3 id="step-3:-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
 
 <p>To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.</p>
 
@@ -1161,7 +1161,7 @@ For Windows systems, the hosts file is located here:
 
 <hr>
 
-<h3 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
 
 <p>After the Drill data source has been configured in the Information Designer, the information elements can be defined. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/configuring-user-impersonation-with-hive-authorization/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation-with-hive-authorization/index.html b/docs/configuring-user-impersonation-with-hive-authorization/index.html
index b9bc073..b18d92e 100644
--- a/docs/configuring-user-impersonation-with-hive-authorization/index.html
+++ b/docs/configuring-user-impersonation-with-hive-authorization/index.html
@@ -1078,7 +1078,7 @@
 <li>Hive remote metastore repository configured<br></li>
 </ul>
 
-<h2 id="step-1-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
+<h2 id="step-1:-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
 
 <p>Modify <code>&lt;DRILL_HOME&gt;/conf/drill-override.conf</code> on each Drill node to include the required properties, set the <a href="/docs/configuring-user-impersonation/#chained-impersonation">maximum number of chained user hops</a>, and restart the Drillbit process.</p>
 
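The required properties referenced above are the impersonation block in drill-override.conf; a minimal sketch, where the hop count of 3 is only an example:

    drill.exec.impersonation: {
      enabled: true,
      max_chained_user_hops: 3
    }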
@@ -1097,7 +1097,7 @@
 <code>&lt;DRILLINSTALL_HOME&gt;/bin/drillbit.sh restart</code>  </p></li>
 </ol>
 
-<h2 id="step-2-updating-hive-site-xml">Step 2:  Updating hive-site.xml</h2>
+<h2 id="step-2:-updating-hive-site.xml">Step 2:  Updating hive-site.xml</h2>
 
 <p>Update hive-site.xml with the parameters specific to the type of authorization that you are configuring and then restart Hive.  </p>
 
@@ -1129,7 +1129,7 @@
 <strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user submitting the query. Must be set to true for the storage based model.<br>
 <strong>Value:</strong> true</p>
 
-<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
+<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1205,7 +1205,7 @@
 <strong>Description:</strong> In unsecure mode, setting this property to true causes the metastore to execute DFS operations using the client&#39;s reported user and group permissions. Note: This property must be set on both the client and server sides. This is a best effort property. If the client is set to true and the server is set to false, the client setting is ignored.<br>
 <strong>Value:</strong> false  </p>
 
-<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
+<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1253,7 +1253,7 @@
      &lt;/property&gt;    
     &lt;/configuration&gt;
 </code></pre></div>
-<h2 id="step-3-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
+<h2 id="step-3:-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
 
 <p>Modify the Hive storage plugin configuration in the Drill Web Console to include specific authorization settings. The Drillbit that you use to access the Web Console must be running.  </p>
 
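As a rough sketch of the shape of that plugin configuration, assuming a remote metastore and the doAs-style settings described above (the host name and property values are placeholders):

    {
      "type": "hive",
      "enabled": true,
      "configProps": {
        "hive.metastore.uris": "thrift://<metastore-host>:9083",
        "hive.metastore.sasl.enabled": "false",
        "hive.server2.enable.doAs": "true"
      }
    }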

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/configuring-user-impersonation/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation/index.html b/docs/configuring-user-impersonation/index.html
index 56da31c..a57677b 100644
--- a/docs/configuring-user-impersonation/index.html
+++ b/docs/configuring-user-impersonation/index.html
@@ -1111,7 +1111,7 @@ hadoop fs –chown &lt;user&gt;:&lt;group&gt; &lt;file_name&gt;
 </code></pre></div>
 <p>Example: <code>hadoop fs –chmod 750 employees.drill.view</code></p>
 
-<h3 id="modifying-system-session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
+<h3 id="modifying-system|session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
 
 <p>Use the <code>ALTER SESSION|SYSTEM</code> command with the <code>new_view_default_permissions</code> parameter and the appropriate octal code to set view permissions at the system or session level prior to creating a view.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `new_view_default_permissions` = &#39;&lt;octal_code&gt;&#39;;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/custom-function-interfaces/index.html
----------------------------------------------------------------------
diff --git a/docs/custom-function-interfaces/index.html b/docs/custom-function-interfaces/index.html
index cb4687f..97801ea 100644
--- a/docs/custom-function-interfaces/index.html
+++ b/docs/custom-function-interfaces/index.html
@@ -1061,13 +1061,13 @@ public static class Add1 implements DrillSimpleFunc{
 
 <p>The simple function interface includes the <code>@Param</code> and <code>@Output</code> holders where you indicate the data types that your function can process.</p>
 
-<h3 id="param-holder">@Param Holder</h3>
+<h3 id="@param-holder">@Param Holder</h3>
 
 <p>This holder indicates the data type that the function processes as input and determines the number of parameters that your function accepts within the query. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Param BigIntHolder input1;
 @Param BigIntHolder input2;
 </code></pre></div>
-<h3 id="output-holder">@Output Holder</h3>
+<h3 id="@output-holder">@Output Holder</h3>
 
 <p>This holder indicates the data type that the processing returns. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Output BigIntHolder out;
@@ -1123,7 +1123,7 @@ public static class MySecondMin implements DrillAggFunc {
 </code></pre></div>
 <p>The aggregate function interface includes holders where you indicate the data types that your function can process. This interface includes the @Param and @Output holders previously described and also includes the @Workspace holder. </p>
 
-<h3 id="workspace-holder">@Workspace holder</h3>
+<h3 id="@workspace-holder">@Workspace holder</h3>
 
 <p>This holder indicates the data type used to store intermediate data during processing. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Workspace BigIntHolder min;

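In an aggregate function the workspace value carries running state between rows. A rough sketch of MIN-style method bodies, assuming an @Param holder named in and an @Output holder named out as shown earlier:

    @Workspace BigIntHolder min;            // running minimum, kept between add() calls

    public void setup() {
      min = new BigIntHolder();             // allocate the workspace holder once
      min.value = Long.MAX_VALUE;
    }
    public void add()    { min.value = Math.min(min.value, in.value); }
    public void output() { out.value = min.value; }
    public void reset()  { min.value = Long.MAX_VALUE; }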
http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/data-type-conversion/index.html
----------------------------------------------------------------------
diff --git a/docs/data-type-conversion/index.html b/docs/data-type-conversion/index.html
index e1ae8f0..4fbc1af 100644
--- a/docs/data-type-conversion/index.html
+++ b/docs/data-type-conversion/index.html
@@ -1645,7 +1645,7 @@ use in your Drill queries as described in this section:</p>
 </tr>
 </tbody></table>
 
-<h3 id="format-specifiers-for-date-time-conversions">Format Specifiers for Date/Time Conversions</h3>
+<h3 id="format-specifiers-for-date/time-conversions">Format Specifiers for Date/Time Conversions</h3>
 
 <p>Use the following Joda format specifiers for date/time conversions:</p>
 
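A short example of a Joda pattern in use, assuming the TO_DATE form of the conversion functions and an arbitrary date literal:

    SELECT TO_DATE('2015-12-30', 'yyyy-MM-dd') FROM (VALUES(1));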

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/date-time-and-timestamp/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-and-timestamp/index.html b/docs/date-time-and-timestamp/index.html
index 6e15b4a..4f8b550 100644
--- a/docs/date-time-and-timestamp/index.html
+++ b/docs/date-time-and-timestamp/index.html
@@ -1155,7 +1155,7 @@ SELECT INTERVAL &#39;13&#39; month FROM (VALUES(1));
 +------------+
 1 row selected (0.076 seconds)
 </code></pre></div>
-<h2 id="date-time-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
+<h2 id="date,-time,-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
 
 <p>Drill supports DATE, TIME, and TIMESTAMP literals. Drill stores values in Coordinated Universal Time (UTC) and supports time functions in the range 1971 to 2037.</p>
 
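For illustration, the three literal forms can be selected directly; the values below are arbitrary:

    SELECT DATE '2015-12-30',
           TIME '22:55:55.123',
           TIMESTAMP '2015-12-30 22:55:55.123'
    FROM (VALUES(1));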

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/date-time-functions-and-arithmetic/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-functions-and-arithmetic/index.html b/docs/date-time-functions-and-arithmetic/index.html
index eb5b6bb..516da38 100644
--- a/docs/date-time-functions-and-arithmetic/index.html
+++ b/docs/date-time-functions-and-arithmetic/index.html
@@ -1552,7 +1552,7 @@ SELECT NOW() FROM (VALUES(1));
 +------------+
 1 row selected (0.062 seconds)
 </code></pre></div>
-<h2 id="date-time-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
+<h2 id="date,-time,-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
 
 <p>Is the day returned from the NOW function the same as the day returned from the CURRENT_DATE function?</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT EXTRACT(day FROM NOW()) = EXTRACT(day FROM CURRENT_DATE) FROM (VALUES(1));

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/drill-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-introduction/index.html b/docs/drill-introduction/index.html
index 4b89c72..9ca9f66 100644
--- a/docs/drill-introduction/index.html
+++ b/docs/drill-introduction/index.html
@@ -1053,7 +1053,7 @@ applications, while still providing the familiarity and ecosystem of ANSI SQL,
 the industry-standard query language. Drill provides plug-and-play integration
 with existing Apache Hive and Apache HBase deployments. </p>
 
-<h2 id="what-39-s-new-in-apache-drill-1-3-and-1-4">What&#39;s New in Apache Drill 1.3 and 1.4</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.3-and-1.4">What&#39;s New in Apache Drill 1.3 and 1.4</h2>
 
 <p>These releases fix issues and add a number of enhancements, including the following ones:</p>
 
@@ -1066,7 +1066,7 @@ Support for columns that evolve from one data type to another over time. </li>
 <li>Enhancements related to querying Hive tables, MongoDB collections, and Avro files</li>
 </ul>
 
-<h2 id="what-39-s-new-in-apache-drill-1-2">What&#39;s New in Apache Drill 1.2</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.2">What&#39;s New in Apache Drill 1.2</h2>
 
 <p>This release of Drill fixes <a href="/docs/apache-drill-1-2-0-release-notes/">many issues</a> and introduces a number of enhancements, including the following ones:</p>
 
@@ -1099,7 +1099,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Improved LIMIT processing</li>
 </ul>
 
-<h2 id="what-39-s-new-in-apache-drill-1-1">What&#39;s New in Apache Drill 1.1</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.1">What&#39;s New in Apache Drill 1.1</h2>
 
 <p>Many enhancements in Apache Drill 1.1 include the following key features:</p>
 
@@ -1110,7 +1110,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Support for UNION and UNION ALL and better optimized plans that include UNION.</li>
 </ul>
 
-<h2 id="what-39-s-new-in-apache-drill-1-0">What&#39;s New in Apache Drill 1.0</h2>
+<h2 id="what&#39;s-new-in-apache-drill-1.0">What&#39;s New in Apache Drill 1.0</h2>
 
 <p>Apache Drill 1.0 offers the following new features:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/drill-patch-review-tool/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-patch-review-tool/index.html b/docs/drill-patch-review-tool/index.html
index 7a13090..96e0c4a 100644
--- a/docs/drill-patch-review-tool/index.html
+++ b/docs/drill-patch-review-tool/index.html
@@ -1079,7 +1079,7 @@
 
 <h3 id="drill-jira-and-reviewboard-script">Drill JIRA and Reviewboard script</h3>
 
-<h4 id="1-setup">1. Setup</h4>
+<h4 id="1.-setup">1. Setup</h4>
 
 <ol>
 <li>Follow instructions <a href="/docs/drill-patch-review-tool/#jira-command-line-tool">here</a> to setup the jira-python package</li>
@@ -1090,7 +1090,7 @@ On Mac -&gt; sudo easy_install argparse
 </code></pre></div></li>
 </ol>
 
-<h4 id="2-usage">2. Usage</h4>
+<h4 id="2.-usage">2. Usage</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">nnarkhed-mn: nnarkhed$ python drill-patch-review.py --help
 usage: drill-patch-review.py [-h] -b BRANCH -j JIRA [-s SUMMARY]
                              [-d DESCRIPTION] [-r REVIEWBOARD] [-t TESTING]
@@ -1117,7 +1117,7 @@ optional arguments:
   -rbu, --reviewboard-user Reviewboard user name
   -rbp, --reviewboard-password Reviewboard password
 </code></pre></div>
-<h4 id="3-upload-patch">3. Upload patch</h4>
+<h4 id="3.-upload-patch">3. Upload patch</h4>
 
 <ol>
 <li>Specify the branch against which the patch should be created (-b)</li>
@@ -1128,7 +1128,7 @@ optional arguments:
 <p>Example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">python drill-patch-review.py -b origin/master -j DRILL-241 -rbu tnachen -rbp password
 </code></pre></div>
-<h4 id="4-update-patch">4. Update patch</h4>
+<h4 id="4.-update-patch">4. Update patch</h4>
 
 <ol>
 <li>Specify the branch against which the patch should be created (-b)</li>
@@ -1143,12 +1143,12 @@ optional arguments:
 </code></pre></div>
 <h3 id="jira-command-line-tool">JIRA command line tool</h3>
 
-<h4 id="1-download-the-jira-command-line-package">1. Download the JIRA command line package</h4>
+<h4 id="1.-download-the-jira-command-line-package">1. Download the JIRA command line package</h4>
 
 <p>Install the jira-python package.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">sudo easy_install jira-python
 </code></pre></div>
-<h4 id="2-configure-jira-username-and-password">2. Configure JIRA username and password</h4>
+<h4 id="2.-configure-jira-username-and-password">2. Configure JIRA username and password</h4>
 
 <p>Include a jira.ini file in your $HOME directory that contains your Apache JIRA
 username and password.</p>
@@ -1161,7 +1161,7 @@ password=***********
 <p>This is a quick tutorial on using <a href="https://reviews.apache.org">Review Board</a>
 with Drill.</p>
 
-<h4 id="1-install-the-post-review-tool">1. Install the post-review tool</h4>
+<h4 id="1.-install-the-post-review-tool">1. Install the post-review tool</h4>
 
 <p>If you are on RHEL, Fedora or CentOS, follow these steps:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">sudo yum install python-setuptools
@@ -1174,7 +1174,7 @@ sudo easy_install -U RBTools
 <p>For other platforms, follow the <a href="http://www.reviewboard.org/docs/manual/dev/users/tools/post-review/">instructions</a> to
 setup the post-review tool.</p>
 
-<h4 id="2-configure-stuff">2. Configure Stuff</h4>
+<h4 id="2.-configure-stuff">2. Configure Stuff</h4>
 
 <p>Then you need to configure a few things to make it work.</p>
 
@@ -1192,7 +1192,7 @@ TARGET_GROUPS = &#39;drill-git&#39;
 
 <h3 id="faq">FAQ</h3>
 
-<h4 id="when-i-run-the-script-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
+<h4 id="when-i-run-the-script,-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">nnarkhed$python drill-patch-review.py -b trunk -j DRILL-241
 There don&#39;t seem to be any diffs
 </code></pre></div>
@@ -1203,7 +1203,7 @@ There don&#39;t seem to be any diffs
 <li>The -b branch is not pointing to the remote branch. In the example above, &quot;trunk&quot; is specified as the branch, which is the local branch. The correct value for the -b (--branch) option is the remote branch. &quot;git branch -r&quot; gives the list of the remote branch names.</li>
 </ul>
 
-<h4 id="when-i-run-the-script-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
+<h4 id="when-i-run-the-script,-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
 
 <p>Error uploading diff</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/drill-plan-syntax/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-plan-syntax/index.html b/docs/drill-plan-syntax/index.html
index 621c968..dd2145f 100644
--- a/docs/drill-plan-syntax/index.html
+++ b/docs/drill-plan-syntax/index.html
@@ -1048,7 +1048,7 @@
 
     <div class="int_text" align="left">
       
-        <h3 id="whats-the-plan">Whats the plan?</h3>
+        <h3 id="whats-the-plan?">Whats the plan?</h3>
 
 <p>This section is about the end-to-end plan flow for Drill. The incoming query
 to Drill can be a SQL 2003 query/DrQL or MongoQL. The query is converted to a

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/drop-table/index.html
----------------------------------------------------------------------
diff --git a/docs/drop-table/index.html b/docs/drop-table/index.html
index 17266b1..5b3aed8 100644
--- a/docs/drop-table/index.html
+++ b/docs/drop-table/index.html
@@ -1103,7 +1103,7 @@
 
 <p>The following examples show results for several DROP TABLE scenarios.  </p>
 
-<h3 id="example-1-identifying-a-schema">Example 1:  Identifying a schema</h3>
+<h3 id="example-1:-identifying-a-schema">Example 1:  Identifying a schema</h3>
 
 <p>This example shows you how to identify a schema with the USE and DROP TABLE commands and successfully drop a table named <code>donuts_json</code> in the <code>&quot;donuts&quot;</code> workspace configured within the DFS storage plugin configuration.  </p>
 
@@ -1157,7 +1157,7 @@
    Error: PARSE ERROR: Root schema is immutable. Creating or dropping tables/views is not allowed in root schema.Select a schema using &#39;USE schema&#39; command.
    [Error Id: 8c42cb6a-27eb-48fd-b42a-671a6fb58c14 on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-2-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
+<h3 id="example-2:-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
 
 <p>In the following example, the <code>donuts_json</code> table is removed from the <code>/tmp</code> workspace using the DROP TABLE command. This example assumes that the steps in the <a href="/docs/create-table-as-ctas/#complete-ctas-example">Complete CTAS Example</a> were already completed. </p>
 
@@ -1193,7 +1193,7 @@
    +-------+------------------------------+
    1 row selected (0.107 seconds)  
 </code></pre></div>
-<h3 id="example-3-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
+<h3 id="example-3:-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
 
 <p>When you create a table that writes files to a directory, you can issue the <code>DROP TABLE</code> command against the table to remove the directory. All files and subdirectories are deleted. For example, the following CTAS command writes Parquet data from the <code>nation.parquet</code> file, installed with Drill, to the <code>/tmp/name_key</code> directory.  </p>
 
@@ -1250,7 +1250,7 @@
    +-------+---------------------------+
    1 row selected (0.086 seconds)
 </code></pre></div>
-<h3 id="example-4-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
+<h3 id="example-4:-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
 
 <p>The following example shows the result of dropping a table that does not exist because it was either already dropped or never existed. </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use use dfs.tmp;
@@ -1266,7 +1266,7 @@
    Error: VALIDATION ERROR: Table [name_key] not found
    [Error Id: fc6bfe17-d009-421c-8063-d759d7ea2f4e on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-5-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
+<h3 id="example-5:-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
 
 <p>The following example shows the result of dropping a table without the appropriate permissions in the file system.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table name_key;
@@ -1274,7 +1274,7 @@
    Error: PERMISSION ERROR: Unauthorized to drop table
    [Error Id: 36f6b51a-786d-4950-a4a7-44250f153c55 on 10.10.30.167:31010] (state=,code=0)  
 </code></pre></div>
-<h3 id="example-6-dropping-and-querying-a-table-concurrently">Example 6: Dropping and querying a table concurrently</h3>
+<h3 id="example-6:-dropping-and-querying-a-table-concurrently">Example 6: Dropping and querying a table concurrently</h3>
 
 <p>The result of this scenario depends on the time elapsed between one user dropping a table and another user issuing a query against it. Results can vary: in some instances the drop succeeds and the query fails completely; in others the query completes partially and then the table is dropped, returning an exception in the middle of the query results.</p>
 
@@ -1296,7 +1296,7 @@
    Fragment 1:0
    [Error Id: 6e3c6a8d-8cfd-4033-90c4-61230af80573 on 10.10.30.167:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-7-dropping-a-table-with-different-file-formats">Example 7: Dropping a table with different file formats</h3>
+<h3 id="example-7:-dropping-a-table-with-different-file-formats">Example 7: Dropping a table with different file formats</h3>
 
 <p>The following example shows the result of dropping a table when multiple file formats exist in the directory. In this scenario, the <code>sales_dir</code> table resides in the <code>dfs.sales</code> workspace and contains Parquet, CSV, and JSON files.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/explain/index.html
----------------------------------------------------------------------
diff --git a/docs/explain/index.html b/docs/explain/index.html
index 17fbdb4..a3f83b2 100644
--- a/docs/explain/index.html
+++ b/docs/explain/index.html
@@ -1084,7 +1084,7 @@ you are selecting from, you are likely to see plan changes.</p>
 <p>This option returns costing information. You can use this option for both
 physical and logical plans.</p>
 
-<h4 id="with-implementation-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
+<h4 id="with-implementation-|-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
 
 <p>These options return the physical and logical plan information, respectively.
 The default is physical (WITH IMPLEMENTATION).</p>

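A brief illustration of the two forms, using an arbitrary table name t; the first statement returns the physical plan (the default), the second the logical plan:

    EXPLAIN PLAN FOR SELECT * FROM t;
    EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR SELECT * FROM t;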
http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/getting-to-know-the-drill-sandbox/index.html
----------------------------------------------------------------------
diff --git a/docs/getting-to-know-the-drill-sandbox/index.html b/docs/getting-to-know-the-drill-sandbox/index.html
index feb1e2a..eff836d 100644
--- a/docs/getting-to-know-the-drill-sandbox/index.html
+++ b/docs/getting-to-know-the-drill-sandbox/index.html
@@ -1157,7 +1157,7 @@ URI. Metadata for Hive tables is automatically available for users to query.</p>
 </code></pre></div>
 <p>Do not use this storage plugin configuration outside the sandbox. Use the configuration for either the <a href="/docs/hive-storage-plugin/">remote or embedded metastore configuration</a>.</p>
 
-<h2 id="what-39-s-next">What&#39;s Next</h2>
+<h2 id="what&#39;s-next">What&#39;s Next</h2>
 
 <p>Start running queries by going to <a href="/docs/lesson-1-learn-about-the-data-set">Lesson 1: Learn About the Data
 Set</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/how-to-partition-data/index.html
----------------------------------------------------------------------
diff --git a/docs/how-to-partition-data/index.html b/docs/how-to-partition-data/index.html
index e036691..c6ee926 100644
--- a/docs/how-to-partition-data/index.html
+++ b/docs/how-to-partition-data/index.html
@@ -1056,7 +1056,7 @@
 
 <p>Unlike Drill 1.0 partitioning, no view query is subsequently required, nor is it necessary to use the <a href="/docs/querying-directories">dir* variables</a> after you use the PARTITION BY clause in a CTAS statement. </p>
 
-<h2 id="drill-1-0-partitioning">Drill 1.0 Partitioning</h2>
+<h2 id="drill-1.0-partitioning">Drill 1.0 Partitioning</h2>
 
 <p>Drill 1.0 does not support the PARTITION BY clause of the CTAS command supported by later versions. Partitioning Drill 1.0-generated data involves performing the following steps.   </p>
 
@@ -1068,7 +1068,7 @@
 
 <p>After partitioning the data, you need to create a view of the partitioned data to query the data. You can use the <a href="/docs/querying-directories">dir* variables</a> in queries to refer to subdirectories in your workspace path.</p>
 
-<h3 id="drill-1-0-partitioning-example">Drill 1.0 Partitioning Example</h3>
+<h3 id="drill-1.0-partitioning-example">Drill 1.0 Partitioning Example</h3>
 
 <p>Suppose you have text files containing several years of log data. To partition the data by year and quarter, create the following hierarchy of directories:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   …/logs/1994/Q1  

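As a hedged sketch of the view step that follows this directory layout, assuming a dfs storage plugin that can reach the logs directory and a writable dfs.tmp workspace:

    CREATE VIEW dfs.tmp.logs_view AS
      SELECT dir0 AS yr, dir1 AS qtr, columns FROM dfs.`/logs`;

    SELECT yr, qtr, columns[0] FROM dfs.tmp.logs_view WHERE yr = '1994' AND qtr = 'Q1';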
http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/installing-the-apache-drill-sandbox/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-apache-drill-sandbox/index.html b/docs/installing-the-apache-drill-sandbox/index.html
index 582417b..24aaae5 100644
--- a/docs/installing-the-apache-drill-sandbox/index.html
+++ b/docs/installing-the-apache-drill-sandbox/index.html
@@ -1083,7 +1083,7 @@ instructions:</p>
 <li>To install VirtualBox, see the <a href="http://dlc.sun.com.edgesuite.net/virtualbox/4.3.4/UserManual.pdf">Oracle VM VirtualBox User Manual</a>. By downloading VirtualBox, you agree to the terms and conditions of the respective license.</li>
 </ul>
 
-<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player-vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
+<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player/vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
 
 <p>Complete the following steps to install the MapR Sandbox with Apache Drill on
 VMware Player or VMware Fusion:</p>
@@ -1125,7 +1125,7 @@ The Import Virtual Machine dialog appears.</p></li>
 <li>Alternatively, access the command line on the VM: Press Alt+F2 on Windows or Option+F5 on Mac.<br></li>
 </ul>
 
-<h3 id="what-39-s-next">What&#39;s Next</h3>
+<h3 id="what&#39;s-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill
@@ -1175,7 +1175,7 @@ VirtualBox:</p>
 </ul></li>
 </ol>
 
-<h3 id="what-39-s-next">What&#39;s Next</h3>
+<h3 id="what&#39;s-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill Sandbox</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/installing-the-driver-on-linux/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-linux/index.html b/docs/installing-the-driver-on-linux/index.html
index 9c1f5f6..5e978bb 100644
--- a/docs/installing-the-driver-on-linux/index.html
+++ b/docs/installing-the-driver-on-linux/index.html
@@ -1092,7 +1092,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <p>To install the driver, you need Administrator privileges on the computer.</p>
 
-<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Download either the 32- or 64-bit driver:</p>
 
@@ -1101,7 +1101,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 <li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.0.1000/MapRDrillODBC-1.2.0.x86_64.rpm">MapR Drill ODBC Driver (64-bit)</a></li>
 </ul>
 
-<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
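As a rough sketch of the usual RPM workflow on a Linux node, using the 64-bit package name linked above:

    sudo rpm -ivh MapRDrillODBC-1.2.0.x86_64.rpm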
@@ -1156,7 +1156,7 @@ locations and descriptions:</p>
 </tr>
 </tbody></table>
 
-<h2 id="step-3-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
+<h2 id="step-3:-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
 
 <p>To check the version of the driver you installed, use the following case-sensitive command on the terminal command line:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/90078fe1/docs/installing-the-driver-on-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-mac-os-x/index.html b/docs/installing-the-driver-on-mac-os-x/index.html
index 852a6d1..41d4b09 100644
--- a/docs/installing-the-driver-on-mac-os-x/index.html
+++ b/docs/installing-the-driver-on-mac-os-x/index.html
@@ -1077,7 +1077,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Click the following link to download the driver:  </p>
 
@@ -1085,7 +1085,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1107,7 +1107,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 <li><code>/opt/mapr/drillodbc/lib/universal</code> – Binaries directory</li>
 </ul>
 
-<h2 id="step-3-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
+<h2 id="step-3:-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
 
 <p>To check the version of the driver you installed, use the following command on the terminal command line:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">$ pkgutil --info mapr.drillodbc