Posted to commits@drill.apache.org by br...@apache.org on 2018/02/09 00:23:14 UTC

[2/2] drill-site git commit: edit files for JJ transform in DITA

edit files for JJ transform in DITA


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/edc3f206
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/edc3f206
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/edc3f206

Branch: refs/heads/asf-site
Commit: edc3f206ca677087e917b32e102137330d080d9b
Parents: 36b5428
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu Feb 8 16:22:56 2018 -0800
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu Feb 8 16:22:56 2018 -0800

----------------------------------------------------------------------
 .../index.html                                  |  10 +-
 .../running-sql-queries-on-amazon-s3/index.html | 240 +++++++++++++++++++
 blog/index.html                                 |  10 +-
 .../index.html                                  |   4 +-
 docs/configuring-jreport-with-drill/index.html  |  12 +-
 .../index.html                                  |  20 +-
 docs/configuring-user-security/index.html       |   7 +-
 docs/drill-plan-syntax/index.html               |   4 +-
 docs/explain/index.html                         |  14 +-
 docs/lexical-structure/index.html               |   4 +-
 .../index.html                                  |   9 +-
 docs/querying-sequence-files/index.html         |  21 +-
 docs/rest-api-introduction/index.html           |   4 +-
 docs/rpc-overview/index.html                    |   4 +-
 docs/sequence-files/index.html                  |   6 +-
 docs/show-tables/index.html                     |   4 +-
 docs/supported-sql-commands/index.html          |   2 +-
 docs/text-files-csv-tsv-psv/index.html          |  30 +--
 .../index.html                                  |  22 +-
 .../index.html                                  |  20 +-
 .../index.html                                  |  22 +-
 .../index.html                                  |  16 +-
 .../index.html                                  |  16 +-
 docs/using-qlik-sense-with-drill/index.html     |  28 +--
 .../index.html                                  |   8 +-
 feed.xml                                        | 156 ++++++------
 index.html                                      |   2 +-
 27 files changed, 442 insertions(+), 253 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
index 2b6fe1d..41d8b17 100644
--- a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
+++ b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
@@ -152,23 +152,23 @@
 
 <p>Apache Drill committers Tomer Shiran, Jacques Nadeau, and Ted Dunning, as well as Tableau Product Manager Jeff Feng and Data Scientist Dr. Kirk Borne will be on hand to answer your questions.</p>
 
-<h4 id="tomer-shiran,-apache-drill-founder-(@tshiran)">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
+<h2 id="tomer-shiran,-apache-drill-founder-(@tshiran)">Tomer Shiran, Apache Drill Founder (@tshiran)</h2>
 
 <p>Tomer Shiran is the founder of Apache Drill, and a PMC member and committer on the project. He is VP Product Management at MapR, responsible for product strategy, roadmap and new feature development. Prior to MapR, Tomer held numerous product management and engineering roles at Microsoft, most recently as the product manager for Microsoft Internet Security &amp; Acceleration Server (now Microsoft Forefront). He is the founder of two websites that have served tens of millions of users, and received coverage in prestigious publications such as The New York Times, USA Today and The Times of London. Tomer is also the author of a 900-page programming book. He holds an MS in Computer Engineering from Carnegie Mellon University and a BS in Computer Science from Technion - Israel Institute of Technology.</p>
 
-<h4 id="jeff-feng,-product-manager-tableau-software-(@jtfeng)">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
+<h2 id="jeff-feng,-product-manager-tableau-software-(@jtfeng)">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h2>
 
 <p>Jeff Feng is a Product Manager at Tableau and leads their Big Data product roadmap &amp; strategic vision.  In his role, he focuses on joint technology integration and partnership efforts with a number of Hadoop, NoSQL and web application partners in helping users see and understand their data.</p>
 
-<h4 id="ted-dunning,-apache-drill-comitter-(@ted_dunning)">Ted Dunning, Apache Drill Comitter (@Ted_Dunning)</h4>
+<h2 id="ted-dunning,-apache-drill-comitter-(@ted_dunning)">Ted Dunning, Apache Drill Comitter (@Ted_Dunning)</h2>
 
 <p>Ted Dunning is Chief Applications Architect at MapR Technologies and committer and PMC member of the Apache Mahout, Apache ZooKeeper, and Apache Drill projects and mentor for Apache Storm. He contributed to Mahout clustering, classification and matrix decomposition algorithms  and helped expand the new version of Mahout Math library. Ted was the chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems, he built fraud detection systems for ID Analytics (LifeLock) and he has issued 24 patents to date. Ted has a PhD in computing science from University of Sheffield. When he’s not doing data science, he plays guitar and mandolin.</p>
 
-<h4 id="jacques-nadeau,-vice-president,-apache-drill-(@intjesus)">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
+<h2 id="jacques-nadeau,-vice-president,-apache-drill-(@intjesus)">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h2>
 
 <p>Jacques Nadeau leads Apache Drill development efforts at MapR Technologies. He is an industry veteran with over 15 years of big data and analytics experience. Most recently, he was cofounder and CTO of search engine startup YapMap. Before that, he was director of new product engineering with Quigo (contextual advertising, acquired by AOL in 2007). He also built the Avenue A | Razorfish analytics data warehousing system and associated services practice (acquired by Microsoft).</p>
 
-<h4 id="dr.-kirk-borne,-george-mason-university-(@kirkdborne)">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
+<h2 id="dr.-kirk-borne,-george-mason-university-(@kirkdborne)">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h2>
 
 <p>Dr. Kirk Borne is a Transdisciplinary Data Scientist and an Astrophysicist. He is Professor of Astrophysics and Computational Science in the George Mason University School of Physics, Astronomy, and Computational Sciences. He has been at Mason since 2003, where he teaches and advises students in the graduate and undergraduate Computational Science, Informatics, and Data Science programs. Previously, he spent nearly 20 years in positions supporting NASA projects, including an assignment as NASA&#39;s Data Archive Project Scientist for the Hubble Space Telescope, and as Project Manager in NASA&#39;s Space Science Data Operations Office. He has extensive experience in big data and data science, including expertise in scientific data mining and data systems. He has published over 200 articles (research papers, conference papers, and book chapters), and given over 200 invited talks at conferences and universities worldwide.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/blog/2018/02/09/running-sql-queries-on-amazon-s3/index.html
----------------------------------------------------------------------
diff --git a/blog/2018/02/09/running-sql-queries-on-amazon-s3/index.html b/blog/2018/02/09/running-sql-queries-on-amazon-s3/index.html
new file mode 100644
index 0000000..660262f
--- /dev/null
+++ b/blog/2018/02/09/running-sql-queries-on-amazon-s3/index.html
@@ -0,0 +1,240 @@
+<!DOCTYPE html>
+<html>
+
+<head>
+
+<meta charset="UTF-8">
+<meta name=viewport content="width=device-width, initial-scale=1">
+
+
+<title>Running SQL Queries on Amazon S3 - Apache Drill</title>
+
+<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css" rel="stylesheet" type="text/css"/>
+<link href='//fonts.googleapis.com/css?family=PT+Sans' rel='stylesheet' type='text/css'/>
+<link href="/css/site.css" rel="stylesheet" type="text/css"/>
+
+<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon"/>
+<link rel="icon" href="/favicon.ico" type="image/x-icon"/>
+
+<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js" language="javascript" type="text/javascript"></script>
+<script src="//cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.easing.min.js" language="javascript" type="text/javascript"></script>
+<script language="javascript" type="text/javascript" src="/js/modernizr.custom.js"></script>
+<script language="javascript" type="text/javascript" src="/js/script.js"></script>
+<script language="javascript" type="text/javascript" src="/js/drill.js"></script>
+
+</head>
+
+
+<body onResize="resized();">
+  <div class="page-wrap">
+    <div class="bui"></div>
+
+<div id="menu" class="mw">
+<ul>
+  <li class='toc-categories'>
+  <a class="expand-toc-icon" href="javascript:void(0);"><i class="fa fa-bars"></i></a>
+  </li>
+  <li class="logo"><a href="/"></a></li>
+  <li class='expand-menu'>
+  <a href="javascript:void(0);"><span class='menu-text'>Menu</span><span class='expand-icon'><i class="fa fa-bars"></i></span></a>
+  </li>
+  <li class='clear-float'></li>
+  <li class="documentation-menu">
+    <a href="/docs/">Documentation</a>
+    <ul>
+      
+        <li><a href="/docs/getting-started/">Getting Started</a></li>
+      
+        <li><a href="/docs/architecture/">Architecture</a></li>
+      
+        <li><a href="/docs/tutorials/">Tutorials</a></li>
+      
+        <li><a href="/docs/install-drill/">Install Drill</a></li>
+      
+        <li><a href="/docs/configure-drill/">Configure Drill</a></li>
+      
+        <li><a href="/docs/connect-a-data-source/">Connect a Data Source</a></li>
+      
+        <li><a href="/docs/odbc-jdbc-interfaces/">ODBC/JDBC Interfaces</a></li>
+      
+        <li><a href="/docs/query-data/">Query Data</a></li>
+      
+        <li><a href="/docs/performance-tuning/">Performance Tuning</a></li>
+      
+        <li><a href="/docs/log-and-debug/">Log and Debug</a></li>
+      
+        <li><a href="/docs/sql-reference/">SQL Reference</a></li>
+      
+        <li><a href="/docs/data-sources-and-file-formats/">Data Sources and File Formats</a></li>
+      
+        <li><a href="/docs/develop-custom-functions/">Develop Custom Functions</a></li>
+      
+        <li><a href="/docs/troubleshooting/">Troubleshooting</a></li>
+      
+        <li><a href="/docs/developer-information/">Developer Information</a></li>
+      
+        <li><a href="/docs/release-notes/">Release Notes</a></li>
+      
+        <li><a href="/docs/sample-datasets/">Sample Datasets</a></li>
+      
+        <li><a href="/docs/project-bylaws/">Project Bylaws</a></li>
+      
+    </ul>
+  </li>
+  <li class='nav'>
+    <a href="/community-resources/">Community</a>
+    <ul>
+      <li><a href="/team/">Team</a></li>
+      <li><a href="/mailinglists/">Mailing Lists</a></li>
+      <li><a href="/community-resources/">Community Resources</a></li>
+    </ul>
+  </li>
+  <li class='nav'><a href="/faq/">FAQ</a></li>
+  <li class='nav'><a href="/blog/">Blog</a></li>
+  <li id="twitter-menu-item"><a href="https://twitter.com/apachedrill" title="apachedrill on twitter" target="_blank"><img src="/images/twitter_32_26_white.png" alt="twitter logo" align="center"></a> </li>
+  <li class='search-bar'>
+    <form id="drill-search-form">
+      <input type="text" placeholder="Search Apache Drill" id="drill-search-term" />
+      <button type="submit">
+        <i class="fa fa-search"></i>
+      </button>
+    </form>
+  </li>
+  <li class="d">
+    <a href="/download/">
+      <i class="fa fa-cloud-download"></i> Download
+    </a>
+  </li>
+</ul>
+</div>
+
+    <link href="/css/content.css" rel="stylesheet" type="text/css">
+
+<div class="post int_text">
+  <header class="post-header">
+    <div class="int_title">
+      <h1 class="post-title">Running SQL Queries on Amazon S3</h1>
+    </div>
+    <p class="post-meta">
+    
+      
+      
+      <strong>Author:</strong> Nick Amato (Director, Technical Marketing, MapR Technologies)<br />
+    
+<strong>Date:</strong> Feb 9, 2018
+</p>
+  </header>
+  <div class="addthis_sharing_toolbox"></div>
+
+  <article class="post-content">
+    <p>The functionality and sheer usefulness of Drill is growing fast.  If you&#39;re a user of some of the popular BI tools out there like Tableau or SAP Lumira, now is a good time to take a look at how Drill can make your life easier, especially if  you&#39;re faced with the task of quickly getting a handle on large sets of unstructured data.  With schema generated on the fly, you can save a lot of time and headaches by running SQL queries on the data where it rests without knowing much about columns or formats.  There&#39;s even more good news:  Drill also works with data stored in the cloud.  With a few simple steps, you can configure the S3 storage plugin for Drill and be off to the races running queries.  In this post we&#39;ll look at how to configure Drill to access data stored in an S3 bucket.</p>
+
+<p>If you&#39;re more of a visual person, you can skip this article entirely and <a href="https://www.youtube.com/watch?v=w8gZ2nn_ZUQ">go straight to a video</a> I put together that walks through an end-to-end example with Tableau.  This example is easily extended to other BI tools, as the steps are identical on the Drill side.</p>
+
+<p>At a high level, configuring Drill to access S3 bucket data is accomplished with the following steps on each node running a drillbit.</p>
+
+<ul>
+<li>Download and install the <a href="http://www.jets3t.org/">JetS3t</a> JAR files and enable them.</li>
+<li>Add your S3 credentials in the relevant XML configuration file.</li>
+<li>Configure and enable the S3 storage plugin through the Drill web interface.</li>
+<li>Connect your BI tool of choice and query away.</li>
+</ul>
+
+<p>Consult the <a href="https://cwiki.apache.org/confluence/display/DRILL/Architectural+Overview">Architectural Overview</a> for a refresher on the architecture of Drill.</p>
+
+<h2 id="prerequisites">Prerequisites</h2>
+
+<p>These steps assume you have a <a href="https://cwiki.apache.org/confluence/display/DRILL/Apache+Drill+in+10+Minutes">typical Drill cluster and ZooKeeper quorum</a> configured and running.  To access data in S3, you will need an S3 bucket configured and have the required Amazon security credentials in your possession.  An <a href="http://blogs.aws.amazon.com/security/post/Tx1R9KDN9ISZ0HF/Where-s-my-secret-access-key">Amazon blog post</a> has more information on how to get these from your account.</p>
+
+<h2 id="configuration-steps">Configuration Steps</h2>
+
+<p>To connect Drill to S3, all of the drillbit nodes will need access to the open-source JetS3t library.  As of this writing, 0.9.2 is the latest version, but you might want to check <a href="https://jets3t.s3.amazonaws.com/toolkit/toolkit.html">the main page</a> to see if anything has been updated.  Be sure to get version 0.9.2 or later, as earlier versions have a bug relating to reading Parquet data.</p>
+<div class="highlight"><pre><code class="language-bash" data-lang="bash">wget http://bitbucket.org/jmurty/jets3t/downloads/jets3t-0.9.2.zip
+cp jets3t-0.9.2/jars/jets3t-0.9.2.jar <span class="nv">$DRILL_HOME</span>/jars/3rdparty
+</code></pre></div>
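The snippet above copies the jar out of an already-extracted directory, so the downloaded archive needs to be unpacked first. A minimal end-to-end sequence, assuming unzip is available on the node and $DRILL_HOME points at your Drill install, might look like this:

    # fetch the JetS3t 0.9.2 release, unpack it, and drop the jar into Drill's 3rd-party jars
    wget http://bitbucket.org/jmurty/jets3t/downloads/jets3t-0.9.2.zip
    unzip jets3t-0.9.2.zip
    cp jets3t-0.9.2/jars/jets3t-0.9.2.jar $DRILL_HOME/jars/3rdparty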
+<p>Next, enable the plugin by editing the file:</p>
+<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$DRILL_HOME</span>/bin/hadoop_excludes.txt
+</code></pre></div>
+<p>and removing the line <code>jets3t</code>.</p>
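If you would rather script that edit than open the file by hand, a one-liner along these lines should do it on each node (a sketch; GNU sed shown, so adjust the -i flag on macOS/BSD sed and back the file up first):

    # remove the jets3t entry from Drill's Hadoop excludes list
    sed -i '/^jets3t/d' $DRILL_HOME/bin/hadoop_excludes.txt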
+
+<p>Drill will need to know your S3 credentials in order to access data there. These credentials will need to be placed in the core-site.xml file for your installation.  If you already have a core-site.xml file configured for your environment, add the following parameters to it, otherwise create the file from scratch.  If you do end up creating it from scratch you will need to wrap these parameters with <code>&lt;configuration&gt;</code> and <code>&lt;/configuration&gt;</code>.</p>
+<div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;property&gt;</span>
+  <span class="nt">&lt;name&gt;</span>fs.s3.awsAccessKeyId<span class="nt">&lt;/name&gt;</span>
+  <span class="nt">&lt;value&gt;</span>ID<span class="nt">&lt;/value&gt;</span>
+<span class="nt">&lt;/property&gt;</span>
+
+<span class="nt">&lt;property&gt;</span>
+  <span class="nt">&lt;name&gt;</span>fs.s3.awsSecretAccessKey<span class="nt">&lt;/name&gt;</span>
+  <span class="nt">&lt;value&gt;</span>SECRET<span class="nt">&lt;/value&gt;</span>
+<span class="nt">&lt;/property&gt;</span>
+
+<span class="nt">&lt;property&gt;</span>
+  <span class="nt">&lt;name&gt;</span>fs.s3n.awsAccessKeyId<span class="nt">&lt;/name&gt;</span>
+  <span class="nt">&lt;value&gt;</span>ID<span class="nt">&lt;/value&gt;</span>
+<span class="nt">&lt;/property&gt;</span>
+
+<span class="nt">&lt;property&gt;</span>
+  <span class="nt">&lt;name&gt;</span>fs.s3n.awsSecretAccessKey<span class="nt">&lt;/name&gt;</span>
+  <span class="nt">&lt;value&gt;</span>SECRET<span class="nt">&lt;/value&gt;</span>
+<span class="nt">&lt;/property&gt;</span>
+</code></pre></div>
+<p>The steps so far give Drill enough information to connect to the S3 service.  Remember, you have to do this on all the nodes running drillbit.</p>
+
+<p>Next, let&#39;s go into the Drill web interface and enable the S3 storage plugin.  In this case you only need to connect to <strong>one</strong> of the nodes because Drill&#39;s configuration is synchronized across the cluster.  Complete the following steps:</p>
+
+<ol>
+<li>Point your browser to <code>http://&lt;host&gt;:8047</code></li>
+<li>Select the &#39;Storage&#39; tab.</li>
+<li>A good starting configuration for S3 can be entirely the same as the <code>dfs</code> plugin, except the connection parameter is changed to <code>s3://bucket</code>.  So first select the <code>Update</code> button for <code>dfs</code>, then select the text area and copy it into the clipboard (on Windows, ctrl-A, ctrl-C works).</li>
+<li>Press <code>Back</code>, then create a new plugin by typing the name into the <code>New Storage Plugin</code>, then press <code>Create</code>.  You can choose any name, but a good convention is to use <code>s3-&lt;bucketname&gt;</code> so you can easily identify it later.</li>
+<li>In the configuration area, paste the configuration you just grabbed from &#39;dfs&#39;.  Change the line <code>connection: &quot;file:///&quot;</code> to <code>connection: &quot;s3://&lt;bucket&gt;&quot;</code>.</li>
+<li>Click <code>Update</code>.  You should see a message that indicates success.</li>
+</ol>
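For reference, the finished plugin configuration ends up looking roughly like the sketch below: it is the dfs configuration you copied with only the connection value changed, saved under a name such as s3-<bucketname>. The bucket name is a placeholder, and the workspaces and formats sections are simply whatever came over from dfs:

    {
      "type": "file",
      "enabled": true,
      "connection": "s3://<bucket>",
      "workspaces": {
        "root": {
          "location": "/",
          "writable": false,
          "defaultInputFormat": null
        }
      },
      "formats": { . . . }
    }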
+
+<p>At this point you can run queries on the data directly and you have a couple of options on how you want to access it.  You can use Drill Explorer and create a custom view (based on an SQL query) that you can then access in Tableau or other BI tools, or just use Drill directly from within the tool.</p>
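To make that concrete, here is the kind of query you could run from the Drill shell once the plugin is enabled. The plugin name follows the s3-<bucketname> convention suggested above, and the bucket, path, and view names are hypothetical:

    -- query an object in the bucket directly
    SELECT * FROM `s3-mybucket`.`data/users.json` LIMIT 10;

    -- or wrap it in a view that Drill Explorer and BI tools such as Tableau can consume
    CREATE VIEW dfs.tmp.users_from_s3 AS
    SELECT * FROM `s3-mybucket`.`data/users.json`;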
+
+<p>You may want to check out the <a href="http://www.youtube.com/watch?v=jNUsprJNQUg">Tableau demo</a>.</p>
+
+<p>With just a few lines of configuration, you&#39;ve just opened the vast world of data available in the Amazon cloud and reduced the amount of work you have to do in advance to access data stored there with SQL.  There are even some <a href="https://aws.amazon.com/datasets">public datasets</a> available directly on S3 that are great for experimentation.</p>
+
+<p>Happy Drilling!</p>
+
+  </article>
+ <div id="disqus_thread"></div>
+    <script type="text/javascript">
+        /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
+        var disqus_shortname = 'drill'; // required: replace example with your forum shortname
+
+        /* * * DON'T EDIT BELOW THIS LINE * * */
+        (function() {
+            var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
+            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
+            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
+        })();
+    </script>
+    <noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
+    
+</div>
+<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=ra-548b2caa33765e8d" async="async"></script>
+
+  </div>
+  <p class="push"></p>
+<div id="footer" class="mw">
+<div class="wrapper">
+Copyright © 2012-2014 The Apache Software Foundation, licensed under the Apache License, Version 2.0.<br>
+Apache and the Apache feather logo are trademarks of The Apache Software Foundation. Other names appearing on the site may be trademarks of their respective owners.<br/><br/>
+</div>
+</div>
+
+  <script>
+(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+
+ga('create', 'UA-53379651-1', 'auto');
+ga('send', 'pageview');
+</script>
+<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=ra-548b2caa33765e8d" async="async"></script>
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/blog/index.html
----------------------------------------------------------------------
diff --git a/blog/index.html b/blog/index.html
index d66b889..9339ac9 100644
--- a/blog/index.html
+++ b/blog/index.html
@@ -114,6 +114,11 @@
 </div>
 
 <div class="int_text" align="left"><!-- previously: site.posts -->
+<p><a class="post-link" href="/blog/2018/02/09/running-sql-queries-on-amazon-s3/">Running SQL Queries on Amazon S3</a><br/>
+<span class="post-date">Posted on Feb 9, 2018
+by Nick Amato</span>
+<br/>Drill enables you to run SQL queries directly on data in S3. There's no need to ingest the data into a managed cluster or transform the data. This is a step-by-step tutorial on how to use Drill with S3.</p>
+<!-- previously: site.posts -->
 <p><a class="post-link" href="/blog/2017/12/15/drill-1.12-released/">Drill 1.12 Released</a><br/>
 <span class="post-date">Posted on Dec 15, 2017
 by Bridget Bevens</span>
@@ -229,11 +234,6 @@ by Tomer Shiran</span>
 by Tomer Shiran</span>
 <br/>Join us on Twitter for a live Q&A on Wednesday, December 17.</p>
 <!-- previously: site.posts -->
-<p><a class="post-link" href="/blog/2014/12/09/running-sql-queries-on-amazon-s3/">Running SQL Queries on Amazon S3</a><br/>
-<span class="post-date">Posted on Dec 9, 2014
-by Nick Amato</span>
-<br/>Drill enables you to run SQL queries directly on data in S3. There's no need to ingest the data into a managed cluster or transform the data. This is a step-by-step tutorial on how to use Drill with S3.</p>
-<!-- previously: site.posts -->
 <p><a class="post-link" href="/blog/2014/12/02/drill-top-level-project/">Apache Drill Graduates to a Top-Level Project</a><br/>
 <span class="post-date">Posted on Dec 2, 2014
 by Tomer Shiran</span>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/apache-drill-m1-release-notes-apache-drill-alpha/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-m1-release-notes-apache-drill-alpha/index.html b/docs/apache-drill-m1-release-notes-apache-drill-alpha/index.html
index 13f40ef..91d0ae2 100644
--- a/docs/apache-drill-m1-release-notes-apache-drill-alpha/index.html
+++ b/docs/apache-drill-m1-release-notes-apache-drill-alpha/index.html
@@ -1153,7 +1153,7 @@
 
     <div class="int_text" align="left">
       
-        <h3 id="milestone-1-goals">Milestone 1 Goals</h3>
+        <h2 id="milestone-1-goals">Milestone 1 Goals</h2>
 
 <p>The first release of Apache Drill is designed as a technology preview for
 people to better understand the architecture and vision. It is a functional
@@ -1174,7 +1174,7 @@ architectural analysis and performance optimization.</p>
 <li>Support complex data type manipulation via logical plan operations</li>
 </ul>
 
-<h3 id="known-issues">Known Issues</h3>
+<h2 id="known-issues">Known Issues</h2>
 
 <p>SQL Parsing<br>
 Because Apache Drill is built to support late-bound changing schemas while SQL

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/configuring-jreport-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-jreport-with-drill/index.html b/docs/configuring-jreport-with-drill/index.html
index 8a803a3..e2c6c77 100644
--- a/docs/configuring-jreport-with-drill/index.html
+++ b/docs/configuring-jreport-with-drill/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1165,9 +1165,7 @@
 <li>Use JReport Designer to query the data and create a report.</li>
 </ol>
 
-<hr>
-
-<h3 id="step-1:-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
+<h2 id="step-1:-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h2>
 
 <p>Drill provides standard JDBC connectivity to integrate with JReport. JReport 13.1 requires Drill 1.0 or later.
 For general instructions on installing the Drill JDBC driver, see <a href="/docs/using-the-jdbc-driver/">Using JDBC</a>.</p>
@@ -1185,9 +1183,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 <li><p>Verify that the JReport system can resolve the hostnames of the ZooKeeper nodes of the Drill cluster. You can do this by configuring DNS for all of the systems. Alternatively, you can edit the hosts file on the JReport system to include the hostnames and IP addresses of all the ZooKeeper nodes used with the Drill cluster.  For Linux systems, the hosts file is located at <code>/etc/hosts</code>. For Windows systems, the hosts file is located at <code>%WINDIR%\system32\drivers\etc\hosts</code>  Here is an example of a Windows hosts file: <img src="/docs/img/jreport-hostsfile.png" alt="drill query flow"></p></li>
 </ol>
 
-<hr>
-
-<h3 id="step-2:-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
+<h2 id="step-2:-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h2>
 
 <ol>
 <li> Click Create <strong>New -&gt; Catalog…</strong></li>
@@ -1202,7 +1198,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 <li>Click <strong>Done</strong> when you have added all the tables you need. </li>
 </ol>
 
-<h3 id="step-3:-use-jreport-designer">Step 3: Use JReport Designer</h3>
+<h2 id="step-3:-use-jreport-designer">Step 3: Use JReport Designer</h2>
 
 <ol>
 <li> In the Catalog Browser, right-click <strong>Queries</strong> and select <strong>Add Query…</strong></li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/configuring-tibco-spotfire-server-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-tibco-spotfire-server-with-drill/index.html b/docs/configuring-tibco-spotfire-server-with-drill/index.html
index ae80f88..1b039af 100644
--- a/docs/configuring-tibco-spotfire-server-with-drill/index.html
+++ b/docs/configuring-tibco-spotfire-server-with-drill/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1166,9 +1166,7 @@
 <li>Query and analyze various data formats with Tibco Spotfire and Drill.</li>
 </ol>
 
-<hr>
-
-<h3 id="step-1:-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
+<h2 id="step-1:-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h2>
 
 <p>Drill provides standard JDBC connectivity, making it easy to integrate data exploration capabilities on complex, schema-less data sets. Tibco Spotfire Server (TSS) requires Drill 1.0 or later, which incudes the JDBC driver. The JDBC driver is bundled with the Drill configuration files, and it is recommended that you use the JDBC driver that is shipped with the specific Drill version.</p>
 
@@ -1194,9 +1192,7 @@ For Windows systems, the hosts file is located here:
 <code>%WINDIR%\system32\drivers\etc\hosts</code></p></li>
 </ol>
 
-<hr>
-
-<h3 id="step-2:-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
+<h2 id="step-2:-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h2>
 
 <p>The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.</p>
 
@@ -1213,7 +1209,7 @@ For Windows systems, the hosts file is located here:
 <li>Restart TSS to enable it to use the Drill data source template.</li>
 </ol>
 
-<h4 id="xml-template">XML Template</h4>
+<p><strong>XML Template</strong></p>
 
 <p>Make sure that you enter the correct ZooKeeper node name instead of <code>&lt;zk-node&gt;</code>, as well as the correct Drill cluster name instead of <code>&lt;drill-cluster-name&gt;</code> in the example below. This is just a template that will appear whenever a data source is configured. The hostnames of ZooKeeper nodes and the Drill cluster name can be found in the <code>$DRILL_HOME/conf/drill-override.conf</code> file on any of the Drill nodes in the cluster.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">  &lt;jdbc-type-settings&gt;
@@ -1249,9 +1245,7 @@ For Windows systems, the hosts file is located here:
   &lt;/java-to-sql-type-conversions&gt;
   &lt;/jdbc-type-settings&gt;
 </code></pre></div>
-<hr>
-
-<h3 id="step-3:-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
+<h2 id="step-3:-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h2>
 
 <p>To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.</p>
 
@@ -1266,9 +1260,7 @@ For Windows systems, the hosts file is located here:
 <li>When the data source is saved, it will appear in the <strong>Data Sources</strong> tab, and you will be able to navigate the schema. <img src="/docs/img/spotfire-server-datasources-tab.png" alt="drill query flow"></li>
 </ol>
 
-<hr>
-
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h2 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
 
 <p>After the Drill data source has been configured in the Information Designer, the information elements can be defined. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/configuring-user-security/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-security/index.html b/docs/configuring-user-security/index.html
index 9bf514c..769ff0a 100644
--- a/docs/configuring-user-security/index.html
+++ b/docs/configuring-user-security/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     Feb 8, 2018
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1160,7 +1160,8 @@
 <p>Authentication is the process of establishing confidence of authenticity. A Drill client user is authenticated when a drillbit process running in a Drill cluster confirms the identity it is presented with.  Drill supports several authentication mechanisms through which users can prove their identity before accessing cluster data: </p>
 
 <ul>
-<li><strong>Kerberos</strong> - See <a href="/docs/configuring-kerberos-security/">Configuring Kerberos Security</a>.</li>
+<li><strong>Kerberos</strong> - </li>
+<li>See <a href="/docs/configuring-kerberos-security/">Configuring Kerberos Security</a>.</li>
 <li><strong>Plain</strong> [also known as basic authentication (auth), which is username and password-based authentication, through the Linux Pluggable Authentication Module (PAM)] - See <a href="/docs/configuring-plain-security/">Configuring Plain Security</a>.</li>
 <li><strong>Custom authenticators</strong> - See <a href="/docs/creating-custom-authenticators">Creating Custom Authenticators</a>.</li>
 </ul>
@@ -1194,7 +1195,7 @@
 
 <p><img src="/docs/img/client-encrypt-compatibility.png" alt="compatEncrypt"></p>
 
-<p>See <em>Client Encryption</em> in <a href="/docs/server-communication-paths/#configuring-kerberos-security#client-encryption">Configuring Kerberos Security</a> for the client connection string parameter, <code>sasl_encrypt</code> usage information.</p>
+<p>See <em>Client Encryption</em> in <a href="/docs/configuring-kerberos-authentication/#client-encryption">Configuring Kerberos Security</a> for the client connection string parameter, <code>sasl_encrypt</code> usage information.</p>
 
 <h2 id="impersonation">Impersonation</h2>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/drill-plan-syntax/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-plan-syntax/index.html b/docs/drill-plan-syntax/index.html
index 4286097..6ddfe4f 100644
--- a/docs/drill-plan-syntax/index.html
+++ b/docs/drill-plan-syntax/index.html
@@ -1149,13 +1149,13 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
     <div class="int_text" align="left">
       
-        <h3 id="whats-the-plan?">Whats the plan?</h3>
+        <h2 id="whats-the-plan?">Whats the plan?</h2>
 
 <p>This section is about the end-to-end plan flow for Drill. The incoming query
 to Drill can be a SQL 2003 query/DrQL or MongoQL. The query is converted to a

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/explain/index.html
----------------------------------------------------------------------
diff --git a/docs/explain/index.html b/docs/explain/index.html
index 2f92f92..2e05e76 100644
--- a/docs/explain/index.html
+++ b/docs/explain/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1186,17 +1186,17 @@ you are selecting from, you are likely to see plan changes.</p>
 </code></pre></div>
 <p>where <code>query</code> is any valid SELECT statement supported by Drill.</p>
 
-<h5 id="including-all-attributes">INCLUDING ALL ATTRIBUTES</h5>
+<p><strong>INCLUDING ALL ATTRIBUTES</strong></p>
 
 <p>This option returns costing information. You can use this option for both
-physical and logical plans.</p>
+physical and logical plans.  </p>
 
-<h4 id="with-implementation-|-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
+<p><strong>WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</strong></p>
 
 <p>These options return the physical and logical plan information, respectively.
 The default is physical (WITH IMPLEMENTATION).</p>
 
-<h2 id="explain-for-physical-plans">EXPLAIN for Physical Plans</h2>
+<h3 id="explain-for-physical-plans">EXPLAIN for Physical Plans</h3>
 
 <p>The EXPLAIN PLAN FOR <query> command returns the chosen physical execution
 plan for a query statement without running the query. You can use this command
@@ -1253,7 +1253,7 @@ for submitting the query via Drill APIs.</p>
   },
 ....
 </code></pre></div>
-<h2 id="costing-information">Costing Information</h2>
+<p><strong>Costing Information</strong></p>
 
 <p>Add the INCLUDING ALL ATTRIBUTES option to the EXPLAIN command to see cost
 estimates for the query plan. For example:</p>
@@ -1270,7 +1270,7 @@ estimates for the query plan. For example:</p>
 00-05           Project(T1¦¦*=[$0], type=[$1]): rowcount = 1.0, cumulative cost = {1.0 rows, 8.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 884
 00-06               Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`*`], files=[file:/home/donuts/donuts.json]]]): rowcount = 1.0, cumulative cost = {0.0 rows, 0.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 883
 </code></pre></div>
-<h2 id="explain-for-logical-plans">EXPLAIN for Logical Plans</h2>
+<h3 id="explain-for-logical-plans">EXPLAIN for Logical Plans</h3>
 
 <p>To return the logical plan for a query (again, without actually running the
 query), use the EXPLAIN PLAN WITHOUT IMPLEMENTATION syntax:</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/lexical-structure/index.html
----------------------------------------------------------------------
diff --git a/docs/lexical-structure/index.html b/docs/lexical-structure/index.html
index 9b106bc..28ebfdb 100644
--- a/docs/lexical-structure/index.html
+++ b/docs/lexical-structure/index.html
@@ -1147,7 +1147,7 @@
 
     </div>
 
-     Nov 14, 2017
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1225,7 +1225,7 @@
 <li><p>Timestamp: 2008-12-15 22:55:55.12345</p></li>
 </ul>
 
-<p>If you have dates and times in other formats, use a <a href="/data-type-conversion/#other-data-type-conversions">data type conversion function</a> in your queries.</p>
+<p>If you have dates and times in other formats, use a <a href="/docs/data-type-conversion/">data type conversion function</a> in your queries.</p>
 
 <h3 id="identifiers">Identifiers</h3>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/querying-complex-data-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-complex-data-introduction/index.html b/docs/querying-complex-data-introduction/index.html
index 08e5a51..e831285 100644
--- a/docs/querying-complex-data-introduction/index.html
+++ b/docs/querying-complex-data-introduction/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1159,8 +1159,9 @@
 trying to access, regardless of its source system or its schema and data
 types. The sweet spot for Apache Drill is a SQL query workload against
 <em>complex data</em>: data made up of various types of records and fields, rather
-than data in a recognizable relational form (discrete rows and columns). Drill
-is capable of discovering the form of the data when you submit the query.
+than data in a recognizable relational form (discrete rows and columns). </p>
+
+<p>Drill is capable of discovering the form of the data when you submit the query.
 Nested data formats such as JSON (JavaScript Object Notation) files and
 Parquet files are not only <em>accessible</em>: Drill provides special operators and
 functions that you can use to <em>drill down</em> into these files and ask
@@ -1192,7 +1193,7 @@ examples show how to use the Drill extensions in the context of standard SQL
 SELECT statements. For the most part, the extensions use standard JavaScript
 notation for referencing data elements in a hierarchy.</p>
 
-<h3 id="before-you-begin">Before You Begin</h3>
+<h2 id="before-you-begin">Before You Begin</h2>
 
 <p>The examples in this section operate on JSON data files. In order to write
 your own queries, you need to be aware of the basic data types in these files:</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/querying-sequence-files/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-sequence-files/index.html b/docs/querying-sequence-files/index.html
index 01c897e..bb1a465 100644
--- a/docs/querying-sequence-files/index.html
+++ b/docs/querying-sequence-files/index.html
@@ -1149,32 +1149,27 @@
 
     </div>
 
-     Nov 21, 2016
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
     <div class="int_text" align="left">
       
-        <p>Sequence files are flat files storing binary key value pairs.
-Drill projects sequence files as table with two columns &#39;binary_key&#39;, &#39;binary_value&#39;.</p>
+        <p>Sequence files are flat files that store binary key value pairs.
+Drill projects sequence files as a table with two columns &#39;binary_key&#39;, &#39;binary_value&#39;.</p>
 
-<h3 id="querying-sequence-file.">Querying sequence file.</h3>
+<h2 id="querying-a-sequence-file">Querying a Sequence File</h2>
 
-<p>Start drill shell</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    SELECT *
-    FROM dfs.tmp.`simple.seq`
-    LIMIT 1;
+<p>Start the Drill shell and enter your query.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">    SELECT * FROM dfs.tmp.`simple.seq` LIMIT 1;
     +--------------+---------------+
     |  binary_key  | binary_value  |
     +--------------+---------------+
     | [B@70828f46  | [B@b8c765f    |
     +--------------+---------------+
 </code></pre></div>
-<p>Since simple.seq contains byte serialized strings as keys and values, we can convert them to strings.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    SELECT CONVERT_FROM(binary_key, &#39;UTF8&#39;), CONVERT_FROM(binary_value, &#39;UTF8&#39;)
-    FROM dfs.tmp.`simple.seq`
-    LIMIT 1
-    ;
+<p>Since simple.seq contains byte serialized strings as keys and values, you can convert them to strings.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">    SELECT CONVERT_FROM(binary_key, &#39;UTF8&#39;), CONVERT_FROM(binary_value, &#39;UTF8&#39;) FROM dfs.tmp.`simple.seq` LIMIT 1;
     +-----------+-------------+
     |  EXPR$0   |   EXPR$1    |
     +-----------+-------------+

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/rest-api-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/rest-api-introduction/index.html b/docs/rest-api-introduction/index.html
index caae142..a533b20 100644
--- a/docs/rest-api-introduction/index.html
+++ b/docs/rest-api-introduction/index.html
@@ -1149,13 +1149,13 @@
 
     </div>
 
-     Jan 22, 2018
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
     <div class="int_text" align="left">
       
-        <p>The Drill REST API provides programmatic access to Drill through the <a href="/starting-the-web-console/">Web Console</a>. Using HTTP requests, you can run queries, perform storage plugin tasks, such as creating a storage plugin, obtain profiles of queries, and get current memory metrics. </p>
+        <p>The Drill REST API provides programmatic access to Drill through the <a href="/docs/starting-the-web-console/">Web Console</a>. Using HTTP requests, you can run queries, perform storage plugin tasks, such as creating a storage plugin, obtain profiles of queries, and get current memory metrics. </p>
 
 <p>AN HTTP request uses the familiar Web Console URI:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/rpc-overview/index.html
----------------------------------------------------------------------
diff --git a/docs/rpc-overview/index.html b/docs/rpc-overview/index.html
index fd2a585..0436e53 100644
--- a/docs/rpc-overview/index.html
+++ b/docs/rpc-overview/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     Aug 7, 2017
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1188,8 +1188,6 @@ Body (bytes), RawBody (bytes).</p>
 
 <p><img src="/docs/img/drill-channel-pipeline-with-handlers.png" alt="drillpipeline">  </p>
 
-<h6 id="drill-channel-pipeline-with-handlers">Drill Channel Pipeline with Handlers</h6>
-
     
       
         <div class="doc-nav">

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/sequence-files/index.html
----------------------------------------------------------------------
diff --git a/docs/sequence-files/index.html b/docs/sequence-files/index.html
index cd704be..7601ecb 100644
--- a/docs/sequence-files/index.html
+++ b/docs/sequence-files/index.html
@@ -1147,7 +1147,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1156,7 +1156,7 @@
         <p>Hadoop Sequence files (<a href="https://wiki.apache.org/hadoop/SequenceFile">https://wiki.apache.org/hadoop/SequenceFile</a>) are flat files storing binary key, value pairs.
 Drill projects sequence files as table with two columns - &#39;binary_key&#39;, &#39;binary_value&#39; of type VARBINARY.</p>
 
-<h3 id="storage-plugin-format-for-sequence-files.">Storage plugin format for sequence files.</h3>
+<h2 id="storage-plugin-format-for-sequence-files">Storage Plugin Format for Sequence Files</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">. . .
 &quot;sequencefile&quot;: {
   &quot;type&quot;: &quot;sequencefile&quot;,
@@ -1166,7 +1166,7 @@ Drill projects sequence files as table with two columns - &#39;binary_key&#39;,
 },
 . . .
 </code></pre></div>
-<h3 id="querying-sequence-file.">Querying sequence file.</h3>
+<h2 id="querying-a-sequence-file">Querying a Sequence File</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT *
 FROM dfs.tmp.`simple.seq`
 LIMIT 1;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/show-tables/index.html
----------------------------------------------------------------------
diff --git a/docs/show-tables/index.html b/docs/show-tables/index.html
index a9c1119..e229a2b 100644
--- a/docs/show-tables/index.html
+++ b/docs/show-tables/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1177,7 +1177,7 @@ only want information from the <code>dfs.myviews</code> schema:</p>
 <p>When you use a particular schema and then issue the SHOW TABLES command, Drill
 returns the tables and views within that schema.</p>
 
-<h4 id="limitations">Limitations</h4>
+<h2 id="limitations">Limitations</h2>
 
 <ul>
 <li><p>You can create and query tables within the file system, however Drill does not return these tables when you issue the SHOW TABLES command. You can issue the <a href="/docs/show-files-command">SHOW FILES </a>command to see a list of all files, tables, and views, including those created in Drill. </p></li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/supported-sql-commands/index.html
----------------------------------------------------------------------
diff --git a/docs/supported-sql-commands/index.html b/docs/supported-sql-commands/index.html
index 4c884a2..2622573 100644
--- a/docs/supported-sql-commands/index.html
+++ b/docs/supported-sql-commands/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     Feb 8, 2018
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/text-files-csv-tsv-psv/index.html
----------------------------------------------------------------------
diff --git a/docs/text-files-csv-tsv-psv/index.html b/docs/text-files-csv-tsv-psv/index.html
index 6095bce..2e8dc4f 100644
--- a/docs/text-files-csv-tsv-psv/index.html
+++ b/docs/text-files-csv-tsv-psv/index.html
@@ -1147,7 +1147,7 @@
 
     </div>
 
-     Mar 21, 2016
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1161,7 +1161,7 @@
 <li>Use a distributed file system<br></li>
 </ul>
 
-<h3 id="select-data-from-particular-columns">Select Data from Particular Columns</h3>
+<h2 id="select-data-from-particular-columns">Select Data from Particular Columns</h2>
 
 <p>Converting text files to another format, such as Parquet, using the CTAS command and a SELECT * statement is not recommended. Instead, you should select data from particular columns. If your text files have no headers, use the <a href="/docs/querying-plain-text-files">COLUMN[n] syntax</a>, and then assign meaningful column names using aliases. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">CREATE TABLE parquet_users AS SELECT CAST(COLUMNS[0] AS INT) AS user_id,
@@ -1175,7 +1175,7 @@ FROM `users.csv1`;
 username, CAST(registration_date AS TIMESTAMP) AS registration_date
 FROM `users.csv1`;
 </code></pre></div>
-<h3 id="cast-data">Cast data</h3>
+<h2 id="cast-data">Cast Data</h2>
 
 <p>You can also improve performance by casting the VARCHAR data in a text file to INT, FLOAT, DATETIME, and so on when you read the data from a text file. Drill performs better reading fixed-width than reading VARCHAR data. </p>
 
@@ -1194,11 +1194,11 @@ FROM `users.csv1`;
 </code></pre></div></li>
 </ul>
 
-<h3 id="use-a-distributed-file-system">Use a Distributed File System</h3>
+<h2 id="use-a-distributed-file-system">Use a Distributed File System</h2>
 
 <p>Using a distributed file system, such as HDFS, instead of a local file system to query files improves performance because Drill attempts to split files on block boundaries.</p>
 
-<h2 id="configuring-drill-to-read-text-files">Configuring Drill to Read Text Files</h2>
+<p><strong>Configuring Drill to Read Text Files</strong> </p>
 
 <p>In the storage plugin configuration, you <a href="/docs/plugin-configuration-basics/#list-of-attributes-and-definitions">set the attributes</a> that affect how Drill reads CSV, TSV, PSV (comma-, tab-, pipe-separated) files:  </p>
 
@@ -1213,7 +1213,7 @@ FROM `users.csv1`;
 
 <p>Set the <code>sys.options</code> property setting <code>exec.storage.enable_new_text_reader</code> to true (the default) before attempting to use these attributes. </p>
 
-<h3 id="using-quotation-marks">Using Quotation Marks</h3>
+<p><strong>Using Quotation Marks</strong> </p>
 
 <p>CSV files typically enclose text fields in double quotation marks, and Drill treats the double quotation mark in CSV files as a special character accordingly. By default, Drill treats double quotation marks as a special character in TSV files also. If you want Drill <em>not</em> to treat double quotation marks as a special character, configure the storage plugin to set the <code>quote</code> attribute to the unicode null <code>&quot;\u0000&quot;</code>. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   . . .
@@ -1229,11 +1229,11 @@ FROM `users.csv1`;
 </code></pre></div>
 <p>As mentioned previously, set the <code>sys.options</code> property setting <code>exec.storage.enable_new_text_reader</code> to true (the default).</p>
 
-<h2 id="examples-of-querying-text-files">Examples of Querying Text Files</h2>
+<p><strong>Examples of Querying Text Files</strong></p>
 
 <p>The examples in this section show the results of querying CSV files that use and do not use a header, include comments, and use an escape character:</p>
 
-<h3 id="not-using-a-header-in-a-file">Not Using a Header in a File</h3>
+<p><strong>Not Using a Header in a File</strong></p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&quot;csv&quot;: {
   &quot;type&quot;: &quot;text&quot;,
   &quot;extensions&quot;: [
@@ -1258,7 +1258,7 @@ FROM `users.csv1`;
 +------------------------+
 7 rows selected (0.112 seconds)
 </code></pre></div>
-<h3 id="using-a-header-in-a-file">Using a Header in a File</h3>
+<p><strong>Using a Header in a File</strong></p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&quot;csv&quot;: {
   &quot;type&quot;: &quot;text&quot;,
   &quot;extensions&quot;: [
@@ -1284,7 +1284,7 @@ FROM `users.csv1`;
 +-------+------+------+------+
 7 rows selected (0.12 seconds)
 </code></pre></div>
-<h3 id="file-with-no-header">File with no Header</h3>
+<p><strong>File with no Header</strong></p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&quot;csv&quot;: {
   &quot;type&quot;: &quot;text&quot;,
   &quot;extensions&quot;: [
@@ -1310,7 +1310,7 @@ FROM `users.csv1`;
 +------------------------+
 7 rows selected (0.112 seconds)
 </code></pre></div>
-<h3 id="escaping-a-character-in-a-file">Escaping a Character in a File</h3>
+<p><strong>Escaping a Character in a File</strong></p>
 
 <p><img src="/docs/img/csv_with_escape.png" alt="CSV with escape"></p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; SELECT * FROM dfs.`/tmp/csv_with_escape.csv`;
@@ -1327,7 +1327,7 @@ FROM `users.csv1`;
 +------------------------------------------------------------------------+
 7 rows selected (0.104 seconds)
 </code></pre></div>
-<h3 id="adding-comments-to-a-file">Adding Comments to a File</h3>
+<p><strong>Adding Comments to a File</strong></p>
 
 <p><img src="/docs/img/csv_with_comments.png" alt="CSV with comments"></p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; SELECT * FROM dfs.`/tmp/csv_with_comments.csv2`;
@@ -1350,7 +1350,7 @@ FROM `users.csv1`;
 
 <p>You can deal with a mix of text files with and without headers either by creating two separate format plugins or by creating two format plugins within the same storage plugin. The former approach is typically easier than the latter.</p>
 
-<h3 id="creating-two-separate-storage-plugin-configurations">Creating Two Separate Storage Plugin Configurations</h3>
+<p><strong>Creating Two Separate Storage Plugin Configurations</strong></p>
 
 <p>A storage plugin configuration defines a root directory that Drill targets. You can use a different configuration for each root directory that sets attributes to match the files stored below that directory. All files can use the same extension, such as .csv, as shown in the following example:</p>
 
@@ -1376,7 +1376,7 @@ FROM `users.csv1`;
   &quot;delimiter&quot;: &quot;,&quot;
 },
 </code></pre></div>
-<h3 id="creating-one-storage-plugin-configuration-to-handle-multiple-formats">Creating One Storage Plugin Configuration to Handle Multiple Formats</h3>
+<p><strong>Creating One Storage Plugin Configuration to Handle Multiple Formats</strong>  </p>
 
 <p>You can use a different extension for files with and without a header, and use a storage plugin that looks something like the following example. This method requires renaming some files to use the csv2 extension.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&quot;csv&quot;: {
@@ -1401,7 +1401,7 @@ FROM `users.csv1`;
 
 <p>A common use case when working with Hadoop is to store and query text files, such as CSV and TSV. To get better performance and efficient storage, you convert these files into Parquet. You can use code to achieve this, as you can see in the <a href="https://github.com/Parquet/parquet-compatibility/blob/master/parquet-compat/src/test/java/parquet/compat/test/ConvertUtils.java">ConvertUtils</a> sample/test class. A simpler way to convert these text files to Parquet is to query the text files using Drill, and save the result to Parquet files.</p>
 
-<h3 id="how-to-convert-csv-to-parquet">How to Convert CSV to Parquet</h3>
+<p><strong>How to Convert CSV to Parquet</strong></p>
 
 <p>This example uses the <a href="http://media.flysfo.com/media/sfo/media/air-traffic/Passenger_4.zip">Passenger Dataset</a> from SFO Air Traffic Statistics.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/using-apache-drill-with-tableau-10-2/index.html
----------------------------------------------------------------------
diff --git a/docs/using-apache-drill-with-tableau-10-2/index.html b/docs/using-apache-drill-with-tableau-10-2/index.html
index dcb890f..fa75b87 100644
--- a/docs/using-apache-drill-with-tableau-10-2/index.html
+++ b/docs/using-apache-drill-with-tableau-10-2/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     Mar 31, 2017
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1167,9 +1167,7 @@
 
 <p>This document describes how to connect Tableau 10.2 to Apache Drill and instantly explore multiple data formats from various data sources.  </p>
 
-<hr>
-
-<h3 id="prerequisites">Prerequisites</h3>
+<h2 id="prerequisites">Prerequisites</h2>
 
 <p>Your system must meet the following prerequisites before you can complete the steps required to connect Tableau 10.2 to Apache Drill:  </p>
 
@@ -1179,18 +1177,14 @@
 <li>MapR Drill ODBC Driver v1.3.0 or later<br></li>
 </ul>
 
-<hr>
-
-<h3 id="required-steps">Required Steps</h3>
+<h2 id="required-steps">Required Steps</h2>
 
 <p>Complete the following steps to use Apache Drill with Tableau 10.2:<br>
 1.  <a href="/docs/using-apache-drill-with-tableau-10-2/#step-1:-install-and-configure-the-mapr-drill-odbc-driver">Install and Configure the MapR Drill ODBC Driver.</a><br>
 2.  <a href="/docs/using-apache-drill-with-tableau-10-2/#step-2:-connect-tableau-to-drill">Connect Tableau to Drill (using the Apache Drill Data Connector).</a><br>
 3.  <a href="/docs/using-apache-drill-with-tableau-10-2/#step-3:-query-and-analyze-the-data">Query and Analyze the Data (various data formats with Tableau and Drill).</a>  </p>
 
-<hr>
-
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide you with easy data exploration capabilities on complex, schema-less data sets. </p>
 
@@ -1208,9 +1202,7 @@
 
 <p><strong>Important:</strong> Verify that the Tableau client system can resolve the hostnames for the Drill and Zookeeper nodes correctly. See the <em>System Requirements</em> section of the ODBC <a href="http://drill.apache.org/docs/installing-the-driver-on-mac-os-x/">Mac</a> or <a href="http://drill.apache.org/docs/installing-the-driver-on-windows/">Windows</a> installation page for instructions.  </p>
 
-<hr>
-
-<h3 id="step-2:-connect-tableau-to-drill">Step 2: Connect Tableau to Drill</h3>
+<h2 id="step-2:-connect-tableau-to-drill">Step 2: Connect Tableau to Drill</h2>
 
 <p>To connect Tableau to Drill, complete the following steps:</p>
 
@@ -1228,9 +1220,7 @@
 
 <p><strong>Note:</strong> Tableau can natively work with Hive tables and Drill views. You can use custom SQL or create a view in Drill to represent the complex data in Drill data sources, such as data in files or HBase/MapR-DB tables, to Tableau. For more information, see <a href="http://drill.apache.org/docs/tableau-examples/">Tableau Examples</a>.  </p>
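 
 <p>As a sketch of the view approach, a Drill view can rename and flatten the fields that Tableau should see; the workspace, file path, and field names below are hypothetical:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; CREATE OR REPLACE VIEW dfs.tmp.customer_view AS
 SELECT t.`name`          AS customer_name,
        t.address.`city`  AS city,
        t.address.`state` AS state
 FROM dfs.`/data/customers.json` t;
 </code></pre></div>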
 
-<hr>
-
-<h3 id="step-3:-query-and-analyze-the-data">Step 3: Query and Analyze the Data</h3>
+<h2 id="step-3:-query-and-analyze-the-data">Step 3: Query and Analyze the Data</h2>
 
 <p>Tableau can now use Drill to query various data sources and visualize the information, as shown in the following example.  </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/using-apache-drill-with-tableau-9-desktop/index.html
----------------------------------------------------------------------
diff --git a/docs/using-apache-drill-with-tableau-9-desktop/index.html b/docs/using-apache-drill-with-tableau-9-desktop/index.html
index 2934813..90287c8 100644
--- a/docs/using-apache-drill-with-tableau-9-desktop/index.html
+++ b/docs/using-apache-drill-with-tableau-9-desktop/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     Apr 5, 2017
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1166,9 +1166,7 @@
 <li>Query and analyze various data formats with Tableau and Drill.</li>
 </ol>
 
-<hr>
-
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience, use the latest release of Apache Drill. For Tableau 9.0 Desktop, Drill Version 0.9 or higher is recommended.</p>
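 
 <p>If you are not sure which release a Drill cluster is running, one quick way to confirm it (sketched here from SQLLine, assuming you can reach the cluster) is to query the <code>sys.version</code> system table, which returns the running Drill version string:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; SELECT version FROM sys.version;
 </code></pre></div>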
 
@@ -1186,9 +1184,7 @@
 
 <p>Also make sure to test the ODBC connection to Drill before using it with Tableau.</p>
 
-<hr>
-
-<h3 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
+<h2 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h2>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau. The MapR Drill ODBC driver installer automatically installs the TDC file if the installer can find the Tableau installation. If you installed the MapR Drill ODBC driver first and then installed Tableau, the TDC file is not installed automatically. You must install the TDC file manually. </p>
 
@@ -1205,9 +1201,7 @@ For example, you can press the SPACEBAR key.</p></li>
 
 <p>If the installation of the TDC file fails, your Tableau repository is likely in a location other than the default one. In this case, manually copy the My Tableau Repository folder to C:\Users\&lt;user&gt;\Documents\My Tableau Repository, and then repeat the procedure to install the MapRDrillODBC.TDC file manually.</p>
 
-<hr>
-
-<h3 id="step-3:-connect-tableau-to-drill-via-odbc">Step 3: Connect Tableau to Drill via ODBC</h3>
+<h2 id="step-3:-connect-tableau-to-drill-via-odbc">Step 3: Connect Tableau to Drill via ODBC</h2>
 
 <p>Complete the following steps to configure an ODBC data connection: </p>
 
@@ -1233,9 +1227,7 @@ Tableau is now connected to Drill, and you can select various tables and views.
 
 <p><strong>Note:</strong> If Drill authentication and impersonation are enabled, only the views that the user has access to are displayed in the Table dialog box. Also, if custom SQL is used to access data sources that the user does not have access to, an error message is displayed. <img src="/docs/img/tableau-error.png" alt="drill query flow"></p>
 
-<hr>
-
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h2 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
 
 <p>Tableau Desktop can now use Drill to query various data sources and visualize the information.</p>
 
@@ -1254,8 +1246,6 @@ The data sources are now configured and ready to be used in the visualization.</
 <li><p>Add a grand total row by clicking <strong>Analysis &gt; Totals &gt; Show Column Grand Totals</strong>. <img src="/docs/img/tableau-desktop-query.png" alt="drill query flow"></p></li>
 </ol>
 
-<hr>
-
 <p>In this quick tutorial, you saw how you can configure Tableau Desktop 9.0 to work with Apache Drill. </p>
 
     

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/using-apache-drill-with-tableau-9-server/index.html
----------------------------------------------------------------------
diff --git a/docs/using-apache-drill-with-tableau-9-server/index.html b/docs/using-apache-drill-with-tableau-9-server/index.html
index b8962af..4497bda 100644
--- a/docs/using-apache-drill-with-tableau-9-server/index.html
+++ b/docs/using-apache-drill-with-tableau-9-server/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1165,9 +1165,7 @@
 <li> Publish Tableau visualizations and data sources from Tableau Desktop to Tableau Server for collaboration.</li>
 </ol>
 
-<hr>
-
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience, use the latest release of Apache Drill. For Tableau 9.0 Server, Drill Version 0.9 or higher is recommended.</p>
 
@@ -1181,13 +1179,11 @@
 <li>If Drill authentication is enabled, select <strong>Basic Authentication</strong> as the authentication type. Enter a valid user and password. <img src="/docs/img/tableau-odbc-setup.png" alt="drill query flow"></li>
 </ol>
 
-<p>Note: If you select <strong>ZooKeeper Quorum</strong> as the ODBC connection type, the client system must be able to resolve the hostnames of the ZooKeeper nodes. The simplest way is to add the hostnames and IP addresses for the ZooKeeper nodes to the <code>%WINDIR%\system32\drivers\etc\hosts</code> file. <img src="/docs/img/tableau-odbc-setup-2.png" alt="drill query flow"></p>
+<p><strong>Note:</strong> If you select <strong>ZooKeeper Quorum</strong> as the ODBC connection type, the client system must be able to resolve the hostnames of the ZooKeeper nodes. The simplest way is to add the hostnames and IP addresses for the ZooKeeper nodes to the <code>%WINDIR%\system32\drivers\etc\hosts</code> file. <img src="/docs/img/tableau-odbc-setup-2.png" alt="drill query flow"></p>
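 
 <p>For example, hypothetical hosts-file entries for a three-node ZooKeeper quorum might look like the following; the hostnames and addresses are placeholders:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">10.10.10.101   zk1.example.com   zk1
 10.10.10.102   zk2.example.com   zk2
 10.10.10.103   zk3.example.com   zk3
 </code></pre></div>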
 
 <p>Also make sure to test the ODBC connection to Drill before using it with Tableau.</p>
 
-<hr>
-
-<h3 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
+<h2 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h2>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau.</p>
 
@@ -1198,13 +1194,11 @@
 
 <p>For more information about Tableau TDC configuration, see <a href="http://kb.tableau.com/articles/knowledgebase/customizing-odbc-connections">Customizing and Tuning ODBC Connections</a></p>
 
-<hr>
-
-<h3 id="step-3:-publish-tableau-visualizations-and-data-sources">Step 3: Publish Tableau Visualizations and Data Sources</h3>
+<h2 id="step-3:-publish-tableau-visualizations-and-data-sources">Step 3: Publish Tableau Visualizations and Data Sources</h2>
 
 <p>For collaboration purposes, you can now use Tableau Desktop to publish data sources and visualizations on Tableau Server.</p>
 
-<h4 id="publishing-visualizations">Publishing Visualizations</h4>
+<h3 id="publishing-visualizations">Publishing Visualizations</h3>
 
 <p>To publish a visualization from Tableau Desktop to Tableau Server:</p>
 
@@ -1219,7 +1213,7 @@
 <li><p>In the Authentication window, select <strong>Embedded Password</strong>, then click <strong>OK</strong>. Then click <strong>Publish</strong> in the Publish Workbook window to publish the visualization to Tableau Server. <img src="/docs/img/tableau-server-authentication.png" alt="drill query flow"></p></li>
 </ol>
 
-<h4 id="publishing-data-sources">Publishing Data Sources</h4>
+<h3 id="publishing-data-sources">Publishing Data Sources</h3>
 
 <p>If all you want to do is publish data sources to Tableau Server, follow these steps:
 1.  Open data source(s) in Tableau Desktop.
@@ -1231,8 +1225,6 @@
 <li><p>In the <strong>Authentication</strong> drop-down list, select <strong>Embedded Password</strong>. Select permissions as needed, then click <strong>Publish</strong>. The data source will now be published on the Tableau Server and is available for building visualizations. <img src="/docs/img/tableau-server-publish-datasource3.png" alt="drill query flow"></p></li>
 </ol>
 
-<hr>
-
 <p>In this quick tutorial, you saw how you can configure Tableau Server 9.0 to work with Tableau Desktop and Apache Drill. </p>
 
     

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/using-information-builders-webfocus-with-apache-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/using-information-builders-webfocus-with-apache-drill/index.html b/docs/using-information-builders-webfocus-with-apache-drill/index.html
index 26664ad..09417f9 100644
--- a/docs/using-information-builders-webfocus-with-apache-drill/index.html
+++ b/docs/using-information-builders-webfocus-with-apache-drill/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1165,13 +1165,11 @@
 <li>(Optional) Create additional Drill connections.<br></li>
 </ol>
 
-<h3 id="prerequisite">Prerequisite</h3>
+<h2 id="prerequisite">Prerequisite</h2>
 
 <p>Drill 1.2 or later</p>
 
-<hr>
-
-<h3 id="step-1:-install-the-apache-drill-jdbc-driver.">Step 1: Install the Apache Drill JDBC driver.</h3>
+<h2 id="step-1:-install-the-apache-drill-jdbc-driver.">Step 1: Install the Apache Drill JDBC driver.</h2>
 
 <p>Drill provides JDBC connectivity that easily integrates with WebFOCUS. See <a href="https://drill.apache.org/docs/using-the-jdbc-driver/">/docs/using-the-jdbc-driver/</a> for general installation steps.  </p>
 
 The following example shows the driver JAR file copied to a directory on a Linux node:
 <code>/usr/lib/drill-1.4.0/jdbc-driver/drill-jdbc-all-1.4.0.jar</code></li>
 </ol>
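 
 <p>When you configure the adapter connection in a later step, it typically uses a standard Drill JDBC connection URL, either through the ZooKeeper quorum or directly to a Drillbit; the hostnames and cluster ID below are placeholders:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">jdbc:drill:zk=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/drill/drillbits1
 jdbc:drill:drillbit=drillnode1.example.com:31010
 </code></pre></div>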
 
-<hr>
-
-<h3 id="step-2:-configure-the-webfocus-adapter-and-connections-to-drill.">Step 2: Configure the WebFOCUS adapter and connections to Drill.</h3>
+<h2 id="step-2:-configure-the-webfocus-adapter-and-connections-to-drill.">Step 2: Configure the WebFOCUS adapter and connections to Drill.</h2>
 
 <ol>
 <li>From a web browser, access the WebFOCUS Management Console. The WebFOCUS administrator provides you with the URL information: <code>http://hostname:port/</code><br>
@@ -1208,9 +1204,7 @@ The Apache Drill adapter appears in the list.<br>
 Now you can use the WebFOCUS adapter and connection or create additional connections.</p></li>
 </ol>
 
-<hr>
-
-<h3 id="(optional)-step-3:-create-additional-drill-connections.">(Optional) Step 3: Create additional Drill connections.</h3>
+<h2 id="(optional)-step-3:-create-additional-drill-connections.">(Optional) Step 3: Create additional Drill connections.</h2>
 
 <p>Complete the following steps to create additional connections:  </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/using-microstrategy-analytics-with-apache-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/using-microstrategy-analytics-with-apache-drill/index.html b/docs/using-microstrategy-analytics-with-apache-drill/index.html
index f6442d6..10103e0 100644
--- a/docs/using-microstrategy-analytics-with-apache-drill/index.html
+++ b/docs/using-microstrategy-analytics-with-apache-drill/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1168,7 +1168,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1204,7 +1204,7 @@
 
 <hr>
 
-<h3 id="step-2:-install-the-drill-object-on-microstrategy-analytics-enterprise">Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise</h3>
+<h2 id="step-2:-install-the-drill-object-on-microstrategy-analytics-enterprise">Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise</h2>
 
 <p>The steps listed in this section are based on the MicroStrategy Technote for installing DBMS objects, which you can reference at: </p>
 
@@ -1237,7 +1237,7 @@
 
 <hr>
 
-<h3 id="step-3:-create-the-microstrategy-database-connection-for-apache-drill">Step 3: Create the MicroStrategy database connection for Apache Drill</h3>
+<h2 id="step-3:-create-the-microstrategy-database-connection-for-apache-drill">Step 3: Create the MicroStrategy database connection for Apache Drill</h2>
 
 <p>Complete the following steps to use the Database Instance Wizard to create the MicroStrategy database connection for Apache Drill:</p>
 
@@ -1256,15 +1256,15 @@
 
 <hr>
 
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h2 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h2>
 
 <p>This step includes an example scenario that shows you how to use MicroStrategy, with Drill as the database instance, to analyze Twitter data stored as complex JSON documents. </p>
 
-<h4 id="scenario">Scenario</h4>
+<h3 id="scenario">Scenario</h3>
 
 <p>The Drill distributed file system plugin is configured to read Twitter data in a directory structure. A view is created in Drill to capture the most relevant maps, nested maps, and arrays from the Twitter JSON documents. Refer to <a href="/docs/query-data-introduction/">Query Data</a> for more information about how to configure and use Drill to work with complex data.</p>
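 
 <p>A simplified sketch of such a view is shown below; the file path and the handful of Twitter fields selected are illustrative assumptions, not the fields used in the actual scenario:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; CREATE OR REPLACE VIEW dfs.tmp.tweets_view AS
 SELECT t.`id`                 AS tweet_id,
        t.`text`               AS tweet_text,
        t.`user`.`screen_name` AS screen_name,
        t.entities.hashtags    AS hashtags
 FROM dfs.`/data/twitter/tweets.json` t;
 </code></pre></div>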
 
-<h4 id="part-1:-create-a-project">Part 1: Create a Project</h4>
+<h3 id="part-1:-create-a-project">Part 1: Create a Project</h3>
 
 <p>Complete the following steps to create a project:</p>
 
@@ -1282,7 +1282,7 @@
 <li> Click <strong>OK</strong>. The new project is created in MicroStrategy Developer. </li>
 </ol>
 
-<h4 id="part-2:-create-a-freeform-report-to-analyze-data">Part 2: Create a Freeform Report to Analyze Data</h4>
+<h3 id="part-2:-create-a-freeform-report-to-analyze-data">Part 2: Create a Freeform Report to Analyze Data</h3>
 
 <p>Complete the following steps to create a Freeform Report and analyze data:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/using-qlik-sense-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/using-qlik-sense-with-drill/index.html b/docs/using-qlik-sense-with-drill/index.html
index 4feca96..7a67978 100644
--- a/docs/using-qlik-sense-with-drill/index.html
+++ b/docs/using-qlik-sense-with-drill/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1169,16 +1169,14 @@
 <li> Analyze data with Qlik Sense and Drill.<br></li>
 </ol>
 
-<p><strong>Prerequisites</strong>  </p>
+<h2 id="prerequisites">Prerequisites</h2>
 
 <ul>
 <li> Apache Drill installed. See <a href="/docs/install-drill/">Install Drill</a>.<br></li>
 <li> Qlik Sense installed. See <a href="http://www.qlik.com/us/explore/products/sense">Qlik Sense</a>.</li>
 </ul>
 
-<hr>
-
-<h3 id="step-1:-install-and-configure-the-drill-odbc-driver">Step 1: Install and Configure the Drill ODBC Driver</h3>
+<h2 id="step-1:-install-and-configure-the-drill-odbc-driver">Step 1: Install and Configure the Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1190,9 +1188,7 @@
 <li><a href="/docs/configuring-odbc-on-windows">Configure ODBC</a>.</li>
 </ol>
 
-<hr>
-
-<h3 id="step-2:-configure-a-connection-in-qlik-sense">Step 2: Configure a Connection in Qlik Sense</h3>
+<h2 id="step-2:-configure-a-connection-in-qlik-sense">Step 2: Configure a Connection in Qlik Sense</h2>
 
 <p>Once you create an ODBC DSN, it shows up as another option when you create a connection from a new or existing Qlik Sense application. The steps for creating a connection from an application are the same in Qlik Sense Desktop and Qlik Sense Server. </p>
 
@@ -1206,9 +1202,7 @@
 <img src="/docs/img/step3_img1.png" alt=""></li>
 </ol>
 
-<hr>
-
-<h3 id="step-3:-authenticate">Step 3: Authenticate</h3>
+<h2 id="step-3:-authenticate">Step 3: Authenticate</h2>
 
 <p>After providing the credentials and saving the connection, click <strong>Select</strong> in the new connection to trigger the authentication against Drill.  </p>
 
@@ -1222,9 +1216,7 @@
 
 <p><img src="/docs/img/step4_img3.png" alt=""></p>
 
-<hr>
-
-<h3 id="step-4:-select-tables-and-load-the-data-model">Step 4: Select Tables and Load the Data Model</h3>
+<h2 id="step-4:-select-tables-and-load-the-data-model">Step 4: Select Tables and Load the Data Model</h2>
 
 <p>Explore the various tables available in Drill, and select the tables of interest. For each table selected, Qlik Sense shows a preview of the logic used for the table.  </p>
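 
 <p>If it is not obvious which Drill schemas and tables will appear in this list, a quick sketch of how to inspect them beforehand from SQLLine (the workspace name is an example):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; SHOW SCHEMAS;
 0: jdbc:drill:zk=local&gt; USE dfs.tmp;
 0: jdbc:drill:zk=local&gt; SHOW TABLES;
 </code></pre></div>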
 
@@ -1257,9 +1249,7 @@
 
 <p><img src="/docs/img/step5_img5.png" alt="">  </p>
 
-<hr>
-
-<h3 id="step-5:-analyze-data-with-qlik-sense-and-drill">Step 5: Analyze Data with Qlik Sense and Drill</h3>
+<h2 id="step-5:-analyze-data-with-qlik-sense-and-drill">Step 5: Analyze Data with Qlik Sense and Drill</h2>
 
 <p>After the data model is loaded into the application, use Qlik Sense to build a wide range of visualizations on top of the data that Drill delivers via ODBC. Qlik Sense specializes in self-service data visualization at the point of decision.  </p>
 
@@ -1269,9 +1259,7 @@
 
 <p><img src="/docs/img/step6_img2.png" alt=""></p>
 
-<hr>
-
-<h3 id="summary">Summary</h3>
+<h2 id="summary">Summary</h2>
 
 <p>Together, Drill and Qlik Sense enable organizations to analyze all of their data and efficiently solve a wide range of business problems.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/edc3f206/docs/using-tibco-spotfire-desktop-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/using-tibco-spotfire-desktop-with-drill/index.html b/docs/using-tibco-spotfire-desktop-with-drill/index.html
index 01cf270..621466e 100644
--- a/docs/using-tibco-spotfire-desktop-with-drill/index.html
+++ b/docs/using-tibco-spotfire-desktop-with-drill/index.html
@@ -1149,7 +1149,7 @@
 
     </div>
 
-     
+     Feb 9, 2018
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1164,9 +1164,7 @@
 <li> Configure the Spotfire Desktop data connection for Drill.</li>
 </ol>
 
-<hr>
-
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h2 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h2>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1184,7 +1182,7 @@
 
 <hr>
 
-<h3 id="step-2:-configure-the-spotfire-desktop-data-connection-for-drill">Step 2: Configure the Spotfire Desktop Data Connection for Drill</h3>
+<h2 id="step-2:-configure-the-spotfire-desktop-data-connection-for-drill">Step 2: Configure the Spotfire Desktop Data Connection for Drill</h2>
 
 <p>Complete the following steps to configure a Drill data connection: </p>