Posted to commits@drill.apache.org by br...@apache.org on 2015/05/22 01:43:56 UTC

drill-site git commit: minor doc updates from kris and bridget

Repository: drill-site
Updated Branches:
  refs/heads/asf-site 78bc53119 -> 7783cd532


minor doc updates from kris and bridget


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/7783cd53
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/7783cd53
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/7783cd53

Branch: refs/heads/asf-site
Commit: 7783cd5326cde1681bf43b7caafadd423fe912c6
Parents: 78bc531
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu May 21 16:43:35 2015 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu May 21 16:43:35 2015 -0700

----------------------------------------------------------------------
 .../index.html                                  |   5 +-
 docs/configuring-user-authentication/index.html |  10 +-
 docs/configuring-user-impersonation/index.html  |  16 +-
 docs/data-type-conversion/index.html            |  31 ++-
 docs/date-time-and-timestamp/index.html         |  43 +++-
 .../index.html                                  |  74 +++++--
 docs/drill-query-execution/index.html           |  10 +-
 docs/embedded-mode-prerequisites/index.html     |   2 +-
 docs/error-messages/index.html                  |   6 +-
 docs/partition-pruning/index.html               |   5 +-
 docs/physical-operators/index.html              |   8 +-
 docs/supported-data-types/index.html            |  13 +-
 docs/tutorials-introduction/index.html          |   2 +-
 docs/useful-research/index.html                 |   2 +-
 docs/workspaces/index.html                      |   2 +-
 feed.xml                                        |  11 +-
 js/script.js                                    | 194 +++++++++----------
 17 files changed, 257 insertions(+), 177 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
index 5a11f19..a83f165 100644
--- a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
+++ b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
@@ -132,9 +132,8 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-
-<p><a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
+    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+<a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">12-17-2014 11:30:00</span>
     <span class="_end">12-17-2014 12:30:00</span>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/configuring-user-authentication/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-authentication/index.html b/docs/configuring-user-authentication/index.html
index 5a3830a..d1195b4 100644
--- a/docs/configuring-user-authentication/index.html
+++ b/docs/configuring-user-authentication/index.html
@@ -974,7 +974,9 @@
 
 <p>If user impersonation is enabled, Drill executes the client requests as the authenticated user. Otherwise, Drill executes client requests as the user that started the Drillbit process. You can enable both authorization and impersonation to improve Drill security. See <a href="/docs/configuring-user-impersonation/">Configuring User Impersonation</a>.</p>
 
-<p>When using PAM for authentication, each user that has permission to run Drill must exist in the list of users that resides on each Drill node in the cluster. The username (including uid) and password for each user must be identical across all of the Drill nodes. </p>
+<p>When using PAM for authentication, each user that has permission to run Drill queries must exist in the list of users that resides on each Drill node in the cluster. The username (including uid) and password for each user must be identical across all of the Drill nodes. </p>
+
+<p>If you use PAM with /etc/passwd for authentication, verify that the users with permission to start the Drill process are part of the shadow user group on all nodes in the cluster. This enables Drill to read the /etc/shadow file for authentication. </p>
 
 <h2 id="user-authentication-process">User Authentication Process</h2>
 
@@ -987,7 +989,7 @@
 
 <p><img src="/docs/img/UserAuth_ODBC_Driver.png" alt="ODBC Driver"></p>
 
-<p>The client passes the username and password to a Drillbit, which then passes the credentials to PAM. If PAM can verify that the user is authorized to access Drill, the user can connect to the Drillbit process from the client and issue queries against the file system or other storage plugins, such as Hive or HBase. However, if PAM cannot verify that the user is authorized to access Drill, the client returns an error.</p>
+<p>The client passes the username and password to a Drillbit as part of the connection request, which then passes the credentials to PAM. If PAM can verify that the user is authorized to access Drill, the connection is successful, and the user can issue queries against the file system or other storage plugins, such as Hive or HBase. However, if PAM cannot verify that the user is authorized to access Drill, the connection is terminated with an AUTH_FAILED error.</p>
 
 <p>The following image illustrates the user authentication process in Drill:</p>
 
@@ -995,7 +997,7 @@
 
 <h3 id="installing-and-configuring-pam">Installing and Configuring PAM</h3>
 
-<p>Install and configure the provided Drill PAM. Drill only supports the PAM provided here.</p>
+<p>Install and configure the provided Drill PAM. Drill only supports the PAM provided here. Optionally, you can build and implement a custom authenticator using the instructions under &quot;Implementing and Configuring a Custom Authenticator.&quot;</p>
 
 <p>Complete the following steps to install and configure PAM for Drill:</p>
 
@@ -1017,7 +1019,7 @@ Example: <code>export DRILLBIT_JAVA_OPTS=&quot; -Djava.library.path=/opt/pam/&qu
    } 
   }
 </code></pre></div></li>
-<li><p>(Optional) To add or remove different PAM profiles, add or delete the profile names in the <code>“pam_profiles”</code> array.  </p></li>
+<li><p>(Optional) To add or remove different PAM profiles, add or delete the profile names in the <code>“pam_profiles”</code> array shown above.  </p></li>
 <li><p>Restart the Drillbit process on each Drill node.</p>
 
 <ul>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/configuring-user-impersonation/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation/index.html b/docs/configuring-user-impersonation/index.html
index 36dabc5..9378f1a 100644
--- a/docs/configuring-user-impersonation/index.html
+++ b/docs/configuring-user-impersonation/index.html
@@ -970,7 +970,7 @@
 
     <div class="int_text" align="left">
       
-        <p>Impersonation allows a service to act on behalf of a client while performing the action requested by the client. By default, user impersonation is disabled in Drill. You can configure user impersonation in the drill-override.conf file.</p>
+        <p>Impersonation allows a service to act on behalf of a client while performing the action requested by the client. By default, user impersonation is disabled in Drill. You can configure user impersonation in the &lt;DRILLINSTALL_HOME&gt;/conf/drill-override.conf file.</p>
 
 <p>When you enable impersonation, Drill executes client requests as the user logged in to the client. Drill passes the user credentials to the file system, and the file system checks to see if the user has permission to access the data. When you enable authentication, Drill uses the pluggable authentication module (PAM) to authenticate a user’s identity before the user can access the Drillbit process. See User Authentication.</p>
 
@@ -981,7 +981,7 @@
 <p>When impersonation is disabled and user Bob issues a query through the SQLLine client, SQLLine passes the query to the connecting Drillbit. The Drillbit executes the query as the system user that started the Drill process on the node. For the purpose of this example, we will assume that the system user has full access to the file system. Drill executes the query and returns the results back to the client.
 <img src="http://i.imgur.com/4XxQK2I.png" alt=""></p>
 
-<p>When impersonation is enabled and user Bob issues a query through the SQLLine client, the Drillbit executes the query against the file system as Bob. The file system checks to see if Bob has permission to access the data. If so, Drill returns the query results to the client. If Bob does not have permission, Drill returns an error.
+<p>When impersonation is enabled and user Bob issues a query through the SQLLine client, the Drillbit uses Bob&#39;s credentials to access data in the file system. The file system checks to see if Bob has permission to access the data. If so, Drill returns the query results to the client. If Bob does not have permission, Drill returns an error.
 <img src="http://i.imgur.com/oigWqVg.png" alt=""></p>
 
 <h2 id="impersonation-support">Impersonation Support</h2>
@@ -996,8 +996,8 @@
   </tr>
   <tr>
     <td>Clients</td>
-    <td>SQLLine ODBC JDBC</td>
-    <td>Drill Web UI REST API</td>
+    <td>SQLLine, ODBC, JDBC</td>
+    <td>Drill Web UI, REST API</td>
   </tr>
   <tr>
     <td>Storage Plugins</td>
@@ -1015,7 +1015,7 @@
 
 <p>You can use views with impersonation to provide granular access to data and protect sensitive information. When you create a view, Drill stores the view definition in a file and suffixes the file with .drill.view. For example, if you create a view named myview, Drill creates a view file named myview.drill.view and saves it in the current workspace or the workspace specified, such as dfs.views.myview. See <a href="/docs/create-view">CREATE VIEW</a> Command.</p>
 
-<p>You can create a view and grant read permissions on the view to give other users access to the data that the view references. When a user queries the view, Drill impersonates the view owner to access the underlying data. A user with read access to a view can create new views from the originating view to further restrict access on data.</p>
+<p>You can create a view and grant read permissions on the view to give other users access to the data that the view references. When a user queries the view, Drill impersonates the view owner to access the underlying data. If the user tries to access the data directly, Drill returns a permission denied error. A user with read access to a view can create new views from the originating view to further restrict access on data.</p>
 
 <h3 id="view-permissions">View Permissions</h3>
 
@@ -1049,7 +1049,7 @@ ALTER SYSTEM SET `new_view_default_permissions` = &#39;&lt;octal_code&gt;&#39;;
 
 <p>You can configure Drill to allow chained impersonation on views when you enable impersonation in the <code>drill-override.conf</code> file. Chained impersonation controls the number of identity transitions that Drill can make when a user queries a view. Each identity transition is equal to one hop.</p>
 
-<p>You can set the maximum number of hops on views to limit the number of times that Drill can impersonate a different user when a user queries a view. The default maximum number of hops is set at 3. When the maximum number of hops is set to 0, Drill does not allow impersonation chaining, and a user can only read data for which they have direct permission to access. You may set chain length to 0 to protect highly sensitive data. </p>
+<p>You can set the maximum number of hops on views to limit the number of times that Drill can impersonate a different user when a user queries a view. The default maximum number of hops is set at 3. When the maximum number of hops is set to 0, Drill does not allow impersonation chaining, and a user can only read data for which they have direct permission to access. An administrator may set the chain length to 0 to protect highly sensitive data. Only an administrator can change this setting.</p>
 
 <p>The following example depicts a scenario where the maximum hop number is set to 3, and Drill must impersonate three users to access data when Chad queries a view that Jane created:</p>
 
@@ -1106,13 +1106,13 @@ ALTER SYSTEM SET `new_view_default_permissions` = &#39;&lt;octal_code&gt;&#39;;
 <p>Each record in the employees table consists of the following information:
 emp_id, emp_name, emp_ssn, emp_salary, emp_addr, emp_phone, emp_mgr</p>
 
-<p>Frank needs to share a subset of this information with Joe who is an HR manager reporting to Frank. To share the employee data, Frank creates a view called emp_mgr_view that accesses a subset of the data. The emp_mgr_view filters out sensitive employee information, such as the employee social security numbers, and only shows data for the employees that report directly to Joe or the manager running the query on the view. Frank and Joe both belong to the mgr group. Managers have read permission on Frank’s directory.</p>
+<p>Frank needs to share a subset of this information with Joe who is an HR manager reporting to Frank. To share the employee data, Frank creates a view called emp_mgr_view that accesses a subset of the data. The emp_mgr_view filters out sensitive employee information, such as the employee social security numbers, and only shows data for the employees that report directly to Joe. Frank and Joe both belong to the mgr group. Managers have read permission on Frank’s directory.</p>
 
 <p>rwxr-----     frank:mgr   /user/frank/emp_mgr_view.drill.view</p>
 
 <p>The emp_mgr_view.drill.view file contains the following view definition:</p>
 
-<p>(view definition: SELECT emp_id, emp_name, emp_salary, emp_addr, emp_phone FROM `/user/frank/employee` WHERE emp_mgr = user())</p>
+<p>(view definition: SELECT emp_id, emp_name, emp_salary, emp_addr, emp_phone FROM `/user/frank/employee` WHERE emp_mgr = &#39;Joe&#39;)</p>
 
 <p>When Joe issues SELECT * FROM emp_mgr_view, Drill impersonates Frank when accessing the employee data, and the query returns the data that Joe has permission to see based on the view definition. The query results do not include any sensitive data because the view protects that information. If Joe tries to query the employees table directly, Drill returns an error or null values.</p>
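
[Editor's sketch, not part of the commit: expressed as runnable Drill SQL, the scenario above
amounts to Frank creating the view and Joe querying it. The workspace name dfs.frank is an
assumption for illustration, a writable workspace mapped to /user/frank.]

    -- Frank creates the view; Drill writes /user/frank/emp_mgr_view.drill.view
    CREATE VIEW dfs.frank.emp_mgr_view AS
      SELECT emp_id, emp_name, emp_salary, emp_addr, emp_phone
      FROM `/user/frank/employee`
      WHERE emp_mgr = 'Joe';

    -- Joe queries the view; Drill impersonates Frank to read the underlying data
    SELECT * FROM dfs.frank.emp_mgr_view;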
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/data-type-conversion/index.html
----------------------------------------------------------------------
diff --git a/docs/data-type-conversion/index.html b/docs/data-type-conversion/index.html
index 6859548..91aa13d 100644
--- a/docs/data-type-conversion/index.html
+++ b/docs/data-type-conversion/index.html
@@ -1063,15 +1063,18 @@ SELECT CAST(456 as CHAR(3)) FROM sys.version;
 
 <h3 id="casting-intervals">Casting Intervals</h3>
 
-<p>To cast interval data to the INTERVALDAY or INTERVALYEAR types use the following syntax:</p>
+<p>To cast interval data from a data source, such as JSON, to the INTERVAL DAY, INTERVAL YEAR, and INTERVAL SECOND types, use the following syntax, respectively:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">CAST (column_name AS INTERVAL DAY)
 CAST (column_name AS INTERVAL YEAR)
+CAST (column_name AS INTERVAL SECOND)
 </code></pre></div>
-<p>For example, a JSON file named intervals.json contains the following objects:</p>
+<p>For example, a JSON file named <code>intervals.json</code> contains the following objects:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{ &quot;INTERVALYEAR_col&quot;:&quot;P1Y&quot;, &quot;INTERVALDAY_col&quot;:&quot;P1D&quot;, &quot;INTERVAL_col&quot;:&quot;P1Y1M1DT1H1M&quot; }
 { &quot;INTERVALYEAR_col&quot;:&quot;P2Y&quot;, &quot;INTERVALDAY_col&quot;:&quot;P2D&quot;, &quot;INTERVAL_col&quot;:&quot;P2Y2M2DT2H2M&quot; }
 { &quot;INTERVALYEAR_col&quot;:&quot;P3Y&quot;, &quot;INTERVALDAY_col&quot;:&quot;P3D&quot;, &quot;INTERVAL_col&quot;:&quot;P3Y3M3DT3H3M&quot; }
 </code></pre></div>
+<p>Create a table in Parquet from the interval data in the <code>intervals.json</code> file.</p>
+
 <ol>
 <li><p>Set the storage format to Parquet.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `store.format` = &#39;parquet&#39;;
@@ -1084,13 +1087,27 @@ CAST (column_name AS INTERVAL YEAR)
 1 row selected (0.072 seconds)
 </code></pre></div></li>
 <li><p>Use a CTAS statement to cast text from a JSON file to year and day intervals and to write the data to a Parquet table:</p>
-
-<p>CREATE TABLE dfs.tmp.parquet_intervals AS 
-(SELECT CAST( INTERVALYEAR_col as interval year) INTERVALYEAR_col, 
-        CAST( INTERVALDAY_col as interval day) INTERVALDAY_col 
-FROM dfs.<code>/Users/drill/intervals.json</code>);</p></li>
+<div class="highlight"><pre><code class="language-text" data-lang="text">CREATE TABLE dfs.tmp.parquet_intervals AS 
+(SELECT CAST( INTERVALYEAR_col as INTERVAL YEAR) INTERVALYEAR_col, 
+        CAST( INTERVALDAY_col as INTERVAL DAY) INTERVALDAY_col,
+        CAST( INTERVAL_col as INTERVAL SECOND) INTERVAL_col 
+FROM dfs.`/Users/drill/intervals.json`);
+</code></pre></div></li>
+<li><p>Take a look at what Drill wrote to the Parquet file:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM dfs.`tmp`.parquet_intervals;
++-------------------+------------------+---------------+
+| INTERVALYEAR_col  | INTERVALDAY_col  | INTERVAL_col  |
++-------------------+------------------+---------------+
+| P12M              | P1D              | P1DT3660S     |
+| P24M              | P2D              | P2DT7320S     |
+| P36M              | P3D              | P3DT10980S    |
++-------------------+------------------+---------------+
+3 rows selected (0.082 seconds)
+</code></pre></div></li>
 </ol>
 
+<p>Because you cast the INTERVAL_col to INTERVAL SECOND, Drill returns the day-time portion of the interval data: the day plus the hour and minute expressed as seconds, as the output above shows. </p>
+
 <h2 id="convert_to-and-convert_from">CONVERT_TO and CONVERT_FROM</h2>
 
 <p>The CONVERT_TO and CONVERT_FROM functions encode and decode
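
[Editor's aside on the interval example above, not part of the commit: once the intervals are
stored as true interval types in Parquet, they should feed date arithmetic directly, without
the CAST that the JSON source required. A sketch:]

    -- INTERVALDAY_col is already interval-typed in the Parquet table created above
    SELECT DATE_ADD(TIMESTAMP '2015-04-15 22:55:55', INTERVALDAY_col)
    FROM dfs.tmp.parquet_intervals;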

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/date-time-and-timestamp/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-and-timestamp/index.html b/docs/date-time-and-timestamp/index.html
index cc61a42..dc30be8 100644
--- a/docs/date-time-and-timestamp/index.html
+++ b/docs/date-time-and-timestamp/index.html
@@ -972,13 +972,27 @@
 
     <div class="int_text" align="left">
       
-        <p>Using familiar date and time formats, listed in the <a href="/docs/data-types/data-types">SQL data types table</a>, you can construct query date and time data. You need to cast textual data to date and time data types. The format of date, time, and timestamp text in a textual data source needs to match the SQL query format for successful casting. </p>
-
+        <p>Using familiar date and time formats, listed in the <a href="/docs/supported-data-types">SQL data types table</a>, you can construct queries on date and time data. You need to cast textual data to date and time data types. The format of date, time, and timestamp text in a textual data source needs to match the SQL query format for successful casting. Drill supports the date, time, timestamp, and interval literals shown in the following example:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE &#39;2008-2-23&#39;, 
+       TIME &#39;12:23:34&#39;, 
+       TIMESTAMP &#39;2008-2-23 12:23:34.456&#39;, 
+       INTERVAL &#39;1&#39; YEAR, INTERVAL &#39;2&#39; DAY, 
+       DATE_ADD(DATE &#39;2008-2-23&#39;, INTERVAL &#39;1 10:20:30&#39; DAY TO SECOND), 
+       DATE_ADD(DATE &#39;2010-2-23&#39;, 1)
+FROM sys.version LIMIT 1;
++-------------+-----------+--------------------------+---------+---------+------------------------+-------------+
+|   EXPR$0    |  EXPR$1   |          EXPR$2          | EXPR$3  | EXPR$4  |         EXPR$5         |   EXPR$6    |
++-------------+-----------+--------------------------+---------+---------+------------------------+-------------+
+| 2008-02-23  | 12:23:34  | 2008-02-23 12:23:34.456  | P1Y     | P2D     | 2008-02-24 10:20:30.0  | 2010-02-24  |
++-------------+-----------+--------------------------+---------+---------+------------------------+-------------+
+</code></pre></div>
 <h2 id="intervalyear-and-intervalday">INTERVALYEAR and INTERVALDAY</h2>
 
 <p>The INTERVALYEAR and INTERVALDAY types represent a period of time. The INTERVALYEAR type specifies values from a year to a month. The INTERVALDAY type specifies values from a day to seconds.</p>
 
-<p>If your interval data is in the data source, for example a JSON file, cast the JSON VARCHAR types to intervalyear and intervalday using the folliwng ISO 8601 syntax:</p>
+<h3 id="interval-in-data-source">Interval in Data Source</h3>
+
+<p>If your interval data is in the data source, for example a JSON file, cast the JSON VARCHAR types to INTERVALYEAR and INTERVALDAY using the following ISO 8601 syntax:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">P [qty] Y [qty] M [qty] D T [qty] H [qty] M [qty] S
 
 P [qty] D T [qty] H [qty] M [qty] S
@@ -997,7 +1011,9 @@ P [qty] Y [qty] M
 <li>S follows a number of seconds and optional milliseconds to the right of a decimal point</li>
 </ul>
 
-<p>If your input is interval data, use the following SQL literals to restrict the set of stored interval fields:</p>
+<h3 id="interval-literal">Interval Literal</h3>
+
+<p>You can use INTERVAL as a keyword that introduces an interval literal that denotes a data type. When your input is interval data, use the following SQL literals to restrict the set of stored interval fields:</p>
 
 <ul>
 <li>YEAR</li>
@@ -1015,6 +1031,21 @@ P [qty] Y [qty] M
 <li>MINUTE TO SECOND</li>
 </ul>
 
+<h3 id="interval-in-a-data-source-example">Interval in a Data Source Example</h3>
+
+<p>For an example of casting interval data that you query from a data source, such as JSON, to interval types, see the section <a href="/docs/data-type-conversion/#casting-intervals">&quot;Casting Intervals&quot;</a>.</p>
+
+<h3 id="literal-interval-exampls">Literal Interval Exampls</h3>
+
+<p>In the following example, the INTERVAL keyword followed by 200 adds 200 years to the timestamp. The parenthesized 3 in <code>YEAR(3)</code> specifies the precision of the year interval, 3 digits in this case to support the hundreds interval.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT CURRENT_TIMESTAMP + INTERVAL &#39;200&#39; YEAR(3) FROM sys.version;
++--------------------------+
+|          EXPR$0          |
++--------------------------+
+| 2215-05-20 14:04:25.129  |
++--------------------------+
+1 row selected (0.148 seconds)
+</code></pre></div>
 <p>The following examples show the input and output format of INTERVALYEAR (Year, Month) and INTERVALDAY (Day, Hours, Minutes, Seconds, Milliseconds). The following SELECT statements show how to format the query input. The output shows how to format the data in the data source.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT INTERVAL &#39;1 10:20:30.123&#39; day to second FROM sys.version;
 +------------+
@@ -1048,11 +1079,9 @@ SELECT INTERVAL &#39;13&#39; month FROM sys.version;
 +------------+
 1 row selected (0.076 seconds)
 </code></pre></div>
-<p>For information about casting interval data, see the <a href="/docs/data-type-conversion#cast">&quot;CAST&quot;</a> function.</p>
-
 <h2 id="date,-time,-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
 
-<p>DATE, TIME, and TIMESTAMP store values in Coordinated Universal Time (UTC). Drill supports time functions in the range 1971 to 2037.</p>
+<p>Drill stores DATE, TIME, and TIMESTAMP values in Coordinated Universal Time (UTC). Drill supports time functions in the range 1971 to 2037.</p>
 
 <p>Drill does not support TIMESTAMP with time zone; however, if your data includes the time zone, use the <a href="/docs/casting/converting-data-types#to_timestamp">TO_TIMESTAMP function</a> and <a href="/docs/data-type-conversion/#format-specifiers-for-date/time-conversions">Joda format specifiers</a> as shown the examples in section, <a href="/docs/data-type-conversion/#time-zone-limitation">&quot;Time Zone Limitation&quot;</a>.</p>
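
[Editor's aside, not part of the commit: following the linked "Time Zone Limitation" examples,
a string that carries a time zone can be converted with TO_TIMESTAMP and a Joda pattern,
along these lines:]

    -- 'z' in the Joda pattern consumes the time zone token in the input string
    SELECT TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'yyyy-MM-dd HH:mm:ss.S z')
    FROM sys.version;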
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/date-time-functions-and-arithmetic/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-functions-and-arithmetic/index.html b/docs/date-time-functions-and-arithmetic/index.html
index f36b9f2..0aa592f 100644
--- a/docs/date-time-functions-and-arithmetic/index.html
+++ b/docs/date-time-functions-and-arithmetic/index.html
@@ -1047,7 +1047,7 @@
 
 <h3 id="age-examples">AGE Examples</h3>
 
-<p>Find the interval between midnight April 3, 2015 and June 13, 1957.</p>
+<p>Find the interval between midnight today, April 3, 2015, and June 13, 1957.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT AGE(&#39;1957-06-13&#39;) FROM sys.version;
 +------------+
 |   EXPR$0   |
@@ -1056,6 +1056,16 @@
 +------------+
 1 row selected (0.064 seconds)
 </code></pre></div>
+<p>Find the interval between midnight today, May 21, 2015, and the hire dates of employees 578 and 761 in the employee.json file included with the Drill installation.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT AGE(CAST(hire_date AS TIMESTAMP)) FROM cp.`employee.json` where employee_id IN( &#39;578&#39;,&#39;761&#39;);
++------------------+
+|      EXPR$0      |
++------------------+
+| P236MT25200S     |
+| P211M19DT25200S  |
++------------------+
+2 rows selected (0.121 seconds)
+</code></pre></div>
 <p>Find the interval between 11:10:10 PM on January 1, 2001 and 10:10:10 PM on January 1, 2001.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT AGE(CAST(&#39;2010-01-01 10:10:10&#39; AS TIMESTAMP), CAST(&#39;2001-01-01 11:10:10&#39; AS TIMESTAMP)) FROM sys.version;
 +------------------+
@@ -1090,9 +1100,6 @@ DATE_ADD(column &lt;date type&gt;)
 
 <h3 id="date_add-examples">DATE_ADD Examples</h3>
 
-<p>Add two days to the birthday column in the genealogy database.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_ADD (CAST (birthdays AS date), 2) from genealogy.json;
-</code></pre></div>
 <p>Add two days to today&#39;s date May 15, 2015.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_ADD(date &#39;2015-05-15&#39;, 2) FROM sys.version;
 +------------+
@@ -1102,32 +1109,47 @@ DATE_ADD(column &lt;date type&gt;)
 +------------+
 1 row selected (0.07 seconds)
 </code></pre></div>
-<p>Add two months to April 15, 2015.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_ADD(date &#39;2015-04-15&#39;, interval &#39;2&#39; month) FROM sys.version;
+<p>Using the example data from the <a href="/docs/data-type-conversion/#casting-intervals">&quot;Casting Intervals&quot;</a> section, add the intervals in the <code>intervals.json</code> file to a literal timestamp. The interval expression in the following query casts the interval data in the intervals.json file to the INTERVAL SECOND type:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_ADD(timestamp &#39;2015-04-15 22:55:55&#39;, CAST(INTERVALDAY_col as interval second)) FROM dfs.`/Users/drilluser/apache-drill-1.0.0/intervals.json`;
 +------------------------+
 |         EXPR$0         |
 +------------------------+
-| 2015-06-15 00:00:00.0  |
+| 2015-04-16 22:55:55.0  |
+| 2015-04-17 22:55:55.0  |
+| 2015-04-18 22:55:55.0  |
 +------------------------+
-1 row selected (0.107 seconds)
+3 rows selected (0.105 seconds)
 </code></pre></div>
-<p>Add 10 hours to the timestamp 2015-04-15 22:55:55.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_ADD(timestamp &#39;2015-04-15 22:55:55&#39;, interval &#39;10&#39; hour) FROM sys.version;
+<p>The query returns the timestamp plus 1, 2, and 3 days because the INTERVALDAY_col contains P1D, P2D, and P3D. </p>
+
+<p>The Drill installation includes the <code>employee.json</code> file that has records of employee hire dates:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM cp.`employee.json` LIMIT 1;
++--------------+---------------+-------------+------------+--------------+-----------------+-----------+----------------+-------------+------------------------+----------+----------------+------------------+-----------------+---------+--------------------+
+| employee_id  |   full_name   | first_name  | last_name  | position_id  | position_title  | store_id  | department_id  | birth_date  |       hire_date        |  salary  | supervisor_id  | education_level  | marital_status  | gender  |  management_role   |
++--------------+---------------+-------------+------------+--------------+-----------------+-----------+----------------+-------------+------------------------+----------+----------------+------------------+-----------------+---------+--------------------+
+| 1            | Sheri Nowmer  | Sheri       | Nowmer     | 1            | President       | 0         | 1              | 1961-08-26  | 1994-12-01 00:00:00.0  | 80000.0  | 0              | Graduate Degree  | S               | F       | Senior Management  |
++--------------+---------------+-------------+------------+--------------+-----------------+-----------+----------------+-------------+------------------------+----------+----------------+------------------+-----------------+---------+--------------------+
+1 row selected (0.137 seconds)
+</code></pre></div>
+<p>Look at the hire_date values for employees 578 and 761 in <code>employee.json</code>.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT hire_date FROM cp.`employee.json` where employee_id IN( &#39;578&#39;,&#39;761&#39;);
 +------------------------+
-|         EXPR$0         |
+|       hire_date        |
 +------------------------+
-| 2015-04-16 08:55:55.0  |
+| 1996-01-01 00:00:00.0  |
+| 1998-01-01 00:00:00.0  |
 +------------------------+
-1 row selected (0.199 seconds)
+2 rows selected (0.135 seconds)
 </code></pre></div>
-<p>Add 10 hours to the time 22 hours, 55 minutes, 55 seconds.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_ADD(time &#39;22:55:55&#39;, interval &#39;10&#39; hour) FROM sys.version;
-+------------+
-|   EXPR$0   |
-+------------+
-| 08:55:55   |
-+------------+
-1 row selected (0.085 seconds)
+<p>Cast the hire_date values of employees 578 and 761 to a timestamp, and add 10 hours to the hire_date timestamp. Because Drill reads data from JSON as VARCHAR, you need to cast the hire_date to the TIMESTAMP type. </p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_ADD(CAST(hire_date AS TIMESTAMP), interval &#39;10&#39; hour) FROM cp.`employee.json` where employee_id IN( &#39;578&#39;,&#39;761&#39;);
++------------------------+
+|         EXPR$0         |
++------------------------+
+| 1996-01-01 10:00:00.0  |
+| 1998-01-01 10:00:00.0  |
++------------------------+
+2 rows selected (0.172 seconds)
 </code></pre></div>
 <p>Add 1 year and 1 month to the timestamp 2015-04-15 22:55:55.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_ADD(timestamp &#39;2015-04-15 22:55:55&#39;, interval &#39;1-2&#39; year to month) FROM sys.version;
@@ -1210,6 +1232,16 @@ DATE_ADD(column &lt;date type&gt;)
 
 <h3 id="date_sub-examples">DATE_SUB Examples</h3>
 
+<p>Cast the hire_date values of employees 578 and 761 to a timestamp, and subtract 10 hours from the hire_date timestamp. Because Drill reads data from JSON as VARCHAR, you need to cast the hire_date to the TIMESTAMP type. </p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_SUB(CAST(hire_date AS TIMESTAMP), interval &#39;10&#39; hour) FROM cp.`employee.json` WHERE employee_id IN( &#39;578&#39;,&#39;761&#39;);
++------------------------+
+|         EXPR$0         |
++------------------------+
+| 1995-12-31 14:00:00.0  |
+| 1997-12-31 14:00:00.0  |
++------------------------+
+2 rows selected (0.161 seconds)
+</code></pre></div>
 <p>Subtract two days from today&#39;s date, May 15, 2015.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE_SUB(date &#39;2015-05-15&#39;, 2) FROM sys.version;
 +------------+

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/drill-query-execution/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-query-execution/index.html b/docs/drill-query-execution/index.html
index bcd31b2..16dc80d 100644
--- a/docs/drill-query-execution/index.html
+++ b/docs/drill-query-execution/index.html
@@ -984,7 +984,7 @@
 
 <p>A parallelizer in the Foreman transforms the physical plan into multiple phases, called major and minor fragments. These fragments create a multi-level execution tree that rewrites the query and executes it in parallel against the configured data sources, sending the results back to the client or application.</p>
 
-<p><img src="/docs/img/execution-tree.png" alt="">  </p>
+<p><img src="/docs/img/execution-tree.PNG" alt="">  </p>
 
 <h2 id="major-fragments">Major Fragments</h2>
 
@@ -1010,13 +1010,13 @@
 
 <p>Drill executes each minor fragment in its own thread as quickly as possible based on its upstream data requirements. Drill schedules the minor fragments on nodes with data locality. Otherwise, Drill schedules them in a round-robin fashion on the existing, available Drillbits.</p>
 
-<p>Minor fragments contain one or more relational operators. An operator performs a relational operation, such as scan, filter, join, or group by. Each operator has a particular operator type and an OperatorID. Each OperatorID defines its relationship within the minor fragment to which it belongs.  </p>
+<p>Minor fragments contain one or more relational operators. An operator performs a relational operation, such as scan, filter, join, or group by. Each operator has a particular operator type and an OperatorID. Each OperatorID defines its relationship within the minor fragment to which it belongs. See <a href="/docs/physical-operators/">Physical Operators</a>.</p>
 
 <p><img src="/docs/img/operators.png" alt=""></p>
 
 <p>For example, when performing a hash aggregation of two files, Drill breaks the first phase dedicated to scanning into two minor fragments. Each minor fragment contains scan operators that scan the files. Drill breaks the second phase dedicated to aggregation into four minor fragments. Each of the four minor fragments contains hash aggregate operators that perform the hash aggregation operations on the data. </p>
 
-<p>You cannot modify the number of minor fragments within the execution plan. However, you can view the query profile in the Drill Web UI and modify some configuration options that change the behavior of minor fragments, such as the maximum number of slices. See <a href="/docs/configuration-options-introduction/">Configuration Options</a> for more information.</p>
+<p>You cannot modify the number of minor fragments within the execution plan. However, you can view the query profile in the Drill Web UI and modify some configuration options that change the behavior of minor fragments, such as the maximum number of slices. See <a href="/docs/configuration-options-introduction/">Configuration Options</a>.</p>
 
 <h3 id="execution-of-minor-fragments">Execution of Minor Fragments</h3>
 
@@ -1026,9 +1026,9 @@
 
 <p>Intermediate fragments start work when data is available or fed to them from other fragments. They perform operations on the data and then send the data downstream. They also pass the aggregated results to the root fragment, which performs further aggregation and provides the query results to the client or application.</p>
 
-<p>The leaf fragments scan tables in parallel and communicate with the storage layer or access data on local disk. The leaf fragments pass partial results to the intermediate fragments, which perform parallel operations on intermediate results.</p>
+<p>The leaf fragments scan tables in parallel and communicate with the storage layer or access data on local disk. The leaf fragments pass partial results to the intermediate fragments, which perform parallel operations on intermediate results.  </p>
 
-<p><img src="/docs/leaf-frag.png" alt=""></p>
+<p><img src="/docs/img/leaf-frag.png" alt="">    </p>
 
 <p>Drill only plans queries that have concurrent running fragments. For example, if 20 available slices exist in the cluster, Drill plans a query that runs no more than 20 minor fragments in a particular major fragment. Drill is optimistic and assumes that it can complete all of the work in parallel. All minor fragments for a particular major fragment start at the same time based on their upstream data dependency.</p>
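
[Editor's aside, not part of the commit: you can inspect the major fragments and operators
described above by asking Drill for the physical plan of a query, for example:]

    -- the plan output groups operators by major fragment
    EXPLAIN PLAN FOR SELECT * FROM cp.`employee.json` LIMIT 1;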
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/embedded-mode-prerequisites/index.html
----------------------------------------------------------------------
diff --git a/docs/embedded-mode-prerequisites/index.html b/docs/embedded-mode-prerequisites/index.html
index dd4cc9f..fe31faa 100644
--- a/docs/embedded-mode-prerequisites/index.html
+++ b/docs/embedded-mode-prerequisites/index.html
@@ -987,7 +987,7 @@ running Linux, Mac OS X, or Windows.</p>
 <li>Windows only:<br>
 
 <ul>
-<li>A JAVA_HOME environment variable set up that points to  to the JDK installation<br></li>
+<li>A JAVA_HOME environment variable set up that points to the JDK installation<br></li>
 <li>A PATH environment variable that includes a pointer to the JDK installation<br></li>
 <li>A third-party utility for unzipping a tar.gz file </li>
 </ul></li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/error-messages/index.html
----------------------------------------------------------------------
diff --git a/docs/error-messages/index.html b/docs/error-messages/index.html
index 03fc2a9..e44a2bd 100644
--- a/docs/error-messages/index.html
+++ b/docs/error-messages/index.html
@@ -978,7 +978,7 @@
 <li>Connection reset by peer</li>
 </ul>
 
-<p>These issues typically result from a problem outside of the query process. However, if you encounter a java.lang.OutOfMemoryError error, take action and give Drill as much memory as possible to resolve the issue. See Configuring Drill Memory.</p>
+<p>These issues typically result from a problem outside of the query process. However, if you encounter a java.lang.OutOfMemoryError error, take action and give Drill as much memory as possible to resolve the issue. See <a href="/docs/configuring-drill-memory/">Configuring Drill Memory</a>.</p>
 
 <p>Drill assigns an ErrorId to each error that occurs. An ErrorID is a unique identifier for a particular error that tells you which node assigned the error. For example,
 [ 1ee8e004-2fce-420f-9790-5c6f8b7cad46 on 10.1.1.109:31010 ]. You can log into the node that assigned the error and grep the Drill log for the ErrorId to get more information about the error.</p>
@@ -995,7 +995,7 @@
 </thead><tbody>
 <tr>
 <td>QueryID</td>
-<td>The identifier assigned to the query. You can locate a query in Drill Web UI by the QueryID and then cancel the query if needed. See Query Profiles for more information.</td>
+<td>The identifier assigned to the query. You can locate a query in Drill Web UI by the QueryID and then cancel the query if needed. See <a href="/docs/query-profiles/">Query Profiles</a>.</td>
 </tr>
 <tr>
 <td>MajorFragmentID</td>
@@ -1003,7 +1003,7 @@
 </tr>
 <tr>
 <td>MinorFragmentID</td>
-<td>The identifier assigned to the minor fragment. Minor fragments map to the parallelization of major fragments. See Query Profiles for more information.</td>
+<td>The identifier assigned to the minor fragment. Minor fragments map to the parallelization of major fragments. See <a href="/docs/query-profiles">Query Profiles</a>.</td>
 </tr>
 </tbody></table>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/partition-pruning/index.html
----------------------------------------------------------------------
diff --git a/docs/partition-pruning/index.html b/docs/partition-pruning/index.html
index 6f47398..633e173 100644
--- a/docs/partition-pruning/index.html
+++ b/docs/partition-pruning/index.html
@@ -980,8 +980,9 @@
 
 <p>Partitioning data requires you to determine a partitioning scheme, or a logical way to store the data in a hierarchy of directories. You can then use CTAS to create Parquet files from the original data, specifying filter conditions, and then move the files into the correlating directories in the hierarchy. Once you have partitioned the data, you can create and query views on the data.</p>
 
-<p>Partitioning Example
-For example, if you have several text files with log data which span multiple years, and you want to partition the data by year and quarter, you could create the following hierarchy of directories:  </p>
+<h3 id="partitioning-example">Partitioning Example</h3>
+
+<p>If you have several text files with log data which span multiple years, and you want to partition the data by year and quarter, you could create the following hierarchy of directories:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   …/logs/1994/Q1  
    …/logs/1994/Q2  
    …/logs/1994/Q3  
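
[Editor's aside, not part of the commit: once the files are laid out this way, a filter on
Drill's implicit directory columns dir0 and dir1 lets the planner prune the scan to the
matching directories; the /logs root below stands in for the elided base path. A sketch:]

    -- dir0 = first directory level (year), dir1 = second level (quarter)
    SELECT * FROM dfs.`/logs` WHERE dir0 = '1994' AND dir1 = 'Q1';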

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/physical-operators/index.html
----------------------------------------------------------------------
diff --git a/docs/physical-operators/index.html b/docs/physical-operators/index.html
index 2e31abe..c2840bf 100644
--- a/docs/physical-operators/index.html
+++ b/docs/physical-operators/index.html
@@ -1165,13 +1165,13 @@
 
 <table><thead>
 <tr>
-<th>PartitionSender</th>
-<th></th>
+<th>Operator</th>
+<th>Description</th>
 </tr>
 </thead><tbody>
 <tr>
-<td>The PartitionSender operator maintains a queue for each outbound destination.  May be either the number of outbound minor fragments or the number of the nodes</td>
-<td>depending on the use of muxxing operations.  Each queue may store up to 3 record batches for each destination.</td>
+<td>PartitionSender</td>
+<td>The PartitionSender operator maintains a queue for each outbound destination.  May be either the number of outbound minor fragments or the number of the nodes, depending on the use of muxxing operations.  Each queue may store up to 3 record batches for each destination.</td>
 </tr>
 </tbody></table>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/supported-data-types/index.html
----------------------------------------------------------------------
diff --git a/docs/supported-data-types/index.html b/docs/supported-data-types/index.html
index 547e10f..9e7d960 100644
--- a/docs/supported-data-types/index.html
+++ b/docs/supported-data-types/index.html
@@ -1074,15 +1074,13 @@
 <p>Drill supports the following composite types:</p>
 
 <ul>
-<li>Array</li>
 <li>Map</li>
+<li>Array</li>
 </ul>
 
 <p>A map is a set of name/value pairs. A value in a map can be a scalar type, such as string or int, or a complex type, such as an array or another map. An array is a repeated list of values. A value in an array can be a scalar type, such as string or int, or a complex type, such as a map or another array.</p>
 
-<p>Drill uses map and array data types internally for reading complex and nested data structures from data sources. For more information, see examples of <a href="/docs/handling-different-data-types/#handling-json-and-parquet-data">handling JSON maps and arrays</a>. </p>
-
-<p>In this release of Drill, you cannot reference a composite type by name in a query, but Drill supports array values coming from data sources. For example, you can use the index syntax to query data and get the value of an array element:  </p>
+<p>Drill uses map and array data types internally for reading complex and nested data structures from data sources. In this release of Drill, you cannot reference a composite type by name in a query, but Drill supports array values coming from data sources. For example, you can use the index syntax to query data and get the value of an array element:  </p>
 
 <p><code>a[1]</code>  </p>
 
@@ -1090,9 +1088,12 @@
 
 <p><code>m[&#39;k&#39;]</code></p>
 
-<p>The section <a href="/docs/querying-complex-data-introduction">“Query Complex Data”</a> show how to use <a href="/docs/supported-data-types/#composite-types">composite types</a> to access nested arrays.</p>
+<p>The section <a href="/docs/querying-complex-data-introduction">“Query Complex Data”</a> shows how to use <a href="/docs/supported-data-types/#composite-types">composite types</a> to access nested arrays. <a href="/docs/handling-different-data-types/#handling-json-and-parquet-data">&quot;Handling Different Data Types&quot;</a> includes examples of JSON maps and arrays. Drill provides functions for handling array and map types:</p>
 
-<p>For more information about using array and map types, see the sections, <a href="/docs/kvgen/">&quot;KVGEN&quot;</a> and <a href="/docs/flatten/">&quot;FLATTEN&quot;</a>.</p>
+<ul>
+<li><a href="/docs/kvgen/">&quot;KVGEN&quot;</a></li>
+<li><a href="/docs/flatten/">&quot;FLATTEN&quot;</a></li>
+</ul>
 
 <h2 id="casting-and-converting-data-types">Casting and Converting Data Types</h2>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/tutorials-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/tutorials-introduction/index.html b/docs/tutorials-introduction/index.html
index b67617f..f322051 100644
--- a/docs/tutorials-introduction/index.html
+++ b/docs/tutorials-introduction/index.html
@@ -988,7 +988,7 @@ Access Hive tables in Tableau.<br></li>
 <li><a href="/docs/using-microstrategy-analytics-with-apache-drill/">Using MicroStrategy Analytics with Apache Drill</a><br>
 Use the Drill ODBC driver from MapR to analyze data and generate a report using Drill from the MicroStrategy UI.<br></li>
 <li><a href="/docs/using-tibco-spotfire-with-drill/">Using Tibco Spotfire Server with Drill</a><br>
-Use the Apache Drill to query complex data structures from Tibco Spotfire Desktop.</li>
+Use Apache Drill to query complex data structures from Tibco Spotfire Desktop.</li>
 <li><a href="/docs/configuring-tibco-spotfire-server-with-drill">Configuring Tibco Spotfire Server with Drill</a><br>
 Integrate Tibco Spotfire Server with Apache Drill and explore multiple data formats on Hadoop.<br></li>
 <li><a href="/docs/using-apache-drill-with-tableau-9-desktop">Using Apache Drill with Tableau 9 Desktop</a><br>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/useful-research/index.html
----------------------------------------------------------------------
diff --git a/docs/useful-research/index.html b/docs/useful-research/index.html
index f6f388c..06b9cd2 100644
--- a/docs/useful-research/index.html
+++ b/docs/useful-research/index.html
@@ -1021,7 +1021,7 @@
 <li><a href="https://github.com/rgrzywinski/field-stripe/">https://github.com/rgrzywinski/field-stripe/</a></li>
 </ul>
 
-<h2 id="code-generation-/-physical-plan-generation">Code generation / Physical plan generation</h2>
+Code generation / Physical plan generation</h2>
 
 <ul>
 <li><a href="http://www.vldb.org/pvldb/vol4/p539-neumann.pdf">http://www.vldb.org/pvldb/vol4/p539-neumann.pdf</a> (SLIDES: <a href="http://www.vldb.org/2011/files/slides/research9/rSession9-3.pdf">http://www.vldb.org/2011/files/slides/research9/rSession9-3.pdf</a>)</li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/docs/workspaces/index.html
----------------------------------------------------------------------
diff --git a/docs/workspaces/index.html b/docs/workspaces/index.html
index be553f6..7126ce4 100644
--- a/docs/workspaces/index.html
+++ b/docs/workspaces/index.html
@@ -1028,7 +1028,7 @@ files and tables in the <code>file</code> or <code>hive default</code> workspace
 workspace name from the query.</p>
 
 <p>For example, you can issue a query on a Hive table in the <code>default workspace</code>
-using either of the following formats and get the the same results:</p>
+using either of the following formats and get the same results:</p>
 
 <p><strong>Example</strong></p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM hive.customers LIMIT 10;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/feed.xml
----------------------------------------------------------------------
diff --git a/feed.xml b/feed.xml
index d21ae4f..3f4030d 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,9 +6,9 @@
 </description>
     <link>/</link>
     <atom:link href="/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Wed, 20 May 2015 22:57:53 -0700</pubDate>
-    <lastBuildDate>Wed, 20 May 2015 22:57:53 -0700</lastBuildDate>
-    <generator>Jekyll v2.5.3</generator>
+    <pubDate>Thu, 21 May 2015 16:35:17 -0700</pubDate>
+    <lastBuildDate>Thu, 21 May 2015 16:35:17 -0700</lastBuildDate>
+    <generator>Jekyll v2.5.2</generator>
     
       <item>
         <title>The Apache Software Foundation Announces Apache Drill 1.0</title>
@@ -444,9 +444,8 @@ Tomer Shiran&lt;/p&gt;
     
       <item>
         <title>Apache Drill Q&amp;A Panelist Spotlight</title>
-        <description>&lt;script type=&quot;text/javascript&quot; src=&quot;//addthisevent.com/libs/1.5.8/ate.min.js&quot;&gt;&lt;/script&gt;
-
-&lt;p&gt;&lt;a href=&quot;/blog/2014/12/11/apache-drill-qa-panelist-spotlight/&quot; title=&quot;Add to Calendar&quot; class=&quot;addthisevent&quot;&gt;
+        <description>&lt;p&gt;&lt;script type=&quot;text/javascript&quot; src=&quot;//addthisevent.com/libs/1.5.8/ate.min.js&quot;&gt;&lt;/script&gt;
+&lt;a href=&quot;/blog/2014/12/11/apache-drill-qa-panelist-spotlight/&quot; title=&quot;Add to Calendar&quot; class=&quot;addthisevent&quot;&gt;
     Add to Calendar
     &lt;span class=&quot;_start&quot;&gt;12-17-2014 11:30:00&lt;/span&gt;
     &lt;span class=&quot;_end&quot;&gt;12-17-2014 12:30:00&lt;/span&gt;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/7783cd53/js/script.js
----------------------------------------------------------------------
diff --git a/js/script.js b/js/script.js
index 1cfc511..0f213c4 100644
--- a/js/script.js
+++ b/js/script.js
@@ -1,97 +1,97 @@
-var reelPointer = null;
-$(document).ready(function(e) {
-	
-  $(".aLeft").click(function() {
-		moveReel("prev");
-	});
-	$(".aRight").click(function() {
-		moveReel("next");
-	});
-	
-	if ($("#header .scroller .item").length == 1) {
-		
-	} else {
-		
-		$("#header .dots, .aLeft, .aRight").css({ display: 'block' });
-		$("#header .scroller .item").each(function(i) {
-			$("#header .dots").append("<div class='dot'></div>");
-			$("#header .dots .dot").eq(i).click(function() {
-				var index = $(this).prevAll(".dot").length;
-				moveReel(index);
-			});
-		});
-		
-		reelPointer = setTimeout(function() { moveReel(1); },5000);
-	}
-	
-	$("#menu ul li").each(function(index, element) {
-        if ($(this).find("ul").length) {
-			$(this).addClass("parent");	
-		}
-    });
-
-	$("#header .dots .dot:eq(0)").addClass("sel");
-	
-	resized();
-	
-	$(window).scroll(onScroll);
-});
-
-var reel_currentIndex = 0;
-function resized() {
-	
-	var WW = parseInt($(window).width(),10);
-	var IW = (WW < 999) ? 999 : WW;
-	var IH = parseInt($("#header .scroller .item").css("height"),10);
-	var IN = $("#header .scroller .item").length;
-	
-	$("#header .scroller").css({ width: (IN * IW)+"px", marginLeft: -(reel_currentIndex * IW)+"px" });
-	$("#header .scroller .item").css({ width: IW+"px" });
-	
-	
-	$("#header .scroller .item").each(function(i) {
-		var th = parseInt($(this).find(".tc").height(),10);
-		var d = IH - th + 25;
-		$(this).find(".tc").css({ top: Math.round(d/2)+"px" });
-	});
-	
-	if (WW < 999) $("#menu").addClass("r");
-	else $("#menu").removeClass("r");
-	
-	onScroll();
-		
-}
-
-function moveReel(direction) {
-	
-	if (reelPointer) clearTimeout(reelPointer);
-	
-	var IN = $("#header .scroller .item").length;
-	var IW = $("#header .scroller .item").width();
-	if (direction == "next") reel_currentIndex++;
-	else if (direction == "prev") reel_currentIndex--;
-	else reel_currentIndex = direction;
-	
-	if (reel_currentIndex >= IN) reel_currentIndex = 0;
-	if (reel_currentIndex < 0) reel_currentIndex = IN-1;
-	
-	$("#header .dots .dot").removeClass("sel");
-	$("#header .dots .dot").eq(reel_currentIndex).addClass("sel");
-		
-	$("#header .scroller").stop(false,true,false).animate({ marginLeft: -(reel_currentIndex * IW)+"px" }, 1000, "easeOutQuart");
-	
-	reelPointer = setTimeout(function() { moveReel(1); },5000);
-	
-}
-
-function onScroll() {
-	var ST = document.body.scrollTop || document.documentElement.scrollTop;
-	//if ($("#menu.r").length) {
-	//	$("#menu.r").css({ top: ST+"px" });	
-	//} else {
-	//	$("#menu").css({ top: "0px" });
-	//}
-	
-	if (ST > 400) $("#subhead").addClass("show");	
-	else $("#subhead").removeClass("show");	
-}
+var reelPointer = null;
+$(document).ready(function(e) {
+	
+  $(".aLeft").click(function() {
+		moveReel("prev");
+	});
+	$(".aRight").click(function() {
+		moveReel("next");
+	});
+	
+	if ($("#header .scroller .item").length == 1) {
+		
+	} else {
+		
+		$("#header .dots, .aLeft, .aRight").css({ display: 'block' });
+		$("#header .scroller .item").each(function(i) {
+			$("#header .dots").append("<div class='dot'></div>");
+			$("#header .dots .dot").eq(i).click(function() {
+				var index = $(this).prevAll(".dot").length;
+				moveReel(index);
+			});
+		});
+		
+		reelPointer = setTimeout(function() { moveReel(1); },5000);
+	}
+	
+	$("#menu ul li").each(function(index, element) {
+        if ($(this).find("ul").length) {
+			$(this).addClass("parent");	
+		}
+    });
+
+	$("#header .dots .dot:eq(0)").addClass("sel");
+	
+	resized();
+	
+	$(window).scroll(onScroll);
+});
+
+var reel_currentIndex = 0;
+function resized() {
+	
+	var WW = parseInt($(window).width(),10);
+	var IW = (WW < 999) ? 999 : WW;
+	var IH = parseInt($("#header .scroller .item").css("height"),10);
+	var IN = $("#header .scroller .item").length;
+	
+	$("#header .scroller").css({ width: (IN * IW)+"px", marginLeft: -(reel_currentIndex * IW)+"px" });
+	$("#header .scroller .item").css({ width: IW+"px" });
+	
+	
+	$("#header .scroller .item").each(function(i) {
+		var th = parseInt($(this).find(".tc").height(),10);
+		var d = IH - th + 25;
+		$(this).find(".tc").css({ top: Math.round(d/2)+"px" });
+	});
+	
+	if (WW < 999) $("#menu").addClass("r");
+	else $("#menu").removeClass("r");
+	
+	onScroll();
+		
+}
+
+function moveReel(direction) {
+	
+	if (reelPointer) clearTimeout(reelPointer);
+	
+	var IN = $("#header .scroller .item").length;
+	var IW = $("#header .scroller .item").width();
+	if (direction == "next") reel_currentIndex++;
+	else if (direction == "prev") reel_currentIndex--;
+	else reel_currentIndex = direction;
+	
+	if (reel_currentIndex >= IN) reel_currentIndex = 0;
+	if (reel_currentIndex < 0) reel_currentIndex = IN-1;
+	
+	$("#header .dots .dot").removeClass("sel");
+	$("#header .dots .dot").eq(reel_currentIndex).addClass("sel");
+		
+	$("#header .scroller").stop(false,true,false).animate({ marginLeft: -(reel_currentIndex * IW)+"px" }, 1000, "easeOutQuart");
+	
+	reelPointer = setTimeout(function() { moveReel(1); },5000);
+	
+}
+
+function onScroll() {
+	var ST = document.body.scrollTop || document.documentElement.scrollTop;
+	//if ($("#menu.r").length) {
+	//	$("#menu.r").css({ top: ST+"px" });	
+	//} else {
+	//	$("#menu").css({ top: "0px" });
+	//}
+	
+	if (ST > 400) $("#subhead").addClass("show");	
+	else $("#subhead").removeClass("show");	
+}