Posted to commits@drill.apache.org by br...@apache.org on 2015/07/17 03:22:59 UTC

[2/2] drill-site git commit: Drill edits for 1.1

Drill edits for 1.1


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/d0ba86f3
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/d0ba86f3
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/d0ba86f3

Branch: refs/heads/asf-site
Commit: d0ba86f3b882ceea6834cd9c3c85d373bd3fa202
Parents: 94a79d4
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu Jul 16 18:22:38 2015 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu Jul 16 18:22:38 2015 -0700

----------------------------------------------------------------------
 .../index.html                                  | 228 ++++++++++-----
 .../index.html                                  |   9 +-
 .../index.html                                  | 277 ++++++++++---------
 .../index.html                                  |  24 +-
 .../index.html                                  |  36 ++-
 docs/drill-default-input-format/index.html      |   6 +-
 docs/drill-introduction/index.html              |   4 +-
 docs/file-system-storage-plugin/index.html      |  48 +---
 docs/hbase-storage-plugin/index.html            |  43 +--
 docs/hive-storage-plugin/index.html             |  16 +-
 docs/json-data-model/index.html                 |   9 +-
 docs/mongodb-plugin-for-apache-drill/index.html |   4 +-
 docs/partition-by-clause/index.html             |   4 +-
 docs/partition-pruning/index.html               |  16 +-
 docs/plugin-configuration-basics/index.html     |  72 ++---
 docs/storage-plugin-registration/index.html     |  14 +-
 docs/tableau-examples/index.html                |   7 +-
 docs/text-files-csv-tsv-psv/index.html          | 119 ++++----
 docs/workspaces/index.html                      |  21 +-
 feed.xml                                        |   4 +-
 20 files changed, 478 insertions(+), 483 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/aggregate-and-aggregate-statistical/index.html
----------------------------------------------------------------------
diff --git a/docs/aggregate-and-aggregate-statistical/index.html b/docs/aggregate-and-aggregate-statistical/index.html
index a001354..248f1c8 100644
--- a/docs/aggregate-and-aggregate-statistical/index.html
+++ b/docs/aggregate-and-aggregate-statistical/index.html
@@ -1051,33 +1051,68 @@ Drill queries:</p>
 
 <p>AVG, COUNT, MIN, MAX, and SUM accept ALL and DISTINCT keywords. The default is ALL.</p>
 
+<p>These examples of aggregate functions use the <code>cp</code> storage plugin to access a JSON file installed with Drill. By default, JSON reads numbers as double-precision floating point numbers. These examples assume that the <a href="/docs/json-data-model/#handling-type-differences">all_text_mode</a> option is set to false, the default.</p>
+
 <h2 id="avg">AVG</h2>
 
 <p>Averages a column of all records in a data source. Averages a column of one or more groups of records. Which records to include in the calculation can be based on a condition.</p>
 
-<h3 id="syntax">Syntax</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT AVG(aggregate_expression)
+<h3 id="avg-syntax">AVG Syntax</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT AVG([ALL | DISTINCT] aggregate_expression)
 FROM tables
 WHERE conditions;
 
 SELECT expression1, expression2, ... expression_n,
-       AVG(aggregate_expression)
+       AVG([ALL | DISTINCT] aggregate_expression)
 FROM tables
 WHERE conditions
 GROUP BY expression1, expression2, ... expression_n;
 </code></pre></div>
-<p>Expressions listed within the AVG function and must be included in the GROUP BY clause.</p>
+<p>Expressions in the SELECT list that are not within the AVG function must be included in the GROUP BY clause.</p>
 
-<h3 id="examples">Examples</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT AVG(salary) FROM cp.`employee.json`;
+<h3 id="avg-examples">AVG Examples</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `store.json.all_text_mode` = false;
++-------+------------------------------------+
+|  ok   |              summary               |
++-------+------------------------------------+
+| true  | store.json.all_text_mode updated.  |
++-------+------------------------------------+
+1 row selected (0.073 seconds)
+</code></pre></div>
+<p>Take a look at the salaries of employees having IDs 1139, 1140, and 1141. These are the salaries that subsequent examples will average and sum.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM cp.`employee.json` WHERE employee_id IN (1139, 1140, 1141);
++--------------+------------------+-------------+------------+--------------+--------------------------+-----------+----------------+-------------+------------------------+-------------+----------------+------------------+-----------------+---------+-----------------------+
+| employee_id  |    full_name     | first_name  | last_name  | position_id  |      position_title      | store_id  | department_id  | birth_date  |       hire_date        |   salary    | supervisor_id  | education_level  | marital_status  | gender  |    management_role    |
++--------------+------------------+-------------+------------+--------------+--------------------------+-----------+----------------+-------------+------------------------+-------------+----------------+------------------+-----------------+---------+-----------------------+
+| 1139         | Jeanette Belsey  | Jeanette    | Belsey     | 12           | Store Assistant Manager  | 18        | 11             | 1972-05-12  | 1998-01-01 00:00:00.0  | 10000.0000  | 17             | Graduate Degree  | S               | M       | Store Management      |
+| 1140         | Mona Jaramillo   | Mona        | Jaramillo  | 13           | Store Shift Supervisor   | 18        | 11             | 1961-09-24  | 1998-01-01 00:00:00.0  | 8900.0000   | 1139           | Partial College  | S               | M       | Store Management      |
+| 1141         | James Compagno   | James       | Compagno   | 15           | Store Permanent Checker  | 18        | 15             | 1914-02-02  | 1998-01-01 00:00:00.0  | 6400.0000   | 1139           | Graduate Degree  | S               | M       | Store Full Time Staf  |
++--------------+------------------+-------------+------------+--------------+--------------------------+-----------+----------------+-------------+------------------------+-------------+----------------+------------------+-----------------+---------+-----------------------+
+3 rows selected (0.284 seconds)
+</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">SELECT AVG(salary) FROM cp.`employee.json` WHERE employee_id IN (1139, 1140, 1141);
++--------------------+
+|       EXPR$0       |
++--------------------+
+| 8433.333333333334  |
++--------------------+
+1 row selected (0.208 seconds)
+
+SELECT AVG(ALL salary) FROM cp.`employee.json` WHERE employee_id IN (1139, 1140, 1141);
++--------------------+
+|       EXPR$0       |
++--------------------+
+| 8433.333333333334  |
++--------------------+
+1 row selected (0.17 seconds)
+
+SELECT AVG(DISTINCT salary) FROM cp.`employee.json`;
 +---------------------+
 |       EXPR$0        |
 +---------------------+
-| 4019.6017316017314  |
+| 12773.333333333334  |
 +---------------------+
-1 row selected (0.221 seconds)
-
-SELECT education_level, AVG(salary) FROM cp.`employee.json` GROUP BY education_level;
+1 row selected (0.384 seconds)
+</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">SELECT education_level, AVG(salary) FROM cp.`employee.json` GROUP BY education_level;
 +----------------------+---------------------+
 |   education_level    |       EXPR$1        |
 +----------------------+---------------------+
@@ -1089,83 +1124,126 @@ SELECT education_level, AVG(salary) FROM cp.`employee.json` GROUP BY education_l
 +----------------------+---------------------+
 5 rows selected (0.495 seconds)
 </code></pre></div>
-<h2 id="count,-min,-max,-and-sum">COUNT, MIN, MAX, and SUM</h2>
+<h2 id="count">COUNT</h2>
 
-<h3 id="examples">Examples</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT a2 FROM t2;
-+------------+
-|     a2     |
-+------------+
-| 0          |
-| 1          |
-| 2          |
-| 2          |
-| 2          |
-| 3          |
-| 4          |
-| 5          |
-| 6          |
-| 7          |
-| 7          |
-| 8          |
-| 9          |
-+------------+
-13 rows selected (0.056 seconds)
+<p>Returns the number of rows that match the given criteria.</p>
 
-SELECT AVG(ALL a2) FROM t2;
-+--------------------+
-|        EXPR$0      |
-+--------------------+
-| 4.3076923076923075 |
-+--------------------+
-1 row selected (0.084 seconds)
+<h3 id="count-syntax">COUNT Syntax</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT COUNT([DISTINCT | ALL] column) FROM . . .
+SELECT COUNT(*) FROM . . .
+</code></pre></div>
+<ul>
+<li>column<br>
+Returns the number of non-null values in the specified column.<br></li>
+<li>DISTINCT column<br>
+Returns the number of distinct non-null values in the column.<br></li>
+<li>ALL column<br>
+Returns the number of non-null values in the specified column. ALL is the default.<br></li>
+<li>* (asterisk)
+Returns the number of records in the table.</li>
+</ul>
 
-SELECT AVG(DISTINCT a2) FROM t2;
-+------------+
-|   EXPR$0   |
-+------------+
-| 4.5        |
-+------------+
-1 row selected (0.079 seconds)
+<h3 id="count-examples">COUNT Examples</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT COUNT(DISTINCT salary) FROM cp.`employee.json`;
++---------+
+| EXPR$0  |
++---------+
+| 48      |
++---------+
+1 row selected (0.159 seconds)
 
-SELECT SUM(ALL a2) FROM t2;
-+------------+
-|   EXPR$0   |
-+------------+
-| 56         |
-+------------+
-1 row selected (0.086 seconds)
+SELECT COUNT(ALL salary) FROM cp.`employee.json`;
++---------+
+| EXPR$0  |
++---------+
+| 1155    |
++---------+
+1 row selected (0.106 seconds)
 
-SELECT SUM(DISTINCT a2) FROM t2;
-+------------+
-|   EXPR$0   |
-+------------+
-| 45         |
-+------------+
-1 row selected (0.078 seconds)
+SELECT COUNT(salary) FROM cp.`employee.json`;
++---------+
+| EXPR$0  |
++---------+
+| 1155    |
++---------+
+1 row selected (0.102 seconds)
 
-+------------+
-|   EXPR$0   |
-+------------+
-| 13         |
-+------------+
-1 row selected (0.056 seconds)
+SELECT COUNT(*) FROM cp.`employee.json`;
++---------+
+| EXPR$0  |
++---------+
+| 1155    |
++---------+
+1 row selected (0.174 seconds)
+</code></pre></div>
+<h2 id="min-and-max-functions">MIN and MAX Functions</h2>
 
-SELECT COUNT(ALL a2) FROM t2;
-+------------+
-|   EXPR$0   |
-+------------+
-| 13         |
-+------------+
-1 row selected (0.056 seconds)
+<p>These functions return the smallest and largest values of the selected columns, respectively.</p>
+
+<h3 id="min-and-max-syntax">MIN and MAX Syntax</h3>
+
+<p>MIN(column)<br>
+MAX(column)</p>
+
+<h3 id="min-and-max-examples">MIN and MAX Examples</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT MIN(salary) FROM cp.`employee.json`;
++---------+
+| EXPR$0  |
++---------+
+| 20.0    |
++---------+
+1 row selected (0.138 seconds)
+
+SELECT MAX(salary) FROM cp.`employee.json`;
++----------+
+|  EXPR$0  |
++----------+
+| 80000.0  |
++----------+
+1 row selected (0.139 seconds)
+</code></pre></div>
+<p>Use a correlated subquery to find the names and salaries of the lowest paid employees:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT full_name, SALARY FROM cp.`employee.json` WHERE salary = (SELECT MIN(salary) FROM cp.`employee.json`);
++------------------------+---------+
+|       full_name        | SALARY  |
++------------------------+---------+
+| Leopoldo Renfro        | 20.0    |
+| Donna Brockett         | 20.0    |
+| Laurie Anderson        | 20.0    |
+. . .
+</code></pre></div>
+<h2 id="sum-function">SUM Function</h2>
 
-SELECT COUNT(DISTINCT a2) FROM t2;
+<p>Returns the total of a numeric column.</p>
+
+<h3 id="sum-syntax">SUM syntax</h3>
+
+<p>SUM(column)</p>
+
+<h3 id="examples">Examples</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT SUM(ALL salary) FROM cp.`employee.json`;
 +------------+
 |   EXPR$0   |
 +------------+
-| 10         |
+| 4642640.0  |
 +------------+
-1 row selected (0.074 seconds)
+1 row selected (0.123 seconds)
+
+SELECT SUM(DISTINCT salary) FROM cp.`employee.json`;
++-----------+
+|  EXPR$0   |
++-----------+
+| 613120.0  |
++-----------+
+1 row selected (0.309 seconds)
+
+SELECT SUM(salary) FROM cp.`employee.json` WHERE employee_id IN (1139, 1140, 1141);
++----------+
+|  EXPR$0  |
++----------+
+| 25300.0  |
++----------+
+1 row selected (1.995 seconds)
 </code></pre></div>
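
For reference, SUM can be grouped in the same way as the AVG example earlier in this section; this is only an illustrative query sketch, so its output is omitted:

    SELECT education_level, SUM(salary)
    FROM cp.`employee.json`
    GROUP BY education_level;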
 <h2 id="aggregate-statistical-functions">Aggregate Statistical Functions</h2>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/configuration-options-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/configuration-options-introduction/index.html b/docs/configuration-options-introduction/index.html
index f481d9e..d59b4d2 100644
--- a/docs/configuration-options-introduction/index.html
+++ b/docs/configuration-options-introduction/index.html
@@ -1056,12 +1056,12 @@ Drill sources the local <code>&lt;drill_installation_directory&gt;/conf</code> d
 <tr>
 <td>exec.max_hash_table_size</td>
 <td>1073741824</td>
-<td>Ending size for hash tables. Range: 0 - 1073741824.</td>
+<td>Ending size in buckets for hash tables. Range: 0 - 1073741824.</td>
 </tr>
 <tr>
 <td>exec.min_hash_table_size</td>
 <td>65536</td>
-<td>Starting size for hash tables. Increase according to available memory to improve performance. Increasing for very large aggregations or joins when you have large amounts of memory for Drill to use. Range: 0 - 1073741824.</td>
+<td>Starting size in buckets for hash tables. Increase according to available memory to improve performance. Increase this value for very large aggregations or joins when you have large amounts of memory for Drill to use. Range: 0 - 1073741824.</td>
 </tr>
 <tr>
 <td>exec.queue.enable</td>
@@ -1339,6 +1339,11 @@ Drill sources the local <code>&lt;drill_installation_directory&gt;/conf</code> d
 <td>Not supported in this release.</td>
 </tr>
 <tr>
+<td>store.partition.hash_distribute</td>
+<td>FALSE</td>
+<td>Uses a hash algorithm to distribute data on partition keys in a CTAS partitioning operation. This is an alpha option intended for experimental use at this stage; do not use it in production systems.</td>
+</tr>
+<tr>
 <td>store.text.estimated_row_size_bytes</td>
 <td>100</td>
 <td>Estimate of the row size in a delimited text file, such as csv. The closer to actual, the better the query plan. Used for all csv files in the system/session where the value is set. Impacts the decision to plan a broadcast join or not.</td>
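
For reference, options such as these are set with ALTER SYSTEM or ALTER SESSION; the values below are arbitrary illustrations, not recommended settings:

    ALTER SYSTEM SET `exec.min_hash_table_size` = 1048576;
    ALTER SESSION SET `store.partition.hash_distribute` = true;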

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/configuring-user-impersonation-with-hive-authorization/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation-with-hive-authorization/index.html b/docs/configuring-user-impersonation-with-hive-authorization/index.html
index a87c7b2..b4f9f44 100644
--- a/docs/configuring-user-impersonation-with-hive-authorization/index.html
+++ b/docs/configuring-user-impersonation-with-hive-authorization/index.html
@@ -1003,27 +1003,27 @@
       
         <p>As of Drill 1.1, you can enable impersonation in Drill and configure authorization in Hive version 1.0 to authorize access to metadata in the Hive metastore repository and data in the Hive warehouse. Impersonation allows a service to act on behalf of a client while performing the action requested by the client. See <a href="/docs/configuring-user-impersonation">Configuring User Impersonation</a>.</p>
 
-<p>There are two types of Hive authorizations that you can configure to work with impersonation in Drill: SQL standard based or storage based authorization.</p>
+<p>There are two types of Hive authorizations that you can configure to work with impersonation in Drill: SQL standard based and storage based authorization.  </p>
 
-<h2 id="storage-based-authorization">Storage Based Authorization</h2>
+<h2 id="sql-standard-based-authorization">SQL Standard Based Authorization</h2>
 
-<p>You can configure Hive storage-based authorization in Hive version 1.0 to work with impersonation in Drill 1.1. Hive storage-based authorization is a remote metastore server security feature that uses the underlying file system permissions to determine permissions on databases, tables, and partitions. The unit style read/write permissions or ACLs a user or group has on directories in the file system determine access to data. Because the file system controls access at the directory and file level, storage-based authorization cannot control access to data at the column or view level.</p>
+<p>You can configure Hive SQL standard based authorization in Hive version 1.0 to work with impersonation in Drill 1.1. The SQL standard based authorization model can control which users have access to columns, rows, and views. Users with the appropriate permissions can issue the GRANT and REVOKE statements to manage privileges from Hive.</p>
 
-<p>You manage user and group privileges through permissions and ACLs in the distributed file system. You manage authorizations through the remote metastore server.</p>
+<p>For more information, see <a href="https://cwiki.apache.org/confluence/display/HELIX/SQL+Standard+Based+Hive+Authorization">SQL Standard Based Hive Authorization</a>.  </p>
 
-<p>DDL statements that manage permissions, such as GRANT and REVOKE, do not have any effect on permissions in the storage based authorization model.</p>
+<h2 id="storage-based-authorization">Storage Based Authorization</h2>
 
-<p>For more information, see <a href="https://cwiki.apache.org/confluence/display/Hive/Storage+Based+Authorization+in+the+Metastore+Server">Storage Based Authorization in the Metastore Server</a>.  </p>
+<p>You can configure Hive storage based authorization in Hive version 1.0 to work with impersonation in Drill 1.1. Hive storage based authorization is a remote metastore server security feature that uses the underlying file system permissions to determine permissions on databases, tables, and partitions. The Unix style read/write permissions or ACLs that a user or group has on directories in the file system determine access to data. Because the file system controls access at the directory and file level, storage based authorization cannot control access to data at the column or view level.</p>
 
-<h2 id="sql-standard-based-authorization">SQL Standard Based Authorization</h2>
+<p>You manage user and group privileges through permissions and ACLs in the distributed file system. You manage storage based authorization through the remote metastore server to authorize access to data and metadata.</p>
 
-<p>You can configure Hive SQL standard based authorization in Hive version 1.0 to work with impersonation in Drill 1.1. The SQL standard based authorization model can control which users have access to columns, rows, and views. Users with the appropriate permissions can issue the GRANT and REVOKE statements to manage privileges from Hive.</p>
+<p>DDL statements that manage permissions, such as GRANT and REVOKE, do not affect permissions in the storage based authorization model.</p>
 
-<p>For more information, see <a href="https://cwiki.apache.org/confluence/display/HELIX/SQL+Standard+Based+Hive+Authorization">SQL Standard Based Hive Authorization</a>.  </p>
+<p>For more information, see <a href="https://cwiki.apache.org/confluence/display/Hive/Storage+Based+Authorization+in+the+Metastore+Server">Storage Based Authorization in the Metastore Server</a>.  </p>
 
 <h2 id="configuration">Configuration</h2>
 
-<p>Once you determine the Hive authorization model that you want to implement, enable impersonation in Drill. Update hive-site.xml with the relevant parameters for the authorization type. Modify the Hive storage plugin instance in Drill with the relevant settings for the authorization type.  </p>
+<p>Once you determine the Hive authorization model that you want to implement, enable impersonation in Drill, update the <code>hive-site.xml</code> file with the relevant parameters for the authorization type, and modify the Hive storage plugin configuration in Drill with the relevant properties for the authorization type.  </p>
 
 <h3 id="prerequisites">Prerequisites</h3>
 
@@ -1035,38 +1035,21 @@
 
 <h2 id="step-1:-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
 
-<p>Complete the following steps on each Drillbit node to enable user impersonation, and set the <a href="/docs/configuring-user-impersonation/#chained-impersonation">maximum number of chained user hops</a> that Drill allows:  </p>
+<p>Modify <code>&lt;DRILL_HOME&gt;/conf/drill-override.conf</code> on each Drill node to include the required properties, set the <a href="/docs/configuring-user-impersonation/#chained-impersonation">maximum number of chained user hops</a>, and restart the Drillbit process.</p>
 
 <ol>
-<li>Navigate to <code>&lt;drill_installation_directory&gt;/conf/</code> and edit <code>drill-override.conf</code>.</li>
-<li><p>Under <code>drill.exe</code>, add the following:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">  drill.exec.impersonation: {
-        enabled: true,
+<li><p>Add the following properties to the <code>drill.exec</code> block in <code>drill-override.conf</code>:  </p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">  drill.exec: {
+   cluster-id: &quot;&lt;drill_cluster_name&gt;&quot;,
+   zk.connect: &quot;&lt;hostname&gt;:&lt;port&gt;,&lt;hostname&gt;:&lt;port&gt;,&lt;hostname&gt;:&lt;port&gt;&quot;
+   impersonation: {
+         enabled: true,
          max_chained_user_hops: 3
-  }
+    }
+   }  
 </code></pre></div></li>
-<li><p>Verify that enabled is set to <code>&quot;true&quot;</code>.</p></li>
-<li><p>Set the maximum number of chained user hops that you want Drill to allow.</p></li>
-<li><p>(MapR clusters only) Add the following lines to the <code>drill-env.sh</code> file:</p>
-
-<ul>
-<li>If the underlying file system is not secure, add the following line:
-<code>export MAPR_IMPERSONATION_ENABLED=true</code></li>
-<li>If the underlying file system has MapR security enabled, add the following line:
-<code>export MAPR_TICKETFILE_LOCATION=/opt/mapr/conf/mapruserticket</code><br></li>
-<li>If you are implementing Hive SQL standard based authorization, and you are running Drill     and Hive in a secure MapR cluster, add the following lines:<br>
-<code>export DRILL_JAVA_OPTS=&quot;$DRILL_JAVA_OPTS -Dmapr_sec_enabled=true -Dhadoop.login=maprsasl -Dzookeeper.saslprovider=com.mapr.security.maprsasl.MaprSaslProvider -Dmapr.library.flatclass&quot;</code><br>
-<code>export MAPR_IMPERSONATION_ENABLED=true</code><br>
-<code>export MAPR_TICKETFILE_LOCATION=/opt/mapr/conf/mapruserticket</code></li>
-</ul></li>
-<li><p>Restart the Drillbit process on each Drill node.</p>
-
-<ul>
-<li>In a MapR cluster, run the following command:
-<code>maprcli node services -name drill-bits -action restart -nodes &lt;hostname&gt; -f</code></li>
-<li>In a non-MapR environment, run the following command:<br>
-<code>&lt;DRILLINSTALL_HOME&gt;/bin/drillbit.sh restart</code><br></li>
-</ul></li>
+<li><p>Issue the following command to restart the Drillbit process on each Drill node:<br>
+<code>&lt;DRILLINSTALL_HOME&gt;/bin/drillbit.sh restart</code>  </p></li>
 </ol>
 
 <h2 id="step-2:-updating-hive-site.xml">Step 2:  Updating hive-site.xml</h2>
@@ -1078,7 +1061,7 @@
 <p>Add the following required authorization parameters in hive-site.xml to configure storage based authorization:  </p>
 
 <p><strong>hive.metastore.pre.event.listeners</strong><br>
-<strong>Description:</strong> Turns on metastore-side security.<br>
+<strong>Description:</strong> Enables metastore security.<br>
 <strong>Value:</strong> org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener  </p>
 
 <p><strong>hive.security.metastore.authorization.manager</strong><br>
@@ -1090,54 +1073,71 @@
 <strong>Value:</strong> org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator  </p>
 
 <p><strong>hive.security.metastore.authorization.auth.reads</strong><br>
-<strong>Description:</strong> Tells Hive metastore authorization checks for read access.<br>
+<strong>Description:</strong> When enabled, Hive metastore authorization checks for read access.<br>
 <strong>Value:</strong> true  </p>
 
 <p><strong>hive.metastore.execute.setugi</strong><br>
-<strong>Description:</strong> Causes the metastore to execute file system operations using the client&#39;s reported user and group permissions. You must set this property on both the client and server sides. If client sets it to true and server sets it to false, the client setting is ignored.<br>
+<strong>Description:</strong> When enabled, this property causes the metastore to execute DFS operations using the client&#39;s reported user and group permissions. This property must be set on both the client and server sides. If the client and server settings differ, the client setting is ignored.<br>
 <strong>Value:</strong> true </p>
 
 <p><strong>hive.server2.enable.doAs</strong><br>
-<strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user making the calls.<br>
-<strong>Value:</strong> true </p>
-
-<h3 id="example-hive-site.xml-settings-for-storage-based-authorization">Example hive-site.xml Settings for Storage Based Authorization</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;property&gt;
-      &lt;name&gt;hive.metastore.pre.event.listeners&lt;/name&gt;
-      &lt;value&gt;org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.security.metastore.authenticator.manager&lt;/name&gt;
-      &lt;value&gt;org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.security.metastore.authorization.manager&lt;/name&gt;
-      &lt;value&gt;org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.security.metastore.authorization.auth.reads&lt;/name&gt;
-      &lt;value&gt;true&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.metastore.execute.setugi&lt;/name&gt;
-      &lt;value&gt;true&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.server2.enable.doAs&lt;/name&gt;
-      &lt;value&gt;true&lt;/value&gt;
-    &lt;/property&gt;  
+<strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user submitting the query. Must be set to true for the storage based model.<br>
+<strong>Value:</strong> true</p>
+
+<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
+     &lt;property&gt;
+       &lt;name&gt;hive.metastore.uris&lt;/name&gt;
+       &lt;value&gt;thrift://10.10.100.120:9083&lt;/value&gt;    
+     &lt;/property&gt;  
+
+     &lt;property&gt;
+       &lt;name&gt;javax.jdo.option.ConnectionURL&lt;/name&gt;
+       &lt;value&gt;jdbc:derby:;databaseName=/opt/hive/hive-1.0/bin/metastore_db;create=true&lt;/value&gt;    
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;javax.jdo.option.ConnectionDriverName&lt;/name&gt;
+       &lt;value&gt;org.apache.derby.jdbc.EmbeddedDriver&lt;/value&gt;    
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.metastore.pre.event.listeners&lt;/name&gt;
+       &lt;value&gt;org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.security.metastore.authenticator.manager&lt;/name&gt;
+       &lt;value&gt;org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.security.metastore.authorization.manager&lt;/name&gt;
+       &lt;value&gt;org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.security.metastore.authorization.auth.reads&lt;/name&gt;
+       &lt;value&gt;true&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.metastore.execute.setugi&lt;/name&gt;
+       &lt;value&gt;true&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.server2.enable.doAs&lt;/name&gt;
+       &lt;value&gt;true&lt;/value&gt;
+     &lt;/property&gt;
+   &lt;/configuration&gt;
 </code></pre></div>
 <h2 id="sql-standard-based-authorization">SQL Standard Based Authorization</h2>
 
 <p>Add the following required authorization parameters in hive-site.xml to configure SQL standard based authorization:  </p>
 
 <p><strong>hive.security.authorization.enabled</strong><br>
-<strong>Description:</strong> Enables/disables Hive security authorization.<br>
+<strong>Description:</strong> Enables Hive security authorization.<br>
 <strong>Value:</strong> true </p>
 
 <p><strong>hive.security.authenticator.manager</strong><br>
@@ -1149,67 +1149,79 @@
 <strong>Value:</strong> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory  </p>
 
 <p><strong>hive.server2.enable.doAs</strong><br>
-<strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user making the calls.<br>
-<strong>Value:</strong> false  </p>
+<strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user submitting the query. Must be set to false for the SQL standard based model.<br>
+<strong>Value:</strong> false</p>
 
 <p><strong>hive.users.in.admin.role</strong><br>
 <strong>Description:</strong> A comma separated list of users that are added to the ADMIN role when the metastore starts up. You can add more users at any time. Note that a user who belongs to the admin role needs to run the &quot;set role&quot; command before getting the privileges of the admin role, as this role is not in the current roles by default.<br>
 <strong>Value:</strong> Set to the list of comma-separated users who need to be added to the admin role. </p>
 
 <p><strong>hive.metastore.execute.setugi</strong><br>
-<strong>Description:</strong> Causes the metastore to execute file system operations using the client&#39;s reported user and group permissions. You must set this property on both the client and server side. If the client is set to true and the server is set to false, the client setting is ignored.<br>
-<strong>Value:</strong> false </p>
-
-<h3 id="example-hive-site.xml-settings-for-sql-standard-based-authorization">Example hive-site.xml Settings for SQL Standard Based Authorization</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;property&gt;
-      &lt;name&gt;hive.security.authorization.enabled&lt;/name&gt;
-      &lt;value&gt;true&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.security.authenticator.manager&lt;/name&gt;
-      &lt;value&gt;org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.security.authorization.manager&lt;/name&gt;   
-      &lt;value&gt;org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.server2.enable.doAs&lt;/name&gt;
-      &lt;value&gt;false&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.users.in.admin.role&lt;/name&gt;
-      &lt;value&gt;userA&lt;/value&gt;
-    &lt;/property&gt;
-
-    &lt;property&gt;
-      &lt;name&gt;hive.metastore.execute.setugi&lt;/name&gt;
-      &lt;value&gt;false&lt;/value&gt;
-    &lt;/property&gt;  
+<strong>Description:</strong> In unsecure mode, setting this property to true causes the metastore to execute DFS operations using the client&#39;s reported user and group permissions. Note: This property must be set on both the client and server sides. This is a best effort property. If the client is set to true and the server is set to false, the client setting is ignored.<br>
+<strong>Value:</strong> false  </p>
+
+<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
+     &lt;property&gt;
+       &lt;name&gt;hive.metastore.uris&lt;/name&gt;
+       &lt;value&gt;thrift://10.10.100.120:9083&lt;/value&gt;    
+     &lt;/property&gt; 
+
+     &lt;property&gt;
+       &lt;name&gt;javax.jdo.option.ConnectionURL&lt;/name&gt;
+       &lt;value&gt;jdbc:derby:;databaseName=/opt/hive/hive-1.0/bin/metastore_db;create=true&lt;/value&gt;    
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;javax.jdo.option.ConnectionDriverName&lt;/name&gt;
+       &lt;value&gt;org.apache.derby.jdbc.EmbeddedDriver&lt;/value&gt;    
+     &lt;/property&gt;  
+
+     &lt;property&gt;
+       &lt;name&gt;hive.security.authorization.enabled&lt;/name&gt;
+       &lt;value&gt;true&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.security.authenticator.manager&lt;/name&gt;
+       &lt;value&gt;org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator&lt;/value&gt;
+     &lt;/property&gt;       
+
+     &lt;property&gt;
+       &lt;name&gt;hive.security.authorization.manager&lt;/name&gt;   
+       &lt;value&gt;org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.server2.enable.doAs&lt;/name&gt;
+       &lt;value&gt;false&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.users.in.admin.role&lt;/name&gt;
+       &lt;value&gt;user&lt;/value&gt;
+     &lt;/property&gt;
+
+     &lt;property&gt;
+       &lt;name&gt;hive.metastore.execute.setugi&lt;/name&gt;
+       &lt;value&gt;false&lt;/value&gt;
+     &lt;/property&gt;    
+    &lt;/configuration&gt;
 </code></pre></div>
 <h2 id="step-3:-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
 
-<p>Modify the Hive storage plugin instance in the Drill Web UI to include specific authorization settings. The Drillbit that you use to access the Web UI must be running. </p>
-
-<div class="admonition note">
-  <p class="first admonition-title">Note</p>
-  <p class="last">The metastore host port for MapR is typically 9083.  </p>
-</div>  
+<p>Modify the Hive storage plugin configuration in the Drill Web UI to include specific authorization settings. The Drillbit that you use to access the Web UI must be running.  </p>
 
 <p>Complete the following steps to modify the Hive storage plugin:  </p>
 
 <ol>
 <li> Navigate to <code>http://&lt;drillbit_hostname&gt;:8047</code>, and select the <strong>Storage tab</strong>.<br></li>
-<li> Click <strong>Update</strong> next to the hive instance.<br></li>
-<li> In the configuration window, add the configuration settings for the authorization type. If you are running Drill and Hive in a secure MapR cluster, do not include the line <code>&quot;hive.metastore.sasl.enabled&quot; : &quot;false&quot;</code>.<br></li>
+<li> Click <strong>Update</strong> next to &quot;hive.&quot;<br></li>
+<li> In the configuration window, add the configuration properties for the authorization type.</li>
 </ol>
 
 <ul>
-<li><p>For storage based authorization, add the following settings:  </p>
+<li><p>For storage based authorization, add the following properties:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">      {
        type:&quot;hive&quot;,
        enabled: true,
@@ -1222,28 +1234,19 @@
        }
       }  
 </code></pre></div></li>
-<li><p>For SQL standard based authorization, add the following settings:  </p>
+<li><p>For SQL standard based authorization, add the following properties:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">      {
        type:&quot;hive&quot;,
        enabled: true,
        configProps : {
-         &quot;hive.metastore.uris&quot;
-      : &quot;thrift://&lt;metastore_host&gt;:&lt;port&gt;&quot;,
-         &quot;fs.default.name&quot;
-      : &quot;hdfs://&lt;host&gt;:&lt;port&gt;/&quot;,
-         &quot;hive.security.authorization.enabled&quot;
-      : &quot;true&quot;,
-         &quot;hive.security.authenticator.manager&quot;
-      : &quot;org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator&quot;,
-         &quot;hive.security.authorization.manager&quot;
-      :
-      &quot;org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory&quot;,
-         &quot;hive.metastore.sasl.enabled&quot;
-      : &quot;false&quot;,
-         &quot;hive.server2.enable.doAs&quot;
-      : &quot;false&quot;,
-         &quot;hive.metastore.execute.setugi&quot;
-      : &quot;false&quot;
+         &quot;hive.metastore.uris&quot; : &quot;thrift://&lt;metastore_host&gt;:9083&quot;,
+         &quot;fs.default.name&quot; : &quot;hdfs://&lt;host&gt;:&lt;port&gt;/&quot;,
+         &quot;hive.security.authorization.enabled&quot; : &quot;true&quot;,
+         &quot;hive.security.authenticator.manager&quot; : &quot;org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator&quot;,
+         &quot;hive.security.authorization.manager&quot; : &quot;org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory&quot;,
+         &quot;hive.metastore.sasl.enabled&quot; : &quot;false&quot;,
+         &quot;hive.server2.enable.doAs&quot; : &quot;false&quot;,
+         &quot;hive.metastore.execute.setugi&quot; : &quot;false&quot;
        }
       }
 </code></pre></div></li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/connect-a-data-source-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/connect-a-data-source-introduction/index.html b/docs/connect-a-data-source-introduction/index.html
index 50a5c5d..e6ab1eb 100644
--- a/docs/connect-a-data-source-introduction/index.html
+++ b/docs/connect-a-data-source-introduction/index.html
@@ -1001,14 +1001,9 @@
 
     <div class="int_text" align="left">
       
-        <p>A storage plugin provides the following information to Drill:</p>
+        <p>A storage plugin is a software module for connecting Drill to data sources. A storage plugin typically optimizes execution of Drill queries, provides the location of the data, and configures the workspace and file formats for reading data. Several storage plugins are installed with Drill that you can configure to suit your environment. Through the storage plugin, Drill connects to a data source, such as a database, a file on a local or distributed file system, or a Hive metastore. </p>
 
-<ul>
-<li>Interfaces that Drill can use to read from and write to data sources.<br></li>
-<li>A set of storage plugin optimization rules that assist with efficient and faster execution of Drill queries, such as pushdowns, statistics, and partition awareness.<br></li>
-</ul>
-
-<p>Through the storage plugin, Drill connects to a data source, such as a database, a file on a local or distributed file system, or a Hive metastore. When you execute a query, Drill gets the plugin name in one of several ways:</p>
+<p>You can modify the default configuration of a storage plugin X and give the new version a unique name Y. This document refers to Y as a different storage plugin, although it is actually just a reconfiguration of the original interface. When you execute a query, Drill gets the storage plugin name in one of several ways:</p>
 
 <ul>
 <li>The FROM clause of the query can identify the plugin to use.</li>
@@ -1016,25 +1011,14 @@
 <li>You can specify the storage plugin when starting Drill.</li>
 </ul>
 
-<p>In addition to providing a the connection string to the data source, the storage plugin configures the workspace and file formats for reading data, as described in subsequent sections. </p>
-
-<h2 id="storage-plugins-internals">Storage Plugins Internals</h2>
+<h2 id="storage-plugin-internals">Storage Plugin Internals</h2>
 
 <p>The following image represents the storage plugin layer between Drill and a
 data source:</p>
 
 <p><img src="/docs/img/storageplugin.png" alt="drill query flow"></p>
 
-<p>A storage plugin provides the following information to Drill:</p>
-
-<ul>
-<li>Metadata available in the underlying data source</li>
-<li>Location of data</li>
-<li>Interfaces that Drill can use to read from and write to data sources</li>
-<li>A set of storage plugin optimization rules that assist with efficient and faster execution of Drill queries, such as pushdowns, statistics, and partition awareness</li>
-</ul>
-
-<p>A storage plugin performs scanner and writer functions and informs the execution engine of any native capabilities, such
+<p>In addition to the previously mentioned functions, a storage plugin performs scanner and writer functions and informs the execution engine of any native capabilities, such
 as predicate pushdown, joins, and SQL.</p>
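
For reference, a query can name the storage plugin (and workspace) directly in the FROM clause, or the USE command can set it for the session; the file name in this sketch is hypothetical, while the path and workspace reuse the json_files example from the file system plugin docs:

    SELECT * FROM dfs.`/users/max/drill/json/donuts.json` LIMIT 2;

    USE dfs.json_files;
    SELECT * FROM `donuts.json` LIMIT 2;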
 
     

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/date-time-functions-and-arithmetic/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-functions-and-arithmetic/index.html b/docs/date-time-functions-and-arithmetic/index.html
index 3a6f406..45563e9 100644
--- a/docs/date-time-functions-and-arithmetic/index.html
+++ b/docs/date-time-functions-and-arithmetic/index.html
@@ -1544,52 +1544,50 @@ SELECT NOW() FROM sys.version;
 
 <p>Returns UNIX Epoch time, which is the number of seconds elapsed since January 1, 1970.</p>
 
-<p>### UNIX_TIMESTAMP Syntax</p>
-
-<p>UNIX_TIMESTAMP()
-UNIX_TIMESTAMP(string date)
-UNIX_TIMESTAMP(string date, string pattern)</p>
-
+<h3 id="unix_timestamp-syntax">UNIX_TIMESTAMP Syntax</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">UNIX_TIMESTAMP()  
+UNIX_TIMESTAMP(string date)  
+UNIX_TIMESTAMP(string date, string pattern)  
+</code></pre></div>
 <p>These functions perform the following operations, respectively:</p>
 
 <ul>
-<li>Gets current Unix timestamp in seconds if given no arguments. </li>
-<li>Converts the time string in format yyyy-MM-dd HH:mm:ss to a Unix timestamp in seconds using the default timezone and locale.</li>
-<li>Converts the time string with the given pattern to a Unix time stamp in seconds.</li>
+<li>Gets current Unix timestamp in seconds if given no arguments.<br></li>
+<li>Converts the time string in format yyyy-MM-dd HH:mm:ss to a Unix timestamp in seconds using the default timezone and locale.<br></li>
+<li>Converts the time string with the given pattern to a Unix time stamp in seconds.<br></li>
 </ul>
-
-<p>SELECT UNIX_TIMESTAMP FROM sys.version;
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT UNIX_TIMESTAMP FROM sys.version;
 +-------------+
 |   EXPR$0    |
 +-------------+
 | 1435711031  |
 +-------------+
-1 row selected (0.749 seconds)</p>
+1 row selected (0.749 seconds)
 
-<p>SELECT UNIX_TIMESTAMP(&#39;2009-03-20 11:15:55&#39;) FROM sys.version;
+SELECT UNIX_TIMESTAMP(&#39;2009-03-20 11:15:55&#39;) FROM sys.version;
 +-------------+
 |   EXPR$0    |
 +-------------+
 | 1237572955  |
 +-------------+
-1 row selected (1.848 seconds)</p>
+1 row selected (1.848 seconds)
 
-<p>SELECT UNIX_TIMESTAMP(&#39;2009-03-20&#39;, &#39;yyyy-MM-dd&#39;) FROM sys.version;
+SELECT UNIX_TIMESTAMP(&#39;2009-03-20&#39;, &#39;yyyy-MM-dd&#39;) FROM sys.version;
 +-------------+
 |   EXPR$0    |
 +-------------+
 | 1237532400  |
 +-------------+
-1 row selected (0.181 seconds)</p>
+1 row selected (0.181 seconds)
 
-<p>SELECT UNIX_TIMESTAMP(&#39;2015-05-29 08:18:53.0&#39;, &#39;yyyy-MM-dd HH:mm:ss.SSS&#39;) FROM sys.version;
+SELECT UNIX_TIMESTAMP(&#39;2015-05-29 08:18:53.0&#39;, &#39;yyyy-MM-dd HH:mm:ss.SSS&#39;) FROM sys.version;
 +-------------+
 |   EXPR$0    |
 +-------------+
 | 1432912733  |
 +-------------+
-1 row selected (0.171 seconds)</p>
-
+1 row selected (0.171 seconds)
+</code></pre></div>
     
       
         <div class="doc-nav">

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/drill-default-input-format/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-default-input-format/index.html b/docs/drill-default-input-format/index.html
index c88e238..a142bfd 100644
--- a/docs/drill-default-input-format/index.html
+++ b/docs/drill-default-input-format/index.html
@@ -1042,8 +1042,8 @@ steps:</p>
 <ol>
 <li>Navigate to the Drill Web UI at <code>&lt;drill_node_ip_address&gt;:8047</code>. The Drillbit process must be running on the node before you connect to the Drill Web UI.</li>
 <li>Select <strong>Storage</strong> in the toolbar.</li>
-<li>Click <strong>Update</strong> next to the file system for which you want to define a default input format for a workspace.</li>
-<li><p>In the Configuration area, locate the workspace for which you would like to define the default input format, and change the <code>defaultInputFormat</code> attribute to any of the supported file types.</p>
+<li>Click <strong>Update</strong> next to the storage plugin for which you want to define a default input format for a workspace.</li>
+<li><p>In the Configuration area, locate the workspace, and change the <code>defaultInputFormat</code> attribute to any of the supported file types.</p>
 
 <p><strong>Example</strong></p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
@@ -1066,7 +1066,7 @@ steps:</p>
 
 <h2 id="querying-compressed-files">Querying Compressed Files</h2>
 
-<p>You can query compressed GZ files, such as JSON and CSV, as well as uncompressed files. The file extension specified in the <code>formats . . . extensions</code> property of the storage plugin must precede the gz extension in the file name. For example, <code>proddata.json.gz</code> or <code>mydata.csv.gz</code> are valid file names to use in a query, as shown in the example in <a href="/docs/querying-plain-text-files/#query-the-gz-file-directly">&quot;Querying the GZ File Directly&quot;</a>.</p>
+<p>You can query compressed GZ files, such as JSON and CSV, as well as uncompressed files. The file extension specified in the <code>formats . . . extensions</code> property of the storage plugin configuration must precede the gz extension in the file name. For example, <code>proddata.json.gz</code> or <code>mydata.csv.gz</code> are valid file names to use in a query, as shown in the example in <a href="/docs/querying-plain-text-files/#query-the-gz-file-directly">&quot;Querying the GZ File Directly&quot;</a>.</p>
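
For reference, a compressed file is queried exactly like its uncompressed counterpart; the path in this sketch is hypothetical:

    SELECT * FROM dfs.`/tmp/proddata.json.gz` LIMIT 5;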
 
     
       

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/drill-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-introduction/index.html b/docs/drill-introduction/index.html
index f50d6ab..e6ebe9a 100644
--- a/docs/drill-introduction/index.html
+++ b/docs/drill-introduction/index.html
@@ -1014,8 +1014,8 @@ with existing Apache Hive and Apache HBase deployments. </p>
 
 <ul>
 <li><a href="/docs/sql-window-functions">SQL window functions</a></li>
-<li><a href="">Automatic partitioning</a> using the new <a href="/docs/partition-by-clause">PARTITION BY</a> clause in the CTAS command</li>
-<li>[Delegated Hive impersonation]((/docs/configuring-user-impersonation-with-hive-authorization/)</li>
+<li><a href="">Partitioning data</a> using the new <a href="/docs/partition-by-clause">PARTITION BY</a> clause in the CTAS command</li>
+<li><a href="/docs/configuring-user-impersonation-with-hive-authorization/">Delegated Hive impersonation</a></li>
 <li>Support for UNION and UNION ALL and better optimized plans that include UNION.</li>
 </ul>
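
For reference, a CTAS statement using the new clause might look like the following sketch; the target table name is hypothetical and the source is the employee.json sample file used elsewhere in these docs:

    CREATE TABLE dfs.tmp.employees_by_store
    PARTITION BY (store_id)
    AS SELECT store_id, full_name, salary
    FROM cp.`employee.json`;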
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/file-system-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/file-system-storage-plugin/index.html b/docs/file-system-storage-plugin/index.html
index f6431ad..498da94 100644
--- a/docs/file-system-storage-plugin/index.html
+++ b/docs/file-system-storage-plugin/index.html
@@ -1010,27 +1010,14 @@ system on your machine by default. </p>
 
 <h2 id="connecting-drill-to-a-file-system">Connecting Drill to a File System</h2>
 
-<p>In a Drill cluster, you typically do not query the local file system, but instead place files on the distributed file system. You configure the connection property of the storage plugin workspace to connect Drill to a distributed file system. For example, the following connection properties connect Drill to an HDFS or MapR-FS cluster from a client:</p>
+<p>In a Drill cluster, you typically do not query the local file system, but instead place files on the distributed file system. You configure the connection property of the storage plugin workspace to connect Drill to a distributed file system. For example, the following connection properties connect Drill to an HDFS cluster from a client:</p>
 
-<ul>
-<li>HDFS<br>
-<code>&quot;connection&quot;: &quot;hdfs://&lt;IP Address&gt;:&lt;Port&gt;/&quot;</code><br></li>
-<li>MapR-FS Remote Cluster<br>
-<code>&quot;connection&quot;: &quot;maprfs://&lt;IP Address&gt;/&quot;</code><br></li>
-</ul>
-
-<p>To query a file on HDFS from a node on the cluster, you can simply change the connection to from <code>file:///</code> to <code>hdfs:///</code> in the <code>dfs</code> storage plugin.</p>
+<p><code>&quot;connection&quot;: &quot;hdfs://&lt;IP Address&gt;:&lt;Port&gt;/&quot;</code>   </p>
 
-<p>To register a local or a distributed file system with Apache Drill, complete
-the following steps:</p>
+<p>To query a file on HDFS from a node on the cluster, you can simply change the connection from <code>file:///</code> to <code>hdfs://</code> in the <code>dfs</code> storage plugin.</p>
 
-<ol>
-<li>Navigate to <a href="http://localhost:8047">http://localhost:8047</a>, and select the <strong>Storage</strong> tab.</li>
-<li>In the New Storage Plugin window, enter a unique name and then click <strong>Create</strong>.</li>
-<li><p>In the Configuration window, provide the following configuration information for the type of file system that you are configuring as a data source.</p>
-
-<ul>
-<li><p>Local file system example:</p>
+<p>To change the <code>dfs</code> storage plugin configuration to point to a local or a distributed file system, use <code>connection</code> attributes as shown in the following examples.</p>
+<p>Local file system example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   &quot;type&quot;: &quot;file&quot;,
   &quot;enabled&quot;: true,
@@ -1048,9 +1035,11 @@ the following steps:</p>
        }
      }
   }
-</code></pre></div></li>
+</code></pre></div>
+<ul>
 <li><p>Distributed file system example:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
+
+<p>{
   &quot;type&quot; : &quot;file&quot;,
   &quot;enabled&quot; : true,
   &quot;connection&quot; : &quot;hdfs://10.10.30.156:8020/&quot;,
@@ -1066,20 +1055,14 @@ the following steps:</p>
       &quot;type&quot; : &quot;json&quot;
     }
   }
-}
-</code></pre></div></li>
+}</p></li>
 </ul>
 
 <p>To connect to a Hadoop file system, you include the IP address of the
-name node and the port number.</p></li>
-<li><p>Click <strong>Enable</strong>.</p></li>
-</ol>
+name node and the port number.</p>
 
-<p>After you have configured a storage plugin instance for the file system, you
-can issue Drill queries against it.</p>
-
-<p>The following example shows an instance of a file type storage plugin with a
-workspace named <code>json_files</code> configured to point Drill to the
+<p>The following example shows a file type storage plugin configuration with a
+workspace named <code>json_files</code>. The configuration points Drill to the
 <code>/users/max/drill/json/</code> directory in the local file system <code>(dfs)</code>:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   &quot;type&quot; : &quot;file&quot;,
@@ -1093,10 +1076,7 @@ workspace named <code>json_files</code> configured to point Drill to the
    } 
 },
 </code></pre></div>
-<div class="admonition note">
-  <p class="first admonition-title">Note</p>
-  <p class="last">The `connection` parameter in the configuration above is "`file:///`", connecting Drill to the local file system (`dfs`).  </p>
-</div>
+<p>The <code>connection</code> parameter in this configuration is &quot;<code>file:///</code>&quot;, connecting Drill to the local file system (<code>dfs</code>).</p>
 
 <p>To query a file in the example <code>json_files</code> workspace, you can issue the <code>USE</code>
 command to tell Drill to use the <code>json_files</code> workspace configured in the <code>dfs</code>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/hbase-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/hbase-storage-plugin/index.html b/docs/hbase-storage-plugin/index.html
index 94bbc1e..439504c 100644
--- a/docs/hbase-storage-plugin/index.html
+++ b/docs/hbase-storage-plugin/index.html
@@ -1003,37 +1003,22 @@
 
     <div class="int_text" align="left">
       
-        <p>Register a storage plugin instance and specify a ZooKeeper quorum to connect
-Drill to an HBase data source. When you register a storage plugin instance for
-an HBase data source, provide a unique name for the instance, and identify the
-type as “hbase” in the Drill Web UI.</p>
+        <p>Specify a ZooKeeper quorum to connect
+Drill to an HBase data source. Drill supports HBase version 0.98.</p>
 
-<p>Drill supports HBase version 0.98.</p>
-
-<p>To register HBase with Drill, complete the following steps:</p>
-
-<ol>
-<li>Navigate to <a href="http://localhost:8047/">http://localhost:8047</a>, and select the <strong>Storage</strong> tab</li>
-<li>In the disabled storage plugins section, click <strong>Update</strong> next to the <code>hbase</code> instance.</li>
-<li><p>In the Configuration window, specify the ZooKeeper quorum and port. </p>
-
-<p><strong>Example</strong>  </p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    {
-      &quot;type&quot;: &quot;hbase&quot;,
-      &quot;config&quot;: {
-        &quot;hbase.zookeeper.quorum&quot;: &quot;10.10.100.62,10.10.10.52,10.10.10.53&quot;,
-        &quot;hbase.zookeeper.property.clientPort&quot;: &quot;2181&quot;
-      },
-      &quot;size.calculator.enabled&quot;: false,
-      &quot;enabled&quot;: true
-    }
-</code></pre></div></li>
-<li><p>Click <strong>Enable</strong>.</p></li>
-</ol>
-
-<p>After you configure a storage plugin instance for the HBase, you can
-issue Drill queries against it.</p>
+<p>The HBase storage plugin configuration installed with Drill appears as follows when you navigate to <a href="http://localhost:8047/">http://localhost:8047</a> and select the <strong>Storage</strong> tab.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"> **Example**  
 
+        {
+          &quot;type&quot;: &quot;hbase&quot;,
+          &quot;config&quot;: {
+            &quot;hbase.zookeeper.quorum&quot;: &quot;10.10.100.62,10.10.10.52,10.10.10.53&quot;,
+            &quot;hbase.zookeeper.property.clientPort&quot;: &quot;2181&quot;
+          },
+          &quot;size.calculator.enabled&quot;: false,
+          &quot;enabled&quot;: true
+        }
+</code></pre></div>
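+<p>After you enable the configuration, you can query HBase tables directly from Drill. The following is a minimal sketch, assuming an HBase table named <code>students</code> exists:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- &#39;students&#39; is a hypothetical HBase table
+SELECT CONVERT_FROM(row_key, &#39;UTF8&#39;) AS studentid
+FROM hbase.`students` LIMIT 5;
+</code></pre></div>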
     
       
         <div class="doc-nav">

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/hive-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/hive-storage-plugin/index.html b/docs/hive-storage-plugin/index.html
index 759c07c..d68fbae 100644
--- a/docs/hive-storage-plugin/index.html
+++ b/docs/hive-storage-plugin/index.html
@@ -1003,13 +1003,7 @@
 
     <div class="int_text" align="left">
       
-        <p>You can register a storage plugin instance that connects Drill to a Hive data
-source that has a remote or embedded metastore service. When you register a
-storage plugin instance for a Hive data source, provide a unique name for the
-instance, and identify the type as “<code>hive</code>”. You must also provide the
-metastore connection information.</p>
-
-<p>Drill 1.0 supports Hive 0.13. Drill 1.1 supports Hive 1.0. To access Hive tables
+        <p>Drill 1.0 supports Hive 0.13. Drill 1.1 supports Hive 1.0. To access Hive tables
 using custom SerDes or InputFormat/OutputFormat, all nodes running Drillbits
 must have the SerDes or InputFormat/OutputFormat <code>JAR</code> files in the 
 <code>&lt;drill_installation_directory&gt;/jars/3rdparty</code> folder.</p>
@@ -1027,7 +1021,7 @@ in the Drill Web UI to configure a connection to Drill.</p>
   <p class="last">Verify that the Hive metastore service is running before you register the Hive metastore.  </p>
 </div>  
 
-<p>To register a remote Hive metastore with Drill, complete the following steps:</p>
+<p>To configure a remote Hive metastore, complete the following steps:</p>
 
 <ol>
 <li>Issue the following command to start the Hive metastore service on the system specified in the <code>hive.metastore.uris</code>:
@@ -1073,10 +1067,10 @@ in the Drill Web UI to configure a connection to Drill.</p>
 
 <p>In this configuration, the Hive metastore is embedded within the Drill process. Configure an embedded metastore only in a cluster that runs a single Drillbit and only for testing purposes. Do not embed the Hive metastore in production systems.</p>
 
-<p>Provide the metastore database configuration settings in the Drill Web UI. Before you register Hive, verify that the driver you use to connect to the Hive metastore is in the Drill classpath located in <code>/&lt;drill installation directory&gt;/lib/.</code> If the driver is not there, copy the driver to <code>/&lt;drill
+<p>Provide the metastore database configuration settings in the Drill Web UI. Before you configure an embedded Hive metastore, verify that the driver you use to connect to the Hive metastore is in the Drill classpath located in <code>/&lt;drill installation directory&gt;/lib/.</code> If the driver is not there, copy the driver to <code>/&lt;drill
 installation directory&gt;/lib</code> on the Drill node. For more information about storage types and configurations, refer to <a href="https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin">&quot;Hive Metastore Administration&quot;</a>.</p>
 
-<p>To register an embedded Hive metastore with Drill, complete the following
+<p>To configure an embedded Hive metastore, complete the following
 steps:</p>
 
 <ol>
@@ -1097,7 +1091,7 @@ steps:</p>
     }
   }
 </code></pre></div></li>
-<li><p>Change the <code>&quot;fs.default.name&quot;:</code> attribute to specify the default location of files. The value needs to be a URI that is available and capable of handling filesystem requests. For example, change the local file system URI <code>&quot;file:///&quot;</code> to the HDFS URI: <code>hdfs://</code>, or to the path on HDFS with a namenode: <code>hdfs://&lt;authority&gt;:&lt;port&gt;</code></p></li>
+<li><p>Change the <code>&quot;fs.default.name&quot;:</code> attribute to specify the default location of files. The value needs to be a URI that is available and capable of handling file system requests. For example, change the local file system URI <code>&quot;file:///&quot;</code> to the HDFS URI: <code>hdfs://</code>, or to the path on HDFS with a namenode: <code>hdfs://&lt;authority&gt;:&lt;port&gt;</code>. A sketch of this change follows these steps.</p></li>
 <li><p>Click <strong>Enable</strong>.</p></li>
 </ol>
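+<p>For example, the relevant portion of <code>configProps</code> might look like the following sketch; the namenode host and port shown here are hypothetical:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">&quot;configProps&quot;: {
+  &quot;hive.metastore.warehouse.dir&quot;: &quot;/tmp/drill_hive_wh&quot;,
+  &quot;fs.default.name&quot;: &quot;hdfs://namenode-host:8020&quot;
+}
+</code></pre></div>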
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/json-data-model/index.html
----------------------------------------------------------------------
diff --git a/docs/json-data-model/index.html b/docs/json-data-model/index.html
index 214ff5b..666c38e 100644
--- a/docs/json-data-model/index.html
+++ b/docs/json-data-model/index.html
@@ -1137,10 +1137,13 @@ SELECT my column from dfs.`&lt;path_file_name&gt;`;
 
 <h2 id="analyzing-json">Analyzing JSON</h2>
 
-<p>Generally, you query JSON files using the following syntax, which includes a table alias. The alias is typically required for querying complex data:</p>
+<p>Generally, you query JSON files using the following syntax, which includes a table alias. The alias is sometimes required for querying complex data. Because y.z is ambiguous (y could be either a column or a table),
+Drill currently requires an explicit table prefix to reference a field
+inside another field (t.y.z). The prefix is not required for y, y[z], or
+y[z].x because these references are not ambiguous. Observe the following guidelines:</p>
 
 <ul>
-<li><p>Dot notation to drill down into a JSON map.</p>
+<li><p>Use dot notation to drill down into a JSON map.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT t.level1.level2. . . . leveln FROM &lt;storage plugin location&gt;`myfile.json` t
 </code></pre></div></li>
 <li><p>Use square brackets, array-style notation to drill down into a JSON array.</p>
@@ -1156,7 +1159,7 @@ SELECT my column from dfs.`&lt;path_file_name&gt;`;
 
 <p>Drill returns null when a document does not have the specified map or level.</p>
 
-<p>Using the following techniques, you can query complex, nested JSON:</p>
+<p>Use the following techniques to query complex, nested JSON:</p>
 
 <ul>
 <li>Flatten nested data</li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/mongodb-plugin-for-apache-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/mongodb-plugin-for-apache-drill/index.html b/docs/mongodb-plugin-for-apache-drill/index.html
index 61810fa..5884d80 100644
--- a/docs/mongodb-plugin-for-apache-drill/index.html
+++ b/docs/mongodb-plugin-for-apache-drill/index.html
@@ -1027,7 +1027,7 @@ provided by MongoDB that you download in the following steps:</p>
 
 <h2 id="configuring-mongodb">Configuring MongoDB</h2>
 
-<p>Start Drill and configure the MongoDB storage plugin instance in the Drill Web
+<p>Start Drill and configure the MongoDB storage plugin in the Drill Web
 UI to connect to Drill. Drill must be running in order to access the Web UI.</p>
 
 <p>Complete the following steps to configure MongoDB as a data source for Drill:</p>
@@ -1045,7 +1045,7 @@ UI to connect to Drill. Drill must be running in order to access the Web UI.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   &quot;type&quot;: &quot;mongo&quot;,
   &quot;connection&quot;: &quot;mongodb://localhost:27017/&quot;,
-  &quot;enabled&quot;: true
+  &quot;enabled&quot;: false
 }
 </code></pre></div>
 <p><div class="admonition note">

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/partition-by-clause/index.html
----------------------------------------------------------------------
diff --git a/docs/partition-by-clause/index.html b/docs/partition-by-clause/index.html
index d2bfee8..684d787 100644
--- a/docs/partition-by-clause/index.html
+++ b/docs/partition-by-clause/index.html
@@ -1003,14 +1003,14 @@
 
     <div class="int_text" align="left">
       
-        <p>The PARTITION BY clause in the CTAS command automatically partitions data, which Drill <a href="/docs/partition-pruning/">prunes</a> to improve performance when you query the data. (Drill 1.1.0)</p>
+        <p>The PARTITION BY clause in the CTAS command partitions data, which Drill <a href="/docs/partition-pruning/">prunes</a> to improve performance when you query the data. (Drill 1.1.0)</p>
 
 <h2 id="syntax">Syntax</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text"> [ PARTITION_BY ( column_name[, . . .] ) ]
 </code></pre></div>
 <p>The PARTITION BY clause partitions the data by the first column_name, and then subpartitions the data by the next column_name, if there is one, and so on. </p>
 
-<p>Only the Parquet storage format is supported for automatic partitioning. Before using CTAS, <a href="/docs/create-table-as-ctas/#setting-the-storage-format">set the <code>store.format</code> option</a> for the table to Parquet.</p>
+<p>Only the Parquet storage format is supported for partitioning. Before using CTAS, <a href="/docs/create-table-as-ctas/#setting-the-storage-format">set the <code>store.format</code> option</a> for the table to Parquet.</p>
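+<p>For example, you can set the storage format for the current session before running the CTAS statement:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `store.format` = &#39;parquet&#39;;
+</code></pre></div>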
 
 <p>When the base table in the SELECT statement is schema-less, include columns in the PARTITION BY clause in the table&#39;s column list, or use a select all (SELECT *) statement:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">CREATE TABLE dest_name [ (column, . . .) ]

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/partition-pruning/index.html
----------------------------------------------------------------------
diff --git a/docs/partition-pruning/index.html b/docs/partition-pruning/index.html
index c114c10..718ef11 100644
--- a/docs/partition-pruning/index.html
+++ b/docs/partition-pruning/index.html
@@ -1007,19 +1007,17 @@
 
 <h2 id="how-to-partition-data">How to Partition Data</h2>
 
-<p>You can partition data manually or automatically to take advantage of partition pruning in Drill. In Drill 1.0 and earlier, you need to organize your data in such a way to take advantage of partition pruning. In Drill 1.1.0 and later, if the data source is Parquet, you can partition data automatically using CTAS--no data organization tasks required. </p>
+<p>In Drill 1.1.0 and later, if the data source is Parquet, no data organization tasks are required to take advantage of partition pruning. Write Parquet data using the <a href="/docs/partition-by-clause/">PARTITION BY</a> clause in the CTAS statement. </p>
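+<p>For example, the following sketch partitions Parquet output by year; the table, column, and path names are hypothetical:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- table, columns, and file path are hypothetical
+CREATE TABLE dfs.tmp.sales_by_yr PARTITION BY (yr) AS
+SELECT yr, qtr, amount FROM dfs.`/data/sales.json`;
+</code></pre></div>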
 
-<h2 id="automatic-partitioning">Automatic Partitioning</h2>
-
-<p>Automatic partitioning in Drill 1.1 and later occurs when you write Parquet data using the <a href="/docs/partition-by-clause/">PARTITION BY</a> clause in the CTAS statement. Unlike manual partitioning, no view is required, nor is it necessary to use the <a href="/docs/querying-directories">dir* variables</a>. The Parquet writer first sorts by the partition keys, and then creates a new file when it encounters a new value for the partition columns.</p>
-
-<p>Automatic partitioning creates separate files, but not separate directories, for different partitions. Each file contains exactly one partition value, but there can be multiple files for the same partition value.</p>
+<p>The Parquet writer first sorts data by the partition keys, and then creates a new file when it encounters a new value for the partition columns. During partitioning, Drill creates separate files, but not separate directories, for different partitions. Each file contains exactly one partition value, but there can be multiple files for the same partition value. </p>
 
 <p>Partition pruning uses the Parquet column statistics to determine which columns to use to prune. </p>
 
-<h2 id="manual-partitioning">Manual Partitioning</h2>
+<p>After you use the Drill 1.1 PARTITION BY clause in a CTAS statement, unlike Drill 1.0 partitioning, no subsequent view query is required, nor is it necessary to use the <a href="/docs/querying-directories">dir* variables</a>. </p>
+
+<h2 id="drill-1.0-partitioning">Drill 1.0 Partitioning</h2>
 
-<p>Manual partitioning is directory-based. You perform the following steps to manually partition data.   </p>
+<p>Complete the following steps to partition data in Drill 1.0:   </p>
 
 <ol>
 <li>Devise a logical way to store the data in a hierarchy of directories. </li>
@@ -1029,7 +1027,7 @@
 
 <p>After partitioning the data, you need to create a view of the partitioned data to query the data. You can use the <a href="/docs/querying-directories">dir* variables</a> in queries to refer to subdirectories in your workspace path.</p>
 
-<h3 id="manual-partitioning-example">Manual Partitioning Example</h3>
+<h3 id="drill-1.0-partitioning-example">Drill 1.0 Partitioning Example</h3>
 
 <p>Suppose you have text files containing several years of log data. To partition the data by year and quarter, create the following hierarchy of directories:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   …/logs/1994/Q1  

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/plugin-configuration-basics/index.html
----------------------------------------------------------------------
diff --git a/docs/plugin-configuration-basics/index.html b/docs/plugin-configuration-basics/index.html
index 632b871..7c15bf9 100644
--- a/docs/plugin-configuration-basics/index.html
+++ b/docs/plugin-configuration-basics/index.html
@@ -1003,34 +1003,32 @@
 
     <div class="int_text" align="left">
       
-        <p>When you add or update storage plugin instances on one Drill node in a Drill
-cluster, Drill broadcasts the information to other Drill nodes 
+        <p>When you add or update storage plugin instances on one Drill node in a 
+cluster having multiple installations of Drill, Drill broadcasts the information to other Drill nodes 
 to synchronize the storage plugin configurations. You do not need to
 restart any of the Drillbits when you add or update a storage plugin instance.</p>
 
-<p>Use the Drill Web UI to update or add a new storage plugin. Launch a web browser, go to: <code>http://&lt;IP address or host name&gt;:8047</code>, and then go to the Storage tab. </p>
+<p>Use the Drill Web UI to update or add a new storage plugin configuration. Launch a web browser, go to: <code>http://&lt;IP address or host name&gt;:8047</code>, and then go to the Storage tab. </p>
 
-<p>To create and configure a new storage plugin:</p>
+<p>To create and name a new configuration:</p>
 
 <ol>
-<li>Enter a storage name in New Storage Plugin.
-Each storage plugin registered with Drill must have a distinct
+<li>Enter a name in <strong>New Storage Plugin</strong>.
+Each configuration registered with Drill must have a distinct
 name. Names are case-sensitive.</li>
-<li>Click Create.<br></li>
-<li>In Configuration, configure attributes of the storage plugin, if applicable, using JSON formatting. The Storage Plugin Attributes table in the next section describes attributes typically reconfigured by users. </li>
-<li>Click Create.</li>
+<li>Click <strong>Create</strong>.<br></li>
+<li>In <strong>Configuration</strong>, configure the storage plugin attributes using JSON formatting; where possible, start from a copy of an existing configuration. The Storage Plugin Attributes table in the next section describes attributes typically reconfigured by users, and a minimal sketch of a configuration follows these steps. </li>
+<li>Click <strong>Create</strong>.</li>
 </ol>
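+<p>The following minimal sketch shows the kind of JSON you might paste into the Configuration field; the workspace name and path are hypothetical:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">{
+  &quot;type&quot;: &quot;file&quot;,
+  &quot;enabled&quot;: true,
+  &quot;connection&quot;: &quot;file:///&quot;,
+  &quot;workspaces&quot;: {
+    &quot;json_files&quot;: {
+      &quot;location&quot;: &quot;/users/max/drill/json&quot;,
+      &quot;writable&quot;: false,
+      &quot;defaultInputFormat&quot;: &quot;json&quot;
+    }
+  },
+  &quot;formats&quot;: {
+    &quot;json&quot;: {
+      &quot;type&quot;: &quot;json&quot;
+    }
+  }
+}
+</code></pre></div>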
 
-<p>Click Update to reconfigure an existing, enabled storage plugin.</p>
-
 <h2 id="storage-plugin-attributes">Storage Plugin Attributes</h2>
 
-<p>The following graphic shows key attributes of a typical dfs storage plugin:<br>
+<p>The following graphic shows key attributes of a typical <code>dfs</code>-based storage plugin configuration:<br>
 <img src="/docs/img/connect-plugin.png" alt="dfs plugin"></p>
 
 <h2 id="list-of-attributes-and-definitions">List of Attributes and Definitions</h2>
 
-<p>The following table describes the attributes you configure for storage plugins. 
+<p>The following table describes the attributes you configure for storage plugins installed with Drill. 
 <table>
   <tr>
     <th>Attribute</th>
@@ -1102,13 +1100,7 @@ name. Names are case-sensitive.</li>
     <td>&quot;formats&quot; . . . &quot;delimiter&quot;</td>
     <td>&quot;\t&quot;<br>&quot;,&quot;</td>
     <td>format-dependent</td>
-    <td>One or more characters that separate records in a delimited text file, such as CSV. Use a 4-digit hex ascii code syntax \uXXXX for a non-printable delimiter. </td>
-  </tr>
-  <tr>
-    <td>&quot;formats&quot; . . . &quot;fieldDelimiter&quot;</td>
-    <td>&quot;,&quot;</td>
-    <td>no</td>
-    <td>A single character that separates each value in a column of a delimited text file.</td>
+    <td>One or more characters that serve as a record separator in a delimited text file, such as CSV. Use a 4-digit hex ASCII code syntax \uXXXX for a non-printable delimiter. </td>
   </tr>
   <tr>
     <td>&quot;formats&quot; . . . &quot;quote&quot;</td>
@@ -1120,7 +1112,7 @@ name. Names are case-sensitive.</li>
     <td>&quot;formats&quot; . . . &quot;escape&quot;</td>
     <td>&quot;`&quot;</td>
     <td>no</td>
-    <td>A single character that escapes the quote character.</td>
+    <td>A single character that escapes a quotation mark inside a value.</td>
   </tr>
   <tr>
     <td>&quot;formats&quot; . . . &quot;comment&quot;</td>
@@ -1132,7 +1124,7 @@ name. Names are case-sensitive.</li>
     <td>&quot;formats&quot; . . . &quot;skipFirstLine&quot;</td>
     <td>true</td>
     <td>no</td>
-    <td>To include or omits the header when reading a delimited text file.
+    <td>To include or omit the header when reading a delimited text file. Set to true to avoid reading headers as data.
     </td>
   </tr>
 </table></p>
@@ -1141,7 +1133,7 @@ name. Names are case-sensitive.</li>
 
 <h2 id="using-the-formats">Using the Formats</h2>
 
-<p>You can use the following attributes when the <code>sys.options</code> property setting <code>exec.storage.enable_new_text_reader</code> is true (the default):</p>
+<p>You can use the following attributes in the <code>formats</code> area of the storage plugin configuration. When setting these attributes, verify that the <code>sys.options</code> property <code>exec.storage.enable_new_text_reader</code> is set to true (the default), as shown in the example after this list:</p>
 
 <ul>
 <li>comment<br></li>
@@ -1151,29 +1143,11 @@ name. Names are case-sensitive.</li>
 <li>skipFirstLine</li>
 </ul>
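+<p>For example, from the Drill shell you can check the option and, if necessary, set it; this is a sketch of the two statements:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM sys.options WHERE name = &#39;exec.storage.enable_new_text_reader&#39;;
+ALTER SYSTEM SET `exec.storage.enable_new_text_reader` = true;
+</code></pre></div>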
 
-<p>The &quot;formats&quot; apply to all workspaces defined in a storage plugin. A typical use case defines separate storage plugins for different root directories to query the files stored below the directory. An alternative use case defines multiple formats within the same storage plugin and names target files using different extensions to match the formats.</p>
+<p>For more information and examples of using formats for text files, see <a href="/docs/text-files-csv-tsv-psv/">&quot;Text Files: CSV, TSV, PSV&quot;</a>.</p>
 
-<p>The following example of a storage plugin for reading CSV files with the new text reader includes two formats for reading files having either a <code>csv</code> or <code>csv2</code> extension. The text reader does include the first line of column names in the queries of <code>.csv</code> files but does not include it in queries of <code>.csv2</code> files. </p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">&quot;csv&quot;: {
-  &quot;type&quot;: &quot;text&quot;,
-  &quot;extensions&quot;: [
-    &quot;csv&quot;
-  ],  
-  &quot;delimiter&quot;: &quot;,&quot; 
-},  
-&quot;csv_with_header&quot;: {
-  &quot;type&quot;: &quot;text&quot;,
-  &quot;extensions&quot;: [
-    &quot;csv2&quot;
-  ],  
-  &quot;comment&quot;: &quot;&amp;&quot;,
-  &quot;skipFirstLine&quot;: true,
-  &quot;delimiter&quot;: &quot;,&quot; 
-},  
-</code></pre></div>
 <h2 id="using-other-attributes">Using Other Attributes</h2>
 
-<p>The configuration of other attributes, such as <code>size.calculator.enabled</code> in the hbase plugin and <code>configProps</code> in the hive plugin, are implementation-dependent and beyond the scope of this document.</p>
+<p>The configuration of other attributes, such as <code>size.calculator.enabled</code> in the <code>hbase</code> plugin and <code>configProps</code> in the <code>hive</code> plugin, are implementation-dependent and beyond the scope of this document.</p>
 
 <h2 id="case-sensitive-names">Case-sensitive Names</h2>
 
@@ -1184,16 +1158,16 @@ name. Names are case-sensitive.</li>
 
 <h2 id="storage-plugin-rest-api">Storage Plugin REST API</h2>
 
-<p>Drill provides a REST API that you can use to create a storage plugin. Use an HTTP POST and pass two properties:</p>
+<p>Drill provides a REST API that you can use to create a storage plugin configuration. Use an HTTP POST and pass two properties:</p>
 
 <ul>
 <li><p>name<br>
-The plugin name. </p></li>
+The storage plugin configuration name. </p></li>
 <li><p>config<br>
-The storage plugin definition as you would enter it in the Web UI.</p></li>
+The attribute settings as you would enter them in the Web UI.</p></li>
 </ul>
 
-<p>For example, this command creates a plugin named myplugin for reading files of an unknown type located on the root of the file system:</p>
+<p>For example, this command creates a storage plugin named myplugin for reading files of an unknown type located on the root of the file system:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">curl -X POST -/json&quot; -d &#39;{&quot;name&quot;:&quot;myplugin&quot;, &quot;config&quot;: {&quot;type&quot;: &quot;file&quot;, &quot;enabled&quot;: false, &quot;connection&quot;: &quot;file:///&quot;, &quot;workspaces&quot;: { &quot;root&quot;: { &quot;location&quot;: &quot;/&quot;, &quot;writable&quot;: false, &quot;defaultInputFormat&quot;: null}}, &quot;formats&quot;: null}}&#39; http://localhost:8047/storage/myplugin.json
 </code></pre></div>
 <h2 id="bootstrapping-a-storage-plugin">Bootstrapping a Storage Plugin</h2>
@@ -1203,9 +1177,9 @@ The storage plugin definition as you would enter it in the Web UI.</p></li>
 <p>Bootstrapping a storage plugin works only when the first Drillbit in the cluster starts up. The configuration is
 stored in ZooKeeper, preventing Drill from picking up the bootstrap-storage-plugins.json again.</p>
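+<p>A <code>bootstrap-storage-plugins.json</code> file wraps one or more configurations in a top-level <code>storage</code> object. The following is a minimal sketch; the plugin name and location are hypothetical:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">{
+  &quot;storage&quot;: {
+    &quot;myplugin&quot;: {
+      &quot;type&quot;: &quot;file&quot;,
+      &quot;enabled&quot;: true,
+      &quot;connection&quot;: &quot;file:///&quot;,
+      &quot;workspaces&quot;: {
+        &quot;root&quot;: {
+          &quot;location&quot;: &quot;/user/max/data&quot;,
+          &quot;writable&quot;: false,
+          &quot;defaultInputFormat&quot;: null
+        }
+      },
+      &quot;formats&quot;: null
+    }
+  }
+}
+</code></pre></div>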
 
-<p>After cluster startup, you have to use the REST API or Drill Web UI to add a storage plugin. Alternatively, you
+<p>After cluster startup, you have to use the REST API or Drill Web UI to add a storage plugin configuration. Alternatively, you
 can modify the entry in ZooKeeper by uploading the JSON file for
-that plugin to the /drill directory of the zookeeper installation, or just delete the /drill directory if you do not have configuration properties to preserve.</p>
+that plugin to the /drill directory of the ZooKeeper installation, or by just deleting the /drill directory if you do not have configuration properties to preserve.</p>
 
 <p>If you configure an HBase storage plugin using the bootstrap-storage-plugins.json file and HBase is not installed, you might experience a delay when executing queries. Configure the <a href="http://hbase.apache.org/book.html#config.files">HBase client timeout</a> and retry settings in the config block of the HBase plugin configuration.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/storage-plugin-registration/index.html
----------------------------------------------------------------------
diff --git a/docs/storage-plugin-registration/index.html b/docs/storage-plugin-registration/index.html
index 5008516..f0fcf47 100644
--- a/docs/storage-plugin-registration/index.html
+++ b/docs/storage-plugin-registration/index.html
@@ -1001,29 +1001,29 @@
 
     <div class="int_text" align="left">
       
-        <p>You connect Drill to a file system, Hive, HBase, or other data source using storage plugins. Drill includes a number of storage plugins in the installation. On the Storage tab of the Web UI, you can view, create, reconfigure, and register a storage plugin. To open the Storage tab, go to <code>http://&lt;IP address&gt;:8047/storage</code>, where IP address is any one of the installed drill bits:</p>
+        <p>You connect Drill to a file system, Hive, HBase, or other data source through a storage plugin. On the Storage tab of the Web UI, you can view and reconfigure a storage plugin. You can save a reconfigured version under a new name, thereby registering the new version. To open the Storage tab, go to <code>http://&lt;IP address&gt;:8047/storage</code>, where IP address is the address of any one of the Drillbits in a distributed system or <code>localhost</code> in an embedded system:</p>
 
 <p><img src="/docs/img/plugin-default.png" alt="drill-installed plugins"></p>
 
-<p>The Drill installation registers the <code>cp</code>, <code>dfs</code>, <code>hbase</code>, <code>hive</code>, and <code>mongo</code> storage plugins instances by default.</p>
+<p>The Drill installation registers the <code>cp</code>, <code>dfs</code>, <code>hbase</code>, <code>hive</code>, and <code>mongo</code> storage plugin configurations.</p>
 
 <ul>
 <li><code>cp</code><br>
 Points to a JAR file in the Drill classpath that contains the Transaction Processing Performance Council (TPC) benchmark schema TPC-H that you can query (a sample query follows this list). </li>
 <li><code>dfs</code><br>
-Points to the local file system on your machine, but you can configure this instance to
+Points to the local file system, but you can configure this storage plugin to
 point to any distributed file system, such as a Hadoop or S3 file system. </li>
 <li><code>hbase</code><br>
-Provides a connection to HBase/M7.</li>
+Provides a connection to HBase.</li>
 <li><code>hive</code><br>
-Integrates Drill with the Hive metadata abstraction of files, HBase/M7, and libraries to read data and operate on SerDes and UDFs.</li>
+Integrates Drill with the Hive metadata abstraction of files, HBase, and libraries to read data and operate on SerDes and UDFs.</li>
 <li><code>mongo</code><br>
 Provides a connection to MongoDB data.</li>
 </ul>
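+<p>For example, a quick test of the <code>cp</code> configuration queries the <code>employee.json</code> sample file that ships in the Drill classpath:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM cp.`employee.json` LIMIT 3;
+</code></pre></div>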
 
-<p>In the Drill sandbox,  the <code>dfs</code> storage plugin connects you to the MapR File System (MFS). Using an installation of Drill instead of the sandbox, <code>dfs</code> connects you to the root of your file system.</p>
+<p>In the <a href="/docs/about-the-mapr-sandbox/">Drill sandbox</a>, the <code>dfs</code> storage plugin connects you to a simulation of a distributed file system. If you install Drill, <code>dfs</code> connects you to the root of your file system.</p>
 
-<p>Storage plugin configurations are saved in a temporary directory (embedded mode) or in ZooKeeper (distributed mode). Seeing a storage plugin that you created in one version appear in the Drill Web UI of another version is expected. For example, on Mac OS X, Drill uses <code>/tmp/drill/sys.storage_plugins</code> to store storage plugin configurations. To revert to the default storage plugins for a particular version, in embedded mode, delete the contents of this directory and restart the Drill shell.</p>
+<p>Drill saves storage plugin configurations in a temporary directory (embedded mode) or in ZooKeeper (distributed mode). The storage plugin configuration persists after upgrading, so a configuration that you created in one version of Drill appears in the Drill Web UI of an upgraded version of Drill. For example, on Mac OS X, Drill uses <code>/tmp/drill/sys.storage_plugins</code> to store storage plugin configurations. To revert to the default storage plugins for a particular version, in embedded mode, delete the contents of this directory and restart the Drill shell.</p>
 
     
       

http://git-wip-us.apache.org/repos/asf/drill-site/blob/d0ba86f3/docs/tableau-examples/index.html
----------------------------------------------------------------------
diff --git a/docs/tableau-examples/index.html b/docs/tableau-examples/index.html
index b41ecf5..77af1bf 100644
--- a/docs/tableau-examples/index.html
+++ b/docs/tableau-examples/index.html
@@ -1042,7 +1042,7 @@ and then visualize the data in Tableau.<br>
  The <em>MapR Drill ODBC Driver DSN Setup</em> window appears.</li>
 <li>Enter a name for the data source.</li>
 <li>Specify the connection type based on your requirements. The connection type provides the DSN access to Drill Data Sources.<br>
-In this example, we are connecting to a Zookeeper Quorum.</li>
+In this example, we are connecting to a ZooKeeper quorum. Verify that the Cluster ID that you use matches the Cluster ID in <code>&lt;DRILL_HOME&gt;/conf/drill-override.conf</code>.</li>
 <li>In the <strong>Schema</strong> field, select the Hive schema.
  In this example, the Hive schema is named hive.default.
  <img src="/docs/img/Hive_DSN.png" alt=""></li>
@@ -1250,7 +1250,8 @@ You can copy this query to file so that you can use it in Tableau.</li>
 <li>In the <em>On a server</em> section, click <strong>Other Databases (ODBC).</strong>
  The <em>Generic ODBC Connection</em> dialog appears.</li>
 <li>In the <em>Connect Using</em> section, select the DSN that connects to the data source.<br>
- In this example, Files-DrillDataSources was selected.</li>
+ In this example, Files-DrillDataSources was selected.
+ If you do not see the DSN, close and re-open Tableau.</li>
 <li>In the <em>Schema</em> section, select the schema associated with the data source.<br>
  In this example, dfs.default was selected.</li>
 <li>In the <em>Table</em> section, select <strong>Custom SQL</strong>.</li>
@@ -1265,7 +1266,7 @@ You can copy this query to file so that you can use it in Tableau.</li>
 <p class="last">The path to the file depends on its location in your file system.  </p>
 </div> </p></li>
 <li><p>Click <strong>OK</strong> to complete the connection.<br>
- <img src="/docs/img/ODBC_CustomSQL.png" alt=""></p></li>
+ <img src="/docs/img/ODBC_CustomSQL.png" alt="">  </p></li>
 <li><p>In the <em>Data Connection dialog</em>, click <strong>Connect Live</strong>.</p></li>
 </ol>