Posted to commits@drill.apache.org by br...@apache.org on 2015/09/09 02:20:37 UTC

drill-site git commit: doc edits

Repository: drill-site
Updated Branches:
  refs/heads/asf-site 31fb6486b -> 3e2e1d4c6


doc edits


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/3e2e1d4c
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/3e2e1d4c
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/3e2e1d4c

Branch: refs/heads/asf-site
Commit: 3e2e1d4c6a4011264d65fdbf0326b0310b5b17c1
Parents: 31fb648
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Tue Sep 8 17:20:13 2015 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Tue Sep 8 17:20:13 2015 -0700

----------------------------------------------------------------------
 docs/architecture-introduction/index.html |  24 ++--
 docs/core-modules/index.html              |  18 ++-
 docs/drill-in-10-minutes/index.html       |  55 +++++----
 docs/drill-query-execution/index.html     |  12 +-
 docs/img/query-flow-client.png            | Bin 13734 -> 11366 bytes
 docs/performance/index.html               |  31 +++--
 docs/querying-hbase/index.html            | 157 ++++++++++++++++---------
 docs/tutorials-introduction/index.html    |  12 +-
 docs/why-drill/index.html                 |  12 +-
 feed.xml                                  |   4 +-
 10 files changed, 184 insertions(+), 141 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/architecture-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/architecture-introduction/index.html b/docs/architecture-introduction/index.html
index a886f20..280619e 100644
--- a/docs/architecture-introduction/index.html
+++ b/docs/architecture-introduction/index.html
@@ -1018,7 +1018,7 @@ metadata repository.</p>
 <h2 id="high-level-architecture">High-Level Architecture</h2>
 
 <p>Drill includes a distributed execution environment, purpose built for large-
-scale data processing. At the core of Apache Drill is the ‘Drillbit’ service,
+scale data processing. At the core of Apache Drill is the &quot;Drillbit&quot; service,
 which is responsible for accepting requests from the client, processing the
 queries, and returning results to the client.</p>
 
@@ -1031,7 +1031,7 @@ uses ZooKeeper to maintain cluster membership and health-check information.</p>
 <p>Though Drill works in a Hadoop cluster environment, Drill is not tied to
 Hadoop and can run in any distributed cluster environment. The only prerequisite for Drill is ZooKeeper.</p>
 
-<p>See Drill Query Execution.</p>
+<p>See <a href="/docs/drill-query-execution/">Drill Query Execution</a>.</p>
 
 <h2 id="drill-clients">Drill Clients</h2>
 
@@ -1050,32 +1050,30 @@ Hadoop and can run in any distributed cluster environment. The only pre-requisit
 the query execution process. Drill starts data processing in record-batches
 and discovers the schema during processing. Self-describing data formats such
 as Parquet, JSON, AVRO, and NoSQL databases have schema specified as part of
-the data itself, which Drill leverages dynamically at query time. Because
-schema can change over the course of a Drill query, all Drill operators are
+the data itself, which Drill leverages dynamically at query time. Because the
+schema can change over the course of a Drill query, many Drill operators are
 designed to reconfigure themselves when schemas change.</p>
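+
+<p>For example, here is a minimal sketch of coping with JSON whose field types vary
+across records, using the <code>store.json.all_text_mode</code> session option
+(the file path is illustrative):</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- Read every JSON field as VARCHAR so type changes between records do not fail the scan
+ALTER SESSION SET `store.json.all_text_mode` = true;
+SELECT * FROM dfs.`/tmp/mixed_schema.json` LIMIT 5;
+</code></pre></div>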
 
 <h3 id="flexible-data-model"><strong><em>Flexible data model</em></strong></h3>
 
-<p>Drill allows access to nested data attributes, just like SQL columns, and
+<p>Drill allows access to nested data attributes, as if they were SQL columns, and
 provides intuitive extensions to easily operate on them. From an architectural
 point of view, Drill provides a flexible hierarchical columnar data model that
-can represent complex, highly dynamic and evolving data models. Drill allows
-for efficient processing of these models without the need to flatten or
-materialize them at design time or at execution time. Relational data in Drill
+can represent complex, highly dynamic and evolving data models. Relational data in Drill
 is treated as a special or simplified case of complex/multi-structured data.</p>
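+
+<p>As a hypothetical illustration, the following query uses dotted notation and the
+FLATTEN function on an assumed JSON file to operate on nested data directly:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- Reach into a nested map with dotted notation and expand a repeated field into rows
+SELECT t.name, t.address.city AS city, FLATTEN(t.orders) AS single_order
+FROM dfs.`/tmp/customers.json` t;
+</code></pre></div>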
 
-<h3 id="de-centralized-metadata"><strong><em>De-centralized metadata</em></strong></h3>
+<h3 id="no-centralized-metadata"><strong><em>No centralized metadata</em></strong></h3>
 
 <p>Drill does not have a centralized metadata requirement. You do not need to
 create and manage tables and views in a metadata repository, or rely on a
 database administrator group for such a function. Drill metadata is derived
-from the storage plugins that correspond to data sources. Storage plugins
+through the storage plugins that correspond to data sources. Storage plugins
 provide a spectrum of metadata ranging from full metadata (Hive) to partial
 metadata (HBase) to no central metadata (files). De-centralized metadata
 means that Drill is NOT tied to a single Hive repository. You can query
 multiple Hive repositories at once and then combine the data with information
 from HBase tables or with a file in a distributed file system. You can also
-use SQL DDL syntax to create metadata within Drill, which gets organized just
+use SQL DDL statements to create metadata within Drill, which gets organized just
 like a traditional database. Drill metadata is accessible through the ANSI
 standard INFORMATION_SCHEMA database.</p>
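+
+<p>For example, a quick sketch of browsing Drill metadata through INFORMATION_SCHEMA:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- List the schemas Drill currently knows about, then sample the available tables
+SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.`SCHEMATA`;
+SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.`TABLES` LIMIT 10;
+</code></pre></div>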
 
@@ -1084,8 +1082,8 @@ standard INFORMATION_SCHEMA database.</p>
 <p>Drill provides an extensible architecture at all layers, including the storage
 plugin, query, query optimization/execution, and client API layers. You can
 customize any layer for the specific needs of an organization or you can
-extend the layer to a broader array of use cases. Drill provides a built in
-classpath scanning and plugin concept to add additional storage plugins,
+extend the layer to a broader array of use cases. Drill uses 
+classpath scanning to find and load plugins, so you can add storage plugins,
 functions, and operators with minimal configuration.</p>
 
     

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/core-modules/index.html
----------------------------------------------------------------------
diff --git a/docs/core-modules/index.html b/docs/core-modules/index.html
index dea232b..adc74b7 100644
--- a/docs/core-modules/index.html
+++ b/docs/core-modules/index.html
@@ -1012,26 +1012,24 @@
 <p>The following list describes the key components of a Drillbit:</p>
 
 <ul>
-<li><p><strong>RPC end point</strong>: Drill exposes a low overhead protobuf-based RPC protocol to communicate with the clients. Additionally, a C++ and Java API layers are also available for the client applications to interact with Drill. Clients can communicate to a specific Drillbit directly or go through a ZooKeeper quorum to discover the available Drillbits before submitting queries. It is recommended that the clients always go through ZooKeeper to shield clients from the intricacies of cluster management, such as the addition or removal of nodes. </p></li>
-<li><p><strong>SQL parser</strong>: Drill uses <a href="https://calcite.incubator.apache.org/">Calcite</a>, the open source framework, to parse incoming queries. The output of the parser component is a language agnostic, computer-friendly logical plan that represents the query. </p></li>
-<li><p><strong>Storage plugin interfaces</strong>: Drill serves as a query layer on top of several data sources. Storage plugins in Drill represent the abstractions that Drill uses to interact with the data sources. Storage plugins provide Drill with the following information:</p>
+<li><p><strong>RPC endpoint</strong>: Drill exposes a low overhead protobuf-based RPC protocol to communicate with clients. Additionally, C++ and Java API layers are also available for client applications to interact with Drill. Clients can communicate with a specific Drillbit directly or go through a ZooKeeper quorum to discover the available Drillbits before submitting queries. It is recommended that clients always go through ZooKeeper, which shields them from the intricacies of cluster management, such as the addition or removal of nodes. </p></li>
+<li><p><strong>SQL parser</strong>: Drill uses <a href="https://calcite.incubator.apache.org/">Calcite</a>, the open source SQL parser framework, to parse incoming queries. The output of the parser component is a language-agnostic, computer-friendly logical plan that represents the query. </p></li>
+<li><p><strong>Storage plugin interface</strong>: Drill serves as a query layer on top of several data sources. Storage plugins in Drill represent the abstractions that Drill uses to interact with the data sources. Storage plugins provide Drill with the following information:</p>
 
 <ul>
 <li>Metadata available in the source</li>
 <li>Interfaces for Drill to read from and write to data sources</li>
-<li>Location of data and a set of optimization rules to help with efficient and faster execution of Drill queries on a specific data source </li>
+<li>Location of data and a set of optimization rules to help with efficient and fast execution of Drill queries on a specific data source </li>
+</ul></li>
 </ul>
 
-<p>In the context of Hadoop, Drill provides storage plugins for files and
-HBase. Drill also integrates with Hive as a storage plugin since Hive
-provides a metadata abstraction layer on top of files, HBase, and provides
-libraries to read data and operate on these sources (Serdes and UDFs).</p>
+<p>In the context of Hadoop, Drill provides storage plugins for distributed files and
+HBase. Drill also integrates with Hive using a storage plugin.</p>
 
 <p>When users query files and HBase with Drill, they can do it directly or go
 through Hive if they have metadata defined there. Drill integration with Hive
 is only for metadata. Drill does not invoke the Hive execution engine for any
-requests.</p></li>
-</ul>
+requests.</p>
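+
+<p>As a hedged sketch, both access paths look identical from SQL; the path and table
+names below are assumptions:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- Query a file directly through the dfs storage plugin
+SELECT * FROM dfs.`/data/orders.parquet` LIMIT 5;
+-- Query the same data through Hive metadata; Drill, not Hive, executes the query
+SELECT * FROM hive.`orders` LIMIT 5;
+</code></pre></div>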
 
     
       

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/drill-in-10-minutes/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-in-10-minutes/index.html b/docs/drill-in-10-minutes/index.html
index cc45b8d..3378678 100644
--- a/docs/drill-in-10-minutes/index.html
+++ b/docs/drill-in-10-minutes/index.html
@@ -1013,11 +1013,11 @@ without having to perform any setup tasks.</p>
 
 <h2 id="installation-overview">Installation Overview</h2>
 
-<p>You can install Drill in embedded mode on a machine running Linux, Mac OS X, or Windows. For information about installing Drill in distributed mode, see <a href="/docs/installing-drill-in-distributed-mode">Installing Drill in Distributed Mode</a>.</p>
+<p>You can install Drill to run in embedded mode on a machine running Linux, Mac OS X, or Windows. For information about installing Drill to run in distributed mode, see <a href="/docs/installing-drill-in-distributed-mode">Installing Drill in Distributed Mode</a>.</p>
 
-<p>This installation procedure includes how to download the Apache Drill archive and extract the contents to a directory on your machine. The Apache Drill archive contains sample JSON and Parquet files that you can query immediately.</p>
+<p>This installation procedure shows you how to download the Apache Drill archive file and extract the contents to a directory on your machine. The Apache Drill archive contains sample JSON and Parquet files that you can query immediately.</p>
 
-<p>After installing Drill, you start the Drill shell. The Drill shell is a pure-Java console-based utility for connecting to relational databases and executing SQL commands. Drill follows the ANSI SQL: 2011 standard with <a href="/docs/sql-extensions/">extensions</a> for nested data formats and other capabilities.</p>
+<p>After installing Drill, you start the Drill shell. The Drill shell is a pure-Java console-based utility for connecting to relational databases and executing SQL commands. Drill follows the SQL:2011 standard with <a href="/docs/sql-extensions/">extensions</a> for nested data formats and other capabilities.</p>
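+
+<p>For instance, a small sketch of those extensions, assuming a JSON file with a map
+field and an array field (all names are illustrative):</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- Array subscripts and dotted paths are Drill SQL extensions for nested data
+SELECT t.name, t.hobbies[0] AS first_hobby
+FROM dfs.`/tmp/people.json` t
+WHERE t.address.state = &#39;CA&#39;;
+</code></pre></div>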
 
 <h2 id="embedded-mode-installation-prerequisites">Embedded Mode Installation Prerequisites</h2>
 
@@ -1028,17 +1028,18 @@ without having to perform any setup tasks.</p>
 <li>Windows only:<br>
 
 <ul>
-<li>A JAVA_HOME environment variable set up that points to  to the JDK installation<br></li>
-<li>A PATH environment variable that includes a pointer to the JDK installation<br></li>
-<li>A third-party utility for unzipping a tar.gz file </li>
+<li>A JAVA_HOME environment variable that points to the JDK installation<br></li>
+<li>A PATH environment variable that includes a pointer to the bin directory of the JDK installation </li>
+<li>A third-party utility for unzipping a .tar.gz file </li>
 </ul></li>
 </ul>
 
 <h3 id="java-installation-prerequisite-check">Java Installation Prerequisite Check</h3>
 
 <p>Run the following command in a terminal (Linux and Mac OS X) or Command Prompt (Windows) to verify that Java 7 is the version in effect:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">java -version
-</code></pre></div>
+
+<p><code>java -version</code></p>
+
 <p>The output looks something like this:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">java version &quot;1.7.0_79&quot;
 Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
@@ -1049,7 +1050,7 @@ Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
 <p>Complete the following steps to install Drill:  </p>
 
 <ol>
-<li><p>In a terminal windows, change to the directory where you want to install Drill.</p></li>
+<li><p>In a terminal window, change to the directory where you want to install Drill.</p></li>
 <li><p>Download the latest version of Apache Drill from the <a href="http://getdrill.org/drill/download/apache-drill-1.1.0.tar.gz">Drill web site</a>, or run one of the following commands, depending on which download utility you have installed on your system:</p></li>
 </ol>
 
@@ -1060,12 +1061,12 @@ Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
 
 <ol>
 <li><p>Copy the downloaded file to the directory where you want to install Drill. </p></li>
-<li><p>Extract the contents of the Drill tar.gz file. Use sudo only if necessary:  </p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">tar -xvzf apache-drill-1.1.0.tar.gz  
-</code></pre></div></li>
+<li><p>Extract the contents of the Drill tar.gz file. Use sudo if necessary:  </p>
+
+<p><code>tar -xvzf apache-drill-1.1.0.tar.gz</code>  </p></li>
 </ol>
 
-<p>The extraction process creates the installation directory named apache-drill-1.1.0 containing the Drill software.</p>
+<p>The extraction process creates an installation directory containing the Drill software.</p>
 
 <p>At this point, you can start Drill.</p>
 
@@ -1075,11 +1076,11 @@ Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
 
 <ol>
 <li><p>Navigate to the Drill installation directory. For example:  </p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">cd apache-drill-1.1.0  
-</code></pre></div></li>
+
+<p><code>cd apache-drill-1.1.0</code>  </p></li>
 <li><p>Issue the following command to launch Drill in embedded mode:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">bin/drill-embedded  
-</code></pre></div></li>
+
+<p><code>bin/drill-embedded</code>  </p></li>
 </ol>
 
 <p>The message of the day followed by the <code>0: jdbc:drill:zk=local&gt;</code>  prompt appears.  </p>
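+
+<p>At the prompt, you can run a quick sanity check; for example, the <code>sys.version</code>
+table reports the build you are running:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; SELECT * FROM sys.version;
+</code></pre></div>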
@@ -1121,8 +1122,9 @@ Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
 <h2 id="stopping-drill">Stopping Drill</h2>
 
 <p>Issue the following command when you want to exit the Drill shell:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">!quit
-</code></pre></div>
+
+<p><code>!quit</code></p>
+
 <h2 id="query-sample-data">Query Sample Data</h2>
 
 <p>Your Drill installation includes a <code>sample-data</code> directory with JSON and
@@ -1138,8 +1140,9 @@ configuration, refer to <a href="/docs/connect-a-data-source-introduction">Stora
 
 <p>To view the data in the <code>employee.json</code> file, submit the following SQL query
 to Drill, using the <a href="/docs/storage-plugin-registration/">cp (classpath) storage plugin</a> to point to the file.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; SELECT * FROM cp.`employee.json` LIMIT 3;
-</code></pre></div>
+
+<p><code>0: jdbc:drill:zk=local&gt; SELECT * FROM cp.`employee.json` LIMIT 3;</code></p>
+
 <p>The query output is:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">+--------------+------------------+-------------+------------+--------------+---------------------+-----------+----------------+-------------+------------------------+----------+----------------+------------------+-----------------+---------+--------------------+
 | employee_id  |    full_name     | first_name  | last_name  | position_id  |   position_title    | store_id  | department_id  | birth_date  |       hire_date        |  salary  | supervisor_id  | education_level  | marital_status  | gender  |  management_role   |
@@ -1168,8 +1171,9 @@ systems.</p>
 
 <p>To view the data in the <code>region.parquet</code> file, issue the query appropriate for
 your operating system:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    SELECT * FROM dfs.`&lt;path-to-installation&gt;/apache-drill-&lt;version&gt;/sample-data/region.parquet`;
-</code></pre></div>
+
+<p><code>SELECT * FROM dfs.`&lt;path-to-installation&gt;/apache-drill-&lt;version&gt;/sample-data/region.parquet`;</code></p>
+
 <p>The query returns the following results:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">+--------------+--------------+-----------------------+
 | R_REGIONKEY  |    R_NAME    |       R_COMMENT       |
@@ -1195,8 +1199,9 @@ systems.</p>
 
 <p>To view the data in the <code>nation.parquet</code> file, issue the query appropriate for
 your operating system:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">      SELECT * FROM dfs.`&lt;path-to-installation&gt;/apache-drill-&lt;version&gt;/sample-data/nation.parquet`;
-</code></pre></div>
+
+<p><code>SELECT * FROM dfs.`&lt;path-to-installation&gt;/apache-drill-&lt;version&gt;/sample-data/nation.parquet`;</code></p>
+
 <p>The query returns the following results:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM dfs.`Users/khahn/drill/apache-drill-1.1.0-SNAPSHOT/sample-data/nation.parquet`;
 +--------------+-----------------+--------------+-----------------------+

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/drill-query-execution/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-query-execution/index.html b/docs/drill-query-execution/index.html
index b5d565a..d73a621 100644
--- a/docs/drill-query-execution/index.html
+++ b/docs/drill-query-execution/index.html
@@ -1011,7 +1011,7 @@
 
 <p><img src="/docs/img/query-flow-client.png" alt=""></p>
 
-<p>The Drillbit that receives the query from a client or application becomes the Foreman for the query and drives the entire query. A parser in the Foreman parses the SQL, applying custom rules to convert specific SQL operators into a specific logical operator syntax that Drill understands. This collection of logical operators forms a logical plan. The logical plan describes the work required to generate the query results and defines what data sources and operations to apply.</p>
+<p>The Drillbit that receives the query from a client or application becomes the Foreman for the query and drives the entire query. A parser in the Foreman parses the SQL, applying custom rules to convert specific SQL operators into a specific logical operator syntax that Drill understands. This collection of logical operators forms a logical plan. The logical plan describes the work required to generate the query results and defines which data sources and operations to apply.</p>
 
 <p>The Foreman sends the logical plan into a cost-based optimizer to optimize the order of SQL operators in a statement and read the logical plan. The optimizer applies various types of rules to rearrange operators and functions into an optimal plan. The optimizer converts the logical plan into a physical plan that describes how to execute the query.</p>
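+
+<p>You can inspect the plan the optimizer produces with EXPLAIN; for example, against
+the bundled sample file:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- Show the physical plan without running the query
+EXPLAIN PLAN FOR SELECT * FROM cp.`employee.json` LIMIT 3;
+</code></pre></div>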
 
@@ -1023,25 +1023,25 @@
 
 <h2 id="major-fragments">Major Fragments</h2>
 
-<p>A major fragment is an abstract concept that represents a phase of the query execution. A phase can consist of one or multiple operations that Drill must perform to execute the query. Drill assigns each major fragment a MajorFragmentID.</p>
+<p>A major fragment is a concept that represents a phase of the query execution. A phase can consist of one or multiple operations that Drill must perform to execute the query. Drill assigns each major fragment a MajorFragmentID.</p>
 
 <p>For example, to perform a hash aggregation of two files, Drill may create a plan with two major phases (major fragments) where the first phase is dedicated to scanning the two files and the second phase is dedicated to the aggregation of the data.  </p>
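+
+<p>A hypothetical query of that shape, assuming two JSON files that share a
+<code>gender</code> field, might look like this:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- Phase one (scan) reads both files; phase two aggregates the combined rows
+SELECT gender, COUNT(*) AS cnt
+FROM (SELECT gender FROM dfs.`/data/file1.json`
+      UNION ALL
+      SELECT gender FROM dfs.`/data/file2.json`) t
+GROUP BY gender;
+</code></pre></div>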
 
 <p><img src="/docs/img/ex-operator.png" alt=""></p>
 
-<p>Drill separates major fragments by an exchange operator. An exchange is a change in data location and/or parallelization of the physical plan. An exchange is composed of a sender and a receiver to allow data to move between nodes. </p>
+<p>Drill uses an exchange operator to separate major fragments. An exchange is a change in data location and/or parallelization of the physical plan. An exchange is composed of a sender and a receiver to allow data to move between nodes. </p>
 
 <p>Major fragments do not actually perform any query tasks. Each major fragment is divided into one or multiple minor fragments (discussed in the next section) that actually execute the operations required to complete the query and return results back to the client.</p>
 
-<p>You can interact with major fragments within the physical plan by capturing a JSON representation of the plan in a file, manually modifying it, and then submitting it back to Drill using the SUBMIT PLAN command. You can also view major fragments in the query profile, which is visible in the Drill Web UI. See <a href="/docs/explain/">EXPLAIN </a>and <a href="/docs/query-profiles/">Query Profiles</a> for more information.</p>
+<p>You can work with major fragments within the physical plan by capturing a JSON representation of the plan in a file, manually modifying it, and then submitting it back to Drill using the SUBMIT PLAN command. You can also view major fragments in the query profile, which is visible in the Drill Web UI. See <a href="/docs/explain/">EXPLAIN </a>and <a href="/docs/query-profiles/">Query Profiles</a> for more information.</p>
 
 <h2 id="minor-fragments">Minor Fragments</h2>
 
-<p>Each major fragment is parallelized into minor fragments. A minor fragment is a logical unit of work that runs inside of a thread. A logical unit of work in Drill is also referred to as a slice. The execution plan that Drill creates is composed of minor fragments. Drill assigns each minor fragment a MinorFragmentID.  </p>
+<p>Each major fragment is parallelized into minor fragments. A minor fragment is a logical unit of work that runs inside a thread. A logical unit of work in Drill is also referred to as a slice. The execution plan that Drill creates is composed of minor fragments. Drill assigns each minor fragment a MinorFragmentID.  </p>
 
 <p><img src="/docs/img/min-frag.png" alt=""></p>
 
-<p>The parallelizer in the Foreman creates one or more minor fragments from a major fragment at execution time, by breaking a major fragment into as many minor fragments as it can run simultaneously on the cluster.</p>
+<p>The parallelizer in the Foreman creates one or more minor fragments from a major fragment at execution time, by breaking a major fragment into as many minor fragments as it can usefully run at the same time on the cluster.</p>
 
 <p>Drill executes each minor fragment in its own thread as quickly as possible based on its upstream data requirements. Drill schedules the minor fragments on nodes with data locality. Otherwise, Drill schedules them in a round-robin fashion on the existing, available Drillbits.</p>
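+
+<p>The degree of parallelization is tunable; for example, the standard
+<code>planner.width.max_per_node</code> option caps how many minor fragments of a major
+fragment run on each node:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- Limit each Drillbit to at most 8 minor fragments per major fragment
+ALTER SYSTEM SET `planner.width.max_per_node` = 8;
+</code></pre></div>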
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/img/query-flow-client.png
----------------------------------------------------------------------
diff --git a/docs/img/query-flow-client.png b/docs/img/query-flow-client.png
index 10fe24f..0ae87fc 100755
Binary files a/docs/img/query-flow-client.png and b/docs/img/query-flow-client.png differ

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/performance/index.html
----------------------------------------------------------------------
diff --git a/docs/performance/index.html b/docs/performance/index.html
index 4f610f9..f337f6b 100644
--- a/docs/performance/index.html
+++ b/docs/performance/index.html
@@ -1012,47 +1012,46 @@ performance:</p>
 <p><strong><em>Distributed engine</em></strong></p>
 
 <p>Drill provides a powerful distributed execution engine for processing queries.
-Users can submit requests to any node in the cluster. You can simply add new
-nodes to the cluster to scale for larger volumes of data, support more users
-or to improve performance.</p>
+Users can submit requests to any node in the cluster. You can add new
+nodes to the cluster to scale for larger volumes of data, support more users,
+or improve performance.</p>
 
 <p><strong><em>Columnar execution</em></strong></p>
 
 <p>Drill optimizes for both columnar storage and execution by using an in-memory
 data model that is hierarchical and columnar. When working with data stored in
 columnar formats such as Parquet, Drill avoids disk access for columns that
-are not involved in an analytic query. Drill also provides an execution layer
-that performs SQL processing directly on columnar data without row
+are not involved in a query. Drill&#39;s execution layer also 
+performs SQL processing directly on columnar data without row
 materialization. The combination of optimizations for columnar storage and
 direct columnar execution significantly lowers memory footprints and provides
-faster execution of BI/Analytic type of workloads.</p>
+faster execution of BI and analytic types of workloads.</p>
 
 <p><strong><em>Vectorization</em></strong></p>
 
 <p>Rather than operating on single values from a single table record at one time,
 vectorization in Drill allows the CPU to operate on vectors, referred to as
-Record Batches. Record Batches are arrays of values from many different
+record batches. A record batch has arrays of values from many different
 records. The technical basis for efficiency of vectorized processing is modern
 chip technology with deep-pipelined CPU designs. Keeping all pipelines full to
-achieve efficiency near peak performance is something impossible to achieve in
+achieve efficiency near peak performance is impossible in
 traditional database engines, primarily due to code complexity.</p>
 
 <p><strong><em>Runtime compilation</em></strong></p>
 
-<p>Runtime compilation is faster compared to the interpreted execution. Drill
-generates highly efficient custom code for every single query for every single
-operator. Here is a quick overview of the Drill compilation/code generation
-process at a glance.</p>
+<p>Runtime compilation enables faster execution than interpreted execution. Drill
+generates highly efficient custom code for every single query. 
+The following image shows the Drill compilation/code generation
+process:</p>
 
 <p><img src="/docs/img/58.png" alt="drill compiler"></p>
 
 <p><strong><em>Optimistic and pipelined query execution</em></strong></p>
 
-<p>Drill adopts an optimistic execution model to process queries. Drill assumes
-that failures are infrequent within the short span of a query and therefore
+<p>Using an optimistic execution model to process queries, Drill assumes
+that failures are infrequent within the short span of a query. Drill 
 does not spend time creating boundaries or checkpoints that would minimize recovery
-time. Failures at node level are handled gracefully. In the instance of a
-single query failure, the query is rerun. Drill execution uses a pipeline
+time. In the instance of a single query failure, the query is rerun. Drill execution uses a pipeline
 model where all tasks are scheduled at once. The query execution happens in-
 memory as much as possible to move data through task pipelines, persisting to
 disk only if there is memory overflow.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/querying-hbase/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-hbase/index.html b/docs/querying-hbase/index.html
index f98f86b..a689ebc 100644
--- a/docs/querying-hbase/index.html
+++ b/docs/querying-hbase/index.html
@@ -1005,81 +1005,124 @@
 
     <div class="int_text" align="left">
       
-        <!-- 
-To use Drill to query HBase data, you need to understand how to work with the HBase byte arrays. If you want Drill to interpret the underlying HBase row key as something other than a byte array, you need to know the encoding of the data in HBase. By default, HBase stores data in little endian and Drill assumes the data is little endian, which is unsorted. The following table shows the sorting of typical rowkey IDs in bytes, encoded in little endian and big endian, respectively:
+        <p>To use Drill to query HBase data, you need to understand how to work with HBase byte arrays. If you want Drill to interpret the underlying HBase row key as something other than a byte array, you need to know the encoding of the data in HBase. By default, HBase stores data in little endian and Drill assumes the data is little endian, which does not preserve the numeric sort order. The following table shows the sorting of typical row key IDs in bytes, encoded in little endian and big endian, respectively:</p>
 
-| IDs in Byte Notation Little Endian Sorting | IDs in Decimal Notation | IDs in Byte Notation Big Endian Sorting | IDs in Decimal Notation |
-|--------------------------------------------|-------------------------|-----------------------------------------|-------------------------|
-| 0 x 010000 . . . 000                       | 1                       | 0 x 010000 . . . 000                    | 1                       |
-| 0 x 010100 . . . 000                       | 17                      | 0 x 020000 . . . 000                    | 2                       |
-| 0 x 020000 . . . 000                       | 2                       | 0 x 030000 . . . 000                    | 3                       |
-| . . .                                      |                         | 0 x 040000 . . . 000                    | 4                       |
-| 0x 050000 . . . 000                        | 5                       | 0 x 050000 . . . 000                    | 5                       |
-| . . .                                      |                         | . . .                                   |                         |
-| 0 x 0A000000                               | 10                      | 0 x 0A0000 . . . 000                    | 10                      |
-|                                            |                         | 0 x 010100 . . . 000                    | 17                      |
+<table><thead>
+<tr>
+<th>IDs in Byte Notation Little Endian Sorting</th>
+<th>IDs in Decimal Notation</th>
+<th>IDs in Byte Notation Big Endian Sorting</th>
+<th>IDs in Decimal Notation</th>
+</tr>
+</thead><tbody>
+<tr>
+<td>0 x 010000 . . . 000</td>
+<td>1</td>
+<td>0 x 00000001</td>
+<td>1</td>
+</tr>
+<tr>
+<td>0 x 010100 . . . 000</td>
+<td>17</td>
+<td>0 x 00000002</td>
+<td>2</td>
+</tr>
+<tr>
+<td>0 x 020000 . . . 000</td>
+<td>2</td>
+<td>0 x 00000003</td>
+<td>3</td>
+</tr>
+<tr>
+<td>. . .</td>
+<td></td>
+<td>0 x 00000004</td>
+<td>4</td>
+</tr>
+<tr>
+<td>0 x 050000 . . . 000</td>
+<td>5</td>
+<td>0 x 00000005</td>
+<td>5</td>
+</tr>
+<tr>
+<td>. . .</td>
+<td></td>
+<td>. . .</td>
+<td></td>
+</tr>
+<tr>
+<td>0 x 0A0000 . . . 000</td>
+<td>10</td>
+<td>0 x 0000000A</td>
+<td>10</td>
+</tr>
+<tr>
+<td></td>
+<td></td>
+<td>0 x 00000101</td>
+<td>17</td>
+</tr>
+</tbody></table>
 
-## Querying Big Endian-Encoded Data
+<h2 id="querying-big-endian-encoded-data">Querying Big Endian-Encoded Data</h2>
 
-Drill optimizes scans of HBase tables when you use the ["CONVERT_TO and CONVERT_FROM data types"](/docs/supported-data-types/#convert_to-and-convert_from-data-types) on big endian-encoded data. Drill provides the \*\_BE encoded types for use with CONVERT_TO and CONVERT_FROM to take advantage of these optimizations. Here are a few examples of the \*\_BE types.
+<p>Drill optimizes scans of HBase tables when you use the <a href="/docs/supported-data-types/#convert_to-and-convert_from-data-types">&quot;CONVERT_TO and CONVERT_FROM data types&quot;</a> on big endian-encoded data. Drill provides the *_BE encoded types for use with CONVERT_TO and CONVERT_FROM to take advantage of these optimizations. Here are a few examples of the *_BE types.</p>
 
-* DATE_EPOCH_BE  
-* TIME_EPOCH_BE  
-* TIMESTAMP_EPOCH_BE  
-* UINT8_BE  
-* BIGINT_BE  
-
-For example, Drill returns results performantly when you use the following query on big endian-encoded data:
-
-```
-SELECT
- CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), 'DATE_EPOCH_BE') d
-, CONVERT_FROM(BYTE_SUBSTR(row_key, 9, 8), 'BIGINT_BE') id
-, CONVERT_FROM(tableName.f.c, 'UTF8') 
- FROM hbase.`TestTableCompositeDate` tableName
- WHERE
- CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), 'DATE_EPOCH_BE') < DATE '2015-06-18' AND
- CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), 'DATE_EPOCH_BE') > DATE '2015-06-13';
-```
-
-This query assumes that the row key of the table represents the DATE_EPOCH type encoded in big-endian format. The Drill HBase plugin will be able to prune the scan range since there is a condition on the big endian-encoded prefix of the row key. For more examples, see the [test code:](https://github.com/apache/drill/blob/95623912ebf348962fe8a8846c5f47c5fdcf2f78/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseFilterPushDown.java).
+<ul>
+<li>DATE_EPOCH_BE<br></li>
+<li>TIME_EPOCH_BE<br></li>
+<li>TIMESTAMP_EPOCH_BE<br></li>
+<li>UINT8_BE<br></li>
+<li>BIGINT_BE<br></li>
+</ul>
 
-To query HBase data:
+<p>For example, Drill returns results performantly when you use the following query on big endian-encoded data:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT
+  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) d,
+  CONVERT_FROM(BYTE_SUBSTR(row_key, 9, 8), &#39;BIGINT_BE&#39;) id,
+  CONVERT_FROM(tableName.f.c, &#39;UTF8&#39;) 
+FROM hbase.`TestTableCompositeDate` tableName
+WHERE
+  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) &lt; DATE &#39;2015-06-18&#39; AND
+  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) &gt; DATE &#39;2015-06-13&#39;;
+</code></pre></div>
+<p>This query assumes that the row key of the table represents the DATE_EPOCH type encoded in big-endian format. The Drill HBase plugin will be able to prune the scan range since there is a condition on the big endian-encoded prefix of the row key. For more examples, see the <a href="https://github.com/apache/drill/blob/95623912ebf348962fe8a8846c5f47c5fdcf2f78/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseFilterPushDown.java">test code</a>.</p>
 
-1. Connect the data source to Drill using the [HBase storage plugin](/docs/hbase-storage-plugin/).  
-2. Determine the encoding of the HBase data you want to query. Ask the person in charge of creating the data.  
-3. Based on the encoding type of the data, use the ["CONVERT_TO and CONVERT_FROM data types"](/docs/supported-data-types/#convert_to-and-convert_from-data-types) to convert HBase binary representations to an SQL type as you query the data.  
-    For example, use CONVERT_FROM in your Drill query to convert a big endian-encoded row key to an SQL BIGINT type:  
+<p>To query HBase data:</p>
 
-    `SELECT CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8),'BIGINT_BE’) FROM my_hbase_table;`
+<ol>
+<li>Connect the data source to Drill using the <a href="/docs/hbase-storage-plugin/">HBase storage plugin</a>.<br></li>
+<li>Determine the encoding of the HBase data you want to query. Ask the person in charge of creating the data.<br></li>
+<li><p>Based on the encoding type of the data, use the <a href="/docs/supported-data-types/#convert_to-and-convert_from-data-types">&quot;CONVERT_TO and CONVERT_FROM data types&quot;</a> to convert HBase binary representations to an SQL type as you query the data.<br>
+For example, use CONVERT_FROM in your Drill query to convert a big endian-encoded row key to an SQL BIGINT type:  </p>
 
-The [BYTE_SUBSTR function](/docs/string-manipulation/#byte_substr) separates parts of a HBase composite key in this example. The Drill optimization is based on the capability in Drill 1.2 and later to push conditional filters down to the storage layer when HBase data is in big endian format. 
+<p><code>SELECT CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8),&#39;BIGINT_BE&#39;) FROM my_hbase_table;</code></p></li>
+</ol>
 
-Drill can performantly query HBase data that uses composite keys, as shown in the last example, if only the first component of the composite is encoded in big endian format. If the HBase row key is not stored in big endian, do not use the \*\_BE types. If you want to convert a little endian byte array to integer, use BIGINT instead of BIGINT_BE, for example, as an argument to CONVERT_FROM. 
+<p>The <a href="/docs/string-manipulation/#byte_substr">BYTE_SUBSTR function</a> separates parts of an HBase composite key in this example. The Drill optimization is based on the capability in Drill 1.2 and later to push conditional filters down to the storage layer when HBase data is in big endian format. </p>
 
-## Leveraging HBase Ordered Byte Encoding
+<p>Drill can performantly query HBase data that uses composite keys, as shown in the last example, if only the first component of the composite is encoded in big endian format. If the HBase row key is not stored in big endian, do not use the *_BE types. If you want to convert a little endian byte array to integer, use BIGINT instead of BIGINT_BE, for example, as an argument to CONVERT_FROM. </p>
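+
+<p>As a hedged contrast (the table name is assumed), the two decodings differ only in
+the type name passed to CONVERT_FROM:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;BIGINT&#39;)    AS little_endian_id,
+       CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;BIGINT_BE&#39;) AS big_endian_id
+FROM hbase.`my_hbase_table`;
+</code></pre></div>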
 
-Drill 1.2 leverages new features introduced by [HBASE-8201 Jira](https://issues.apache.org/jira/browse/HBASE-8201) that allows ordered byte encoding of different data types. This encoding scheme preserves the sort order of the native data type when the data is stored as sorted byte arrays on disk. Thus, Drill will be able to process data through the HBase storage plugin if the row keys have been encoded in OrderedBytes format.
+<h2 id="leveraging-hbase-ordered-byte-encoding">Leveraging HBase Ordered Byte Encoding</h2>
 
-To execute the following query, Drill prunes the scan range to only include the row keys representing [-32,59) range, thus reducing the amount of data read.
+<p>Drill 1.2 leverages new features introduced by the <a href="https://issues.apache.org/jira/browse/HBASE-8201">HBASE-8201 Jira</a> that allow ordered byte encoding of different data types. This encoding scheme preserves the sort order of the native data type when the data is stored as sorted byte arrays on disk. Thus, Drill will be able to process data through the HBase storage plugin if the row keys have been encoded in OrderedBytes format.</p>
 
-```
-SELECT
- CONVERT_FROM(t.row_key, 'INT_OB') rk,
- CONVERT_FROM(t.`f`.`c`, 'UTF8') val
+<p>To execute the following query, Drill prunes the scan range to include only the row keys representing the [-32, 59) range, thus reducing the amount of data read.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT
+ CONVERT_FROM(t.row_key, &#39;INT_OB&#39;) rk,
+ CONVERT_FROM(t.`f`.`c`, &#39;UTF8&#39;) val
 FROM
   hbase.`TestTableIntOB` t
 WHERE
-  CONVERT_FROM(row_key, 'INT_OB') >= cast(-32 as INT) AND
-  CONVERT_FROM(row_key, 'INT_OB') < cast(59 as INT);
-```
-
-For more examples, see the [test code:](https://github.com/apache/drill/blob/95623912ebf348962fe8a8846c5f47c5fdcf2f78/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseFilterPushDown.java).
+  CONVERT_FROM(row_key, &#39;INT_OB&#39;) &gt;= cast(-32 as INT) AND
+  CONVERT_FROM(row_key, &#39;INT_OB&#39;) &lt; cast(59 as INT);
+</code></pre></div>
+<p>For more examples, see the <a href="https://github.com/apache/drill/blob/95623912ebf348962fe8a8846c5f47c5fdcf2f78/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseFilterPushDown.java">test code</a>.</p>
 
-By taking advantage of ordered byte encoding, Drill 1.2 and later can performantly execute conditional queries without a secondary index on HBase big endian data. 
+<p>By taking advantage of ordered byte encoding, Drill 1.2 and later can performantly execute conditional queries without a secondary index on HBase big endian data. </p>
 
-## Querying Little Endian-Encoded Data
- -->
+<h2 id="querying-little-endian-encoded-data">Querying Little Endian-Encoded Data</h2>
 
 <p>As mentioned earlier, HBase stores data in little endian by default and Drill assumes the data is encoded in little endian. This exercise involves working with data that is encoded in little endian. First, you create two tables in HBase, students and clicks, that you can query with Drill. You use the CONVERT_TO and CONVERT_FROM functions to convert binary text to/from typed data. You use the CAST function to convert the binary data to an INT in step 4 of <a href="/docs/querying-hbase/#query-hbase-tables">Query HBase Tables</a>. When you convert an INT or BIGINT number whose byte count in the binary source/destination does not match the byte count of the destination/source, use CAST.</p>
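+
+<p>For example, a minimal sketch of such a CAST, assuming a <code>clicks</code> table whose
+row key holds a little endian integer:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">-- CAST converts the binary row key to a numeric type despite the byte-count mismatch
+SELECT CAST(row_key AS INT) AS click_id
+FROM hbase.`clicks`
+LIMIT 5;
+</code></pre></div>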
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/tutorials-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/tutorials-introduction/index.html b/docs/tutorials-introduction/index.html
index b0fe0e5..7352ee6 100644
--- a/docs/tutorials-introduction/index.html
+++ b/docs/tutorials-introduction/index.html
@@ -1009,17 +1009,17 @@
 
 <ul>
 <li><a href="/docs/drill-in-10-minutes">Drill in 10 Minutes</a><br>
-Download and install Drill in embedded mode, which means you use a single-node cluster.<br></li>
+Download, install, and start Drill in embedded mode (single-node cluster mode).<br></li>
 <li><a href="/docs/analyzing-the-yelp-academic-dataset">Analyzing the Yelp Academic Dataset</a><br>
 Download and install Drill in embedded mode and use SQL examples to analyze Yelp data.<br></li>
 <li><a href="/docs/about-the-mapr-sandbox">Learn Drill with the MapR Sandbox</a><br>
 Explore data using a Hadoop environment pre-configured with Drill.<br></li>
 <li><a href="/docs/analyzing-highly-dynamic-datasets">Analyzing Highly Dynamic Datasets</a><br>
-Delve into changing data without creating a schema or going through an ETL phase.</li>
+Learn how to handle dynamic data without changing a schema or going through an ETL phase.</li>
 <li><a href="/docs/analyzing-social-media">Analyzing Social Media</a><br>
-Analyze Twitter data in native JSON format using Apache Drill.<br></li>
+Analyze Twitter data in its native JSON format using Drill.<br></li>
 <li><a href="/docs/tableau-examples">Tableau Examples</a><br>
-Access Hive tables in Tableau.<br></li>
+Access Hive tables using Drill and Tableau.<br></li>
 <li><a href="/docs/using-microstrategy-analytics-with-apache-drill/">Using MicroStrategy Analytics with Apache Drill</a><br>
 Use the Drill ODBC driver from MapR to analyze data and generate a report using Drill from the MicroStrategy UI.<br></li>
 <li><a href="/docs/using-tibco-spotfire-desktop-with-drill/">Using Tibco Spotfire Desktop with Drill</a><br>
@@ -1031,9 +1031,9 @@ Connect Tableau 9 Desktop to Apache Drill, explore multiple data formats on Hado
 <li><a href="/docs/using-apache-drill-with-tableau-9-server">Using Apache Drill with Tableau 9 Server</a><br>
 Connect Tableau 9 Server to Apache Drill, explore multiple data formats on Hadoop, access semi-structured data, and share Tableau visualizations with others.<br></li>
 <li><a href="https://github.com/vicenteg/spot-price-history#drill-workshop---amazon-spot-prices">Using Drill to Analyze Amazon Spot Prices</a><br>
-A Drill workshop on github that covers views of JSON and Parquet data.<br></li>
+Use a Drill workshop on GitHub to create views of JSON and Parquet data.<br></li>
 <li><a href="http://drill.apache.org/blog/2014/12/09/running-sql-queries-on-amazon-s3/">Running Drill Queries on S3 Data</a><br>
-Nick Amato&#39;s blog that steps through querying files using Drill and Amazon Simple Storage Service (S3).<br></li>
+Step through querying files using Drill and Amazon Simple Storage Service (S3).<br></li>
 </ul>
 
     

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/docs/why-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/why-drill/index.html b/docs/why-drill/index.html
index 382ae4d..d44005a 100644
--- a/docs/why-drill/index.html
+++ b/docs/why-drill/index.html
@@ -1007,9 +1007,9 @@
       
         <h2 id="top-10-reasons-to-use-drill">Top 10 Reasons to Use Drill</h2>
 
-<h3 id="1.-get-started-in-minutes">1. Get started in minutes</h3>
+<h2 id="1.-get-started-in-minutes">1. Get started in minutes</h2>
 
-<p>It takes just a few minutes to get started with Drill. Untar the Drill software on your Mac or Windows laptop and run a query on a local file. No need to set up any infrastructure or to define schemas. Just point to the data, such as data in a file, directory, HBase table, and drill.</p>
+<p>It takes just a few minutes to get started with Drill. Untar the Drill software on your Linux, Mac, or Windows laptop and run a query on a local file. No need to set up any infrastructure or to define schemas. Just point to the data, such as data in a file, directory, or HBase table, and drill.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">$ tar -xvf apache-drill-&lt;version&gt;.tar.gz
 $ &lt;install directory&gt;/bin/drill-embedded
 0: jdbc:drill:zk=local&gt; SELECT * FROM cp.`employee.json` LIMIT 5;
@@ -1038,13 +1038,13 @@ ORDER BY sq.prod_id;
 </code></pre></div>
 <h2 id="4.-real-sql----not-&quot;sql-like&quot;">4. Real SQL -- not &quot;SQL-like&quot;</h2>
 
-<p>Drill supports the standard SQL:2003 syntax. No need to learn a new &quot;SQL-like&quot; language or struggle with a semi-functional BI tool. Drill supports many data types including DATE, INTERVALDAY/INTERVALYEAR, TIMESTAMP, and VARCHAR, as well as complex query constructs such as correlated sub-queries and joins in WHERE clauses. Here is an example of a TPC-H standard query that runs in Drill &quot;as is&quot;:</p>
+<p>Drill supports the standard SQL:2003 syntax. No need to learn a new &quot;SQL-like&quot; language or struggle with a semi-functional BI tool. Drill supports many data types including DATE, INTERVAL, TIMESTAMP, and VARCHAR, as well as complex query constructs such as correlated sub-queries and joins in WHERE clauses. Here is an example of a TPC-H standard query that runs in Drill:</p>
 
 <h3 id="tpc-h-query-4">TPC-H query 4</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT  o.o_orderpriority, count(*) AS order_count
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT  o.o_orderpriority, COUNT(*) AS order_count
 FROM orders o
-WHERE o.o_orderdate &gt;= date &#39;1996-10-01&#39;
-      AND o.o_orderdate &lt; date &#39;1996-10-01&#39; + interval &#39;3&#39; month
+WHERE o.o_orderdate &gt;= DATE &#39;1996-10-01&#39;
+      AND o.o_orderdate &lt; DATE &#39;1996-10-01&#39; + INTERVAL &#39;3&#39; MONTH
       AND EXISTS(
                  SELECT * FROM lineitem l 
                  WHERE l.l_orderkey = o.o_orderkey

http://git-wip-us.apache.org/repos/asf/drill-site/blob/3e2e1d4c/feed.xml
----------------------------------------------------------------------
diff --git a/feed.xml b/feed.xml
index acb9fc3..56ead64 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>/</link>
     <atom:link href="/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Tue, 08 Sep 2015 16:46:23 -0700</pubDate>
-    <lastBuildDate>Tue, 08 Sep 2015 16:46:23 -0700</lastBuildDate>
+    <pubDate>Tue, 08 Sep 2015 17:15:54 -0700</pubDate>
+    <lastBuildDate>Tue, 08 Sep 2015 17:15:54 -0700</lastBuildDate>
     <generator>Jekyll v2.5.2</generator>
     
       <item>