Posted to commits@drill.apache.org by ts...@apache.org on 2015/05/30 07:03:27 UTC

[01/26] drill git commit: exhume Basics Tutorial to address user question

Repository: drill
Updated Branches:
  refs/heads/gh-pages 446d71c24 -> a6822dc4f


exhume Basics Tutorial to address user question


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/55f25490
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/55f25490
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/55f25490

Branch: refs/heads/gh-pages
Commit: 55f2549073de462f3dd507e904b45dc4a110a287
Parents: 45c29be
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon May 25 12:21:22 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Mon May 25 12:21:22 2015 -0700

----------------------------------------------------------------------
 .../030-querying-plain-text-files.md            | 188 ++++++++++++++++++-
 .../040-querying-directories.md                 |  34 ++++
 .../030-date-time-functions-and-arithmetic.md   |   2 +-
 _docs/tutorials/020-drill-in-10-minutes.md      |   2 +-
 4 files changed, 219 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/55f25490/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
index 8924835..ab73c57 100644
--- a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
+++ b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
@@ -2,16 +2,20 @@
 title: "Querying Plain Text Files"
 parent: "Querying a File System"
 ---
-You can use Drill to access both structured file types and plain text files
-(flat files). This section shows a few simple examples that work on flat
-files:
+You can use Drill to access both structured file types and plain text files
+(flat files), such as the following:
 
   * CSV files (comma-separated values)
   * TSV files (tab-separated values)
   * PSV files (pipe-separated values)
 
-The examples here show CSV files, but queries against TSV and PSV files return
-equivalent results. However, make sure that your registered storage plugins
+Follow these general guidelines for querying a plain text file:
+
+  * Use a storage plugin that defines the file format of the data in the plain text file, such as comma-separated values (CSV) or tab-separated values (TSV).
+  * In the SELECT statement, use the `COLUMNS[n]` syntax in lieu of column names, which do not exist in a plain text file. The first column is column `0`.
+  * In the FROM clause, use the path to the plain text file instead of a table name. Enclose the path and file name in backticks, as in the sketch that follows this list.
+
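+For example, the following minimal sketch applies these guidelines to a hypothetical CSV file at `/tmp/example.csv` (substitute a real path on your system):
+
+    SELECT COLUMNS[0], COLUMNS[1]
+    FROM dfs.`/tmp/example.csv`
+    LIMIT 5;
+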
+Make sure that your registered storage plugins
 recognize the appropriate file types and extensions. For example, the
 following configuration expects PSV files (files with a pipe delimiter) to
 have a `tbl` extension, not a `psv` extension. Drill returns a "file not
@@ -117,3 +121,177 @@ example:
 Note that the restriction with the use of aliases applies to queries against
 all data sources.
 
+## Example of Querying a TSV File
+
+This example uses a tab-separated value (TSV) file that you download from a
+Google internet site. The data in the file consists of phrases from books that
+Google scans and generates for its [Google Books Ngram
+Viewer](http://storage.googleapis.com/books/ngrams/books/datasetsv2.html). You
+use the data to find the relative frequencies of Ngrams. 
+
+### About the Data
+
+Each line in the TSV file has the following structure:
+
+`ngram TAB year TAB match_count TAB volume_count NEWLINE`
+
+For example, lines 1722089 and 1722090 in the file contain this data:
+
+| ngram                              | year | match_count | volume_count |
+|------------------------------------|------|-------------|--------------|
+| Zoological Journal of the Linnean  | 2007 | 284         | 101          |
+| Zoological Journal of the Linnean  | 2008 | 257         | 87           |
+  
+In 2007, "Zoological Journal of the Linnean" occurred 284 times overall in 101
+distinct books of the Google sample.
+
+### Download and Set Up the Data
+
+After downloading the file, you use the `dfs` storage plugin, and then select
+data from the file as you would a table. In the SELECT statement, enclose the
+path and name of the file in backticks.
+
+  1. Download the compressed Google Ngram data from this location:  
+    
+     http://storage.googleapis.com/books/ngrams/books/googlebooks-eng-all-5gram-20120701-zo.gz
+
+  2. Unzip the file.  
+     A file named googlebooks-eng-all-5gram-20120701-zo appears.
+
+  3. Change the file name to add a `.tsv` extension.  
+The Drill `dfs` storage plugin definition includes a TSV format that requires
+a file to have this extension.
+
+### Query the Data
+
+Get data about "Zoological Journal of the Linnean" that appears more than 250
+times a year in the books that Google scans.
+
+  1. Switch back to using the `dfs` storage plugin.
+  
+          USE dfs;
+
+  2. Issue a SELECT statement to get the first three columns in the file.  
+     * In the FROM clause of the example, substitute your path to the TSV file.  
+     * Use aliases to replace the default column headers, such as EXPR$0, with the user-friendly column headers Ngram, Publication_Date, and Frequency.
+     * In the WHERE clause, enclose the string literal "Zoological Journal of the Linnean" in single quotation marks.  
+     * Limit the output to 10 rows.  
+  
+         SELECT COLUMNS[0] AS Ngram,
+                COLUMNS[1] AS Publication_Date,
+                COLUMNS[2] AS Frequency
+         FROM `/Users/drilluser/Downloads/googlebooks-eng-all-5gram-20120701-zo.tsv`
+         WHERE ((columns[0] = 'Zoological Journal of the Linnean')
+             AND (columns[2] > 250)) LIMIT 10;
+
+     The output is:
+
+         +------------------------------------+-------------------+------------+
+         |               Ngram                | Publication_Date  | Frequency  |
+         +------------------------------------+-------------------+------------+
+         | Zoological Journal of the Linnean  | 1993              | 297        |
+         | Zoological Journal of the Linnean  | 1997              | 255        |
+         | Zoological Journal of the Linnean  | 2003              | 254        |
+         | Zoological Journal of the Linnean  | 2007              | 284        |
+         | Zoological Journal of the Linnean  | 2008              | 257        |
+         +------------------------------------+-------------------+------------+
+         5 rows selected (1.175 seconds)
+
+The Drill default storage plugins support common file formats. If you need
+support for some other file format, such as GZ, create a custom storage plugin. You can also create a storage plugin to simplify querying file having long path names. A workspace name replaces the long path name.
+
+
+## Create a Storage Plugin
+
+This example covers how to create and use a storage plugin to simplify queries or to query a file type that `dfs` does not specify, GZ in this case. First, you create the storage plugin in the Drill Web UI. Next, you connect to the
+file through the plugin to query a file.
+
+You can create a storage plugin using the Apache Drill Web UI to query the GZ file containing the compressed TSV data directly.
+
+  1. Create an `ngram` directory on your file system.
+  2. Copy the GZ file `googlebooks-eng-all-5gram-20120701-zo.gz` to the `ngram` directory.
+  3. Open the Drill Web UI by navigating to <http://localhost:8047/storage>.   
+     To open the Drill Web UI, the [Drill shell]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x/) must still be running.
+  4. In New Storage Plugin, type `myplugin`.  
+     ![new plugin]({{ site.baseurl }}/docs/img/ngram_plugin.png)    
+  5. Click **Create**.  
+     The Configuration screen appears.
+  6. Replace null with the following storage plugin definition, except on the location line, use the path to your `ngram` directory instead of the drilluser's path and give your workspace an arbitrary name, for example, ngram:
+  
+        {
+          "type": "file",
+          "enabled": true,
+          "connection": "file:///",
+          "workspaces": {
+            "ngram": {
+              "location": "/Users/drilluser/ngram",
+              "writable": false,
+              "defaultInputFormat": null
+           }
+         },
+         "formats": {
+           "tsv": {
+             "type": "text",
+             "extensions": [
+               "gz"
+             ],
+             "delimiter": "\t"
+            }
+          }
+        }
+
+  7. Click **Create**.  
+     The success message appears briefly.
+  8. Click **Back**.  
+     The new plugin appears in Enabled Storage Plugins.  
+     ![new plugin]({{ site.baseurl }}/docs/img/ngram_plugin.png) 
+  9. Go back to the Drill shell, and list the storage plugins.  
+          SHOW DATABASES;
+
+          +---------------------+
+          |     SCHEMA_NAME     |
+          +---------------------+
+          | INFORMATION_SCHEMA  |
+          | cp.default          |
+          | dfs.default         |
+          | dfs.root            |
+          | dfs.tmp             |
+          | myplugin.default    |
+          | myplugin.ngram      |
+          | sys                 |
+          +---------------------+
+          8 rows selected (0.105 seconds)
+
+Your custom plugin appears in the list and has two workspaces: the `ngram`
+workspace that you defined and a default workspace.
+
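+Optionally, you can confirm that the `ngram` workspace points to the directory that holds the GZ file by listing its files. This is only a sketch; it assumes you copied `googlebooks-eng-all-5gram-20120701-zo.gz` into the `ngram` directory as described above.
+
+    USE myplugin.ngram;
+    SHOW FILES;
+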
+### Connect to and Query a File
+
+When querying the same data source repeatedly, avoiding long path names is
+important. This exercise demonstrates how to simplify the query. Instead of
+using the full path to the Ngram file, you use dot notation in the FROM
+clause.
+
+``<workspace name>.`<location>```
+
+This syntax assumes you connected to a storage plugin that defines the
+location of the data. To query the data source while you are _not_ connected to
+that storage plugin, include the plugin name:
+
+``<plugin name>.<workspace name>.`<location>```
+
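+For example, a fully qualified sketch of the query in the next exercise, assuming the GZ file sits at the root of the `ngram` workspace location, looks like this:
+
+    SELECT COLUMNS[0]
+    FROM myplugin.ngram.`/googlebooks-eng-all-5gram-20120701-zo.gz`
+    LIMIT 5;
+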
+This exercise shows how to query Ngram data when you are connected to `myplugin`.
+
+  1. Connect to the ngram file through the custom storage plugin.  
+     `USE myplugin;`
+  2. Get data about "Zoological Journal of the Linnean" that appears more than 250 times a year in the books that Google scans. In the FROM clause, instead of using the full path to the file as you did in the last exercise, connect to the data using the storage plugin workspace name ngram.
+  
+         SELECT COLUMNS[0], 
+                COLUMNS[1], 
+                COLUMNS[2] 
+         FROM ngram.`/googlebooks-eng-all-5gram-20120701-zo.gz` 
+         WHERE ((columns[0] = 'Zoological Journal of the Linnean') 
+          AND (columns[2] > 250)) 
+         LIMIT 10;
+
+     The five rows of output appear.  
+
+To continue with this example and query multiple files in a directory, see the section, ["Example of Querying Multiple Files in a Directory"]({{site.baseurl}}/docs/querying-directories/#example-of-querying-multiple-files-in-a-directory).
+

http://git-wip-us.apache.org/repos/asf/drill/blob/55f25490/_docs/query-data/query-a-file-system/040-querying-directories.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/040-querying-directories.md b/_docs/query-data/query-a-file-system/040-querying-directories.md
index 1a55b75..4a5b4ae 100644
--- a/_docs/query-data/query-a-file-system/040-querying-directories.md
+++ b/_docs/query-data/query-a-file-system/040-querying-directories.md
@@ -89,4 +89,38 @@ first level down from logs, `dir1` to the next level, and so on.
     +------------+------------+------------+------------+------------+------------+------------+------------+------------+-------------+
     10 rows selected (0.583 seconds)
 
+## Example of Querying Multiple Files in a Directory
+
+This example continues the example in the section ["Example of Querying a TSV File"]({{site.baseurl}}/docs/querying-plain-text-files/#example-of-querying-a-tsv-file). It adds a `myfiles` subdirectory to the `ngram` directory and uses the [custom plugin workspace]({{site.baseurl}}/docs/querying-plain-text-files/#create-a-storage-plugin) you created earlier.
+
+You download a second Ngram file. Next, you
+move both Ngram GZ files you downloaded to the `myfiles` subdirectory of the
+`ngram` directory. Finally, using the custom plugin workspace, you query both
+files. In the FROM clause, simply reference the subdirectory.
+
+  1. Download a second file of compressed Google Ngram data from this location: 
+  
+     http://storage.googleapis.com/books/ngrams/books/googlebooks-eng-all-2gram-20120701-ze.gz
+  2. Move `googlebooks-eng-all-2gram-20120701-ze.gz` to the `ngram/myfiles` subdirectory. 
+  3. Move the 5gram file you downloaded earlier `googlebooks-eng-all-5gram-20120701-zo.gz` to the `ngram/myfiles` subdirectory.
+  4. In the Drill shell, use the `myplugin.ngram` workspace.
+   
+          USE myplugin.ngram;
+  5. Query the `myfiles` directory for the ngrams "Zoological Journal of the Linnean" or "zero temperatures" in books published in 1998.
+  
+          SELECT * 
+          FROM myfiles 
+          WHERE (((COLUMNS[0] = 'Zoological Journal of the Linnean')
+            OR (COLUMNS[0] = 'zero temperatures')) 
+            AND (COLUMNS[1] = '1998'));
+     The output lists ngrams from both files.
+
+          +----------------------------------------------------------+
+          |                         columns                          |
+          +----------------------------------------------------------+
+          | ["Zoological Journal of the Linnean","1998","157","53"]  |
+          | ["zero temperatures","1998","628","487"]                 |
+          +----------------------------------------------------------+
+          2 rows selected (7.007 seconds)
+
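+If you are not connected to the `myplugin.ngram` workspace, you can reference the directory by its fully qualified name instead. The following sketch assumes the same `myfiles` subdirectory and is otherwise equivalent to the query above:
+
+    SELECT *
+    FROM myplugin.ngram.`myfiles`
+    WHERE (((COLUMNS[0] = 'Zoological Journal of the Linnean')
+      OR (COLUMNS[0] = 'zero temperatures'))
+      AND (COLUMNS[1] = '1998'));
+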
 For more information about querying directories, see the section, ["Query Directory Functions"]({{site.baseurl}}/docs/query-directory-functions).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/55f25490/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md b/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
index 23b0983..a6df716 100644
--- a/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
+++ b/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
@@ -46,7 +46,7 @@ Find the interval between midnight today, April 3, 2015, and June 13, 1957.
     +------------+
     1 row selected (0.064 seconds)
 
-Find the interval between midnight today, May 21, 2015, and hire dates of employees 578 and 761 in the employees.json file included with the Drill installation.
+Find the interval between midnight today, May 21, 2015, and the hire dates of employees 578 and 761 in the `employee.json` file included with the Drill installation.
 
     SELECT AGE(CAST(hire_date AS TIMESTAMP)) FROM cp.`employee.json` where employee_id IN( '578','761');
     +------------------+

http://git-wip-us.apache.org/repos/asf/drill/blob/55f25490/_docs/tutorials/020-drill-in-10-minutes.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/020-drill-in-10-minutes.md b/_docs/tutorials/020-drill-in-10-minutes.md
index 6584021..cf21743 100755
--- a/_docs/tutorials/020-drill-in-10-minutes.md
+++ b/_docs/tutorials/020-drill-in-10-minutes.md
@@ -45,7 +45,7 @@ Complete the following steps to install Drill:
 
 1. In a terminal window, change to the directory where you want to install Drill.
 
-2. To download the latest version of Apache Drill, download Drill from the [Drill web site](http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz)or run one of the following commands, depending on which you have installed on your system:
+2. To download the latest version of Apache Drill, download Drill from the [Drill web site](http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz) or run one of the following commands, depending on which you have installed on your system:
 
    * `wget http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz`  
    *  `curl -o apache-drill-1.0.0.tar.gz http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz`  


[02/26] drill git commit: DRILL-3179

Posted by ts...@apache.org.
DRILL-3179


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/c6be6cc5
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/c6be6cc5
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/c6be6cc5

Branch: refs/heads/gh-pages
Commit: c6be6cc507faa0bf653c7d7ba8953e168476125b
Parents: 55f2549
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Tue May 26 14:39:52 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Tue May 26 14:39:52 2015 -0700

----------------------------------------------------------------------
 .../020-configuring-drill-memory.md             |   3 +
 .../060-configuring-a-shared-drillbit.md        |   3 +
 .../010-configuration-options-introduction.md   |   9 +-
 .../035-plugin-configuration-introduction.md    |   4 +-
 .../050-json-data-model.md                      | 106 ++++++++-----------
 .../030-querying-plain-text-files.md            |  22 ++--
 6 files changed, 64 insertions(+), 83 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/c6be6cc5/_docs/configure-drill/020-configuring-drill-memory.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/020-configuring-drill-memory.md b/_docs/configure-drill/020-configuring-drill-memory.md
index 5948bcc..30d5121 100644
--- a/_docs/configure-drill/020-configuring-drill-memory.md
+++ b/_docs/configure-drill/020-configuring-drill-memory.md
@@ -38,3 +38,6 @@ The `drill-env.sh` file contains the following options:
 * Xmx specifies the maximum memory allocation pool for a Java Virtual Machine (JVM). 
 * Xms specifies the initial memory allocation pool.
 
+If performance is an issue, replace the `-ea` flag with `-Dbounds=false`, as shown in the following example:
+
+    export DRILL_JAVA_OPTS="-Xms1G -Xmx$DRILL_MAX_HEAP -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=1G -Dbounds=false"
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/c6be6cc5/_docs/configure-drill/060-configuring-a-shared-drillbit.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/060-configuring-a-shared-drillbit.md b/_docs/configure-drill/060-configuring-a-shared-drillbit.md
index 1070586..2d31cef 100644
--- a/_docs/configure-drill/060-configuring-a-shared-drillbit.md
+++ b/_docs/configure-drill/060-configuring-a-shared-drillbit.md
@@ -10,6 +10,9 @@ Set [options in sys.options]({{site.baseurl}}/docs/configuration-options-introdu
 
 * exec.queue.large  
 * exec.queue.small  
+* exec.queue.threshold
+
+The `exec.queue.threshold` option sets the cost threshold for determining whether a query is large or small, based on complexity. Complex queries have higher thresholds. The default, 30,000,000, represents the estimated number of rows that a query will process. To serialize incoming queries, set the small queue at 0 and the threshold at 0.
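+
+For example, a minimal sketch of serializing incoming queries, using the `ALTER SYSTEM` syntax shown elsewhere in these docs and assuming you also want queuing enabled:
+
+    ALTER SYSTEM SET `exec.queue.enable` = true;
+    ALTER SYSTEM SET `exec.queue.small` = 0;
+    ALTER SYSTEM SET `exec.queue.threshold` = 0;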
 
 For more information, see the section, ["Performance Tuning"](/docs/performance-tuning-introduction/).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/c6be6cc5/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index bdd19f3..524ff67 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -24,10 +24,10 @@ The sys.options table lists the following options that you can set as a system o
 | exec.java_compiler_janino_maxsize              | 262144           | See the exec.java_compiler option comment. Accepts inputs of type LONG.                                                                                                                                                                                                                                                                                          |
 | exec.max_hash_table_size                       | 1073741824       | Ending size for hash tables. Range: 0 - 1073741824.                                                                                                                                                                                                                                                                                                              |
 | exec.min_hash_table_size                       | 65536            | Starting size for hash tables. Increase according to available memory to improve performance. Increasing for very large aggregations or joins when you have large amounts of memory for Drill to use. Range: 0 - 1073741824.                                                                                                                                     |
-| exec.queue.enable                              | FALSE            | Changes the state of query queues to control the number of queries that run simultaneously.                                                                                                                                                                                                                                                                      |
+| exec.queue.enable                              | FALSE            | Changes the state of query queues. False allows unlimited concurrent queries.                                                                                                                                                                                                                                                                                    |
 | exec.queue.large                               | 10               | Sets the number of large queries that can run concurrently in the cluster. Range: 0-1000                                                                                                                                                                                                                                                                         |
 | exec.queue.small                               | 100              | Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001                                                                                                                                                                                                                                                                         |
-| exec.queue.threshold                           | 30000000         | Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807                                                                                                                                                          |
+| exec.queue.threshold                           | 30000000         | Sets the cost threshold for determining whether query is large or small based on complexity. Complex queries have higher thresholds. By default, an estimated 30,000,000 rows will be processed by a query. Range: 0-9223372036854775807                                                                                                                         |
 | exec.queue.timeout_millis                      | 300000           | Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807                                                                                                                                                                                                                                                                |
 | exec.schedule.assignment.old                   | FALSE            | Used to prevent query failure when no work units are assigned to a minor fragment, particularly when the number of files is much larger than the number of leaf fragments.                                                                                                                                                                                       |
 | exec.storage.enable_new_text_reader            | TRUE             | Enables the text reader that complies with the RFC 4180 standard for text/csv files.                                                                                                                                                                                                                                                                             |
@@ -80,7 +80,4 @@ The sys.options table lists the following options that you can set as a system o
 | store.parquet.enable_dictionary_encoding       | FALSE            | For internal use. Do not change.                                                                                                                                                                                                                                                                                                                                 |
 | store.parquet.use_new_reader                   | FALSE            | Not supported in this release.                                                                                                                                                                                                                                                                                                                                   |
 | store.text.estimated_row_size_bytes            | 100              | Estimate of the row size in a delimited text file, such as csv. The closer to actual, the better the query plan. Used for all csv files in the system/session where the value is set. Impacts the decision to plan a broadcast join or not.                                                                                                                      |
-| window.enable                                  | FALSE            | Not supported in this release. Coming soon.                                                                                                                                                                                                                                                                                                                      |
-
-
-
+| window.enable                                  | FALSE            | Not supported in this release. Coming soon.                                                                                                                                                                                                                                                                                                                      |
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/c6be6cc5/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/035-plugin-configuration-introduction.md b/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
index 7c850ae..c6bcbf8 100644
--- a/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
+++ b/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
@@ -58,9 +58,9 @@ The following table describes the attributes you configure for storage plugins.
   </tr>
   <tr>
     <td>"workspaces". . . "location"</td>
-    <td>"location": "/"<br>"location": "/tmp"</td>
+    <td>"location": "/Users/johndoe/mydata"<br>"location": "/tmp"</td>
     <td>no</td>
-    <td>Path to a directory on the file system.</td>
+    <td>Full path to a directory on the file system.</td>
   </tr>
   <tr>
     <td>"workspaces". . . "writable"</td>

http://git-wip-us.apache.org/repos/asf/drill/blob/c6be6cc5/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 70ccf94..1b1660d 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -126,7 +126,9 @@ Using the following techniques, you can query complex, nested JSON:
 * Generate key/value pairs for loosely structured data
 
 ## Example: Flatten and Generate Key Values for Complex JSON
-This example uses the following data that represents unit sales of tickets to events that were sold over a period of for several days in different states:
+This example uses the following data, which represents unit sales of tickets to events sold over several days in December:
+
+### ticket_sales.json Contents
 
     {
       "type": "ticket",
@@ -151,56 +153,32 @@ This example uses the following data that represents unit sales of tickets to ev
     
 Take a look at the data in Drill:
 
-    SELECT * FROM dfs.`/Users/drilluser/ticket_sales.json`;
-    +------------+------------+------------+------------+------------+
-    |    type    |  channel   |   month    |    day     |   sales    |
-    +------------+------------+------------+------------+------------+
-    | ticket     | 123455     | 12         | ["15","25","28","31"] | {"NY":"532806","PA":"112889","TX":"898999","UT":"10875"} |
-    | ticket     | 123456     | 12         | ["10","15","19","31"] | {"NY":"972880","PA":"857475","CA":"87350","OR":"49999"} |
-    +------------+------------+------------+------------+------------+
-    2 rows selected (0.041 seconds)
-
-### Flatten Arrays
-The FLATTEN function breaks the following _day arrays from the JSON example file shown earlier into separate rows.
-
-    "_day": [ 15, 25, 28, 31 ] 
-    "_day": [ 10, 15, 19, 31 ]
-
-Flatten the sales column of the ticket data onto separate rows, one row for each day in the array, for a better view of the data. FLATTEN copies the sales data related in the JSON object on each row.  Using the all (*) wildcard as the argument to flatten is not supported and returns an error.
-
-    SELECT flatten(tkt._day) AS `day`, tkt.sales FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
-
-    +------------+------------+
-    |    day     |   sales    |
-    +------------+------------+
-    | 15         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
-    | 25         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
-    | 28         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
-    | 31         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
-    | 10         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
-    | 15         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
-    | 19         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
-    | 31         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
-    +------------+------------+
-    8 rows selected (0.072 seconds)
+    +---------+---------+---------------------------------------------------------------+
+    |  type   |  venue  |                             sales                             |
+    +---------+---------+---------------------------------------------------------------+
+    | ticket  | 123455  | {"12-10":532806,"12-11":112889,"12-19":898999,"12-21":10875}  |
+    | ticket  | 123456  | {"12-10":87350,"12-19":49999,"12-21":857475,"12-15":972880}   |
+    +---------+---------+---------------------------------------------------------------+
+    2 rows selected (1.343 seconds)
+
 
 ### Generate Key/Value Pairs
-Use the KVGEN (Key Value Generator) function to generate key/value pairs from complex data. Generating key/value pairs is often helpful when working with data that contains arbitrary maps consisting of dynamic and unknown element names, such as the ticket sales data by state. For example purposes, take a look at how kvgen breaks the sales data into keys and values representing the states and number of tickets sold:
+Continuing with the data from the [previous example]({{site.baseurl}}/docs/json-data-model/#example:-flatten-and-generate-key-values-for-complex-json), use the KVGEN (Key Value Generator) function to generate key/value pairs from complex data. Generating key/value pairs is often helpful when working with data that contains arbitrary maps consisting of dynamic and unknown element names, such as the ticket sales data in this example. Take a look at how KVGEN breaks the sales data into keys and values representing the key dates and the number of tickets sold:
 
-    SELECT KVGEN(tkt.sales) AS state_sales FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
-    +-------------+
-    | state_sales |
-    +-------------+
-    | [{"key":"NY","value":532806},{"key":"PA","value":112889},{"key":"TX","value":898999},{"key":"UT","value":10875}] |
-    | [{"key":"NY","value":972880},{"key":"PA","value":857475},{"key":"CA","value":87350},{"key":"OR","value":49999}] |
-    +-------------+
-    2 rows selected (0.039 seconds)
+    SELECT KVGEN(tkt.sales) AS `key dates:tickets sold` FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
+    +---------------------------------------------------------------------------------------------------------------------------------------+
+    |                                                        key dates:tickets sold                                                         |
+    +---------------------------------------------------------------------------------------------------------------------------------------+
+    | [{"key":"12-10","value":"532806"},{"key":"12-11","value":"112889"},{"key":"12-19","value":"898999"},{"key":"12-21","value":"10875"}] |
+    | [{"key":"12-10","value":"87350"},{"key":"12-19","value":"49999"},{"key":"12-21","value":"857475"},{"key":"12-15","value":"972880"}] |
+    +---------------------------------------------------------------------------------------------------------------------------------------+
+    2 rows selected (0.106 seconds)
 
 KVGEN allows queries against maps where the keys themselves represent data rather than a schema, as shown in the next example.
 
 ### Flatten JSON Data
 
-FLATTEN breaks the list of key-value pairs into separate rows on which you can apply analytic functions. FLATTEN takes a JSON array, such as the output from kvgen(sales), as an argument. Using the all (*) wildcard as the argument is not supported and returns an error.
+FLATTEN breaks the list of key-value pairs into separate rows on which you can apply analytic functions. FLATTEN takes a JSON array, such as the output from kvgen(sales), as an argument. Using the all (*) wildcard as the argument is not supported and returns an error. The following example continues using data from the [previous example]({{site.baseurl}}/docs/json-data-model/#example:-flatten-and-generate-key-values-for-complex-json):
 
     SELECT FLATTEN(kvgen(sales)) Sales 
     FROM dfs.`/Users/drilluser/drill/ticket_sales.json`;
@@ -220,41 +198,41 @@ FLATTEN breaks the list of key-value pairs into separate rows on which you can a
     8 rows selected (0.171 seconds)
 
 ### Example: Aggregate Loosely Structured Data
-Use flatten and kvgen together to aggregate the data. Continuing with the previous example, make sure all text mode is set to false to sum numbers. Drill returns an error if you attempt to sum data in all text mode. 
+Use flatten and kvgen together to aggregate the data from the [previous example]({{site.baseurl}}/docs/json-data-model/#example:-flatten-and-generate-key-values-for-complex-json). Make sure all text mode is set to false to sum numbers. Drill returns an error if you attempt to sum data in all text mode. 
 
     ALTER SYSTEM SET `store.json.all_text_mode` = false;
     
 Sum the ticket sales by combining the `SUM`, `FLATTEN`, and `KVGEN` functions in a single query.
 
-    SELECT SUM(tkt.tot_sales.`value`) AS TotalSales FROM (SELECT flatten(kvgen(sales)) tot_sales FROM dfs.`/Users/drilluser/ticket_sales.json`) tkt;
+    SELECT SUM(tkt.tot_sales.`value`) AS TicketsSold FROM (SELECT flatten(kvgen(sales)) tot_sales FROM dfs.`/Users/drilluser/ticket_sales.json`) tkt;
 
-    +------------+
-    | TotalSales |
-    +------------+
-    | 3523273    |
-    +------------+
-    1 row selected (0.081 seconds)
+    +--------------+
+    | TicketsSold  |
+    +--------------+
+    | 3523273.0    |
+    +--------------+
+    1 row selected (0.244 seconds)
 
 ### Example: Aggregate and Sort Data
-Sum the ticket sales by state and group by state and sort in ascending order. 
+Sum the ticket sales, group by December date, and sort in ascending order of total sales.
 
-    SELECT `right`(tkt.tot_sales.key,2) State, 
+    SELECT `right`(tkt.tot_sales.key,2) `December Date`, 
     SUM(tkt.tot_sales.`value`) AS TotalSales 
-    FROM (SELECT flatten(kvgen(sales)) tot_sales 
+    FROM (SELECT FLATTEN(kvgen(sales)) tot_sales 
     FROM dfs.`/Users/drilluser/ticket_sales.json`) tkt 
     GROUP BY `right`(tkt.tot_sales.key,2) 
     ORDER BY TotalSales;
 
-    +---------------+--------------+
-    | December_Date |  TotalSales  |
-    +---------------+--------------+
-    | 11            | 112889       |
-    | 10            | 620156       |
-    | 21            | 868350       |
-    | 19            | 948998       |
-    | 15            | 972880       |
-    +---------------+--------------+
-    5 rows selected (0.203 seconds)
+    +----------------+-------------+
+    | December Date  | TotalSales  |
+    +----------------+-------------+
+    | 11             | 112889.0    |
+    | 10             | 620156.0    |
+    | 21             | 868350.0    |
+    | 19             | 948998.0    |
+    | 15             | 972880.0    |
+    +----------------+-------------+
+    5 rows selected (0.252 seconds)
 
 ### Example: Access a Map Field in an Array
 To access a map field in an array, use dot notation to drill down through the hierarchy of the JSON data to the field. Examples are based on the following [City Lots San Francisco in .json](https://github.com/zemirco/sf-city-lots-json), modified slightly as described in the empty array workaround in ["Limitations and Workarounds."]({{ site.baseurl }}/docs/json-data-model#empty-array)

http://git-wip-us.apache.org/repos/asf/drill/blob/c6be6cc5/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
index ab73c57..aeb3543 100644
--- a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
+++ b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
@@ -157,7 +157,7 @@ path and name of the file in back ticks.
 
   3. Change the file name to add a `.tsv` extension.  
 The Drill `dfs` storage plugin definition includes a TSV format that requires
-a file to have this extension.
+a file to have this extension. Later, you learn how to skip this step and query the GZ file directly.
 
 ### Query the Data
 
@@ -174,12 +174,12 @@ times a year in the books that Google scans.
      * In the WHERE clause, enclose the string literal "Zoological Journal of the Linnean" in single quotation marks.  
      * Limit the output to 10 rows.  
   
-         SELECT COLUMNS[0] AS Ngram,
-                COLUMNS[1] AS Publication_Date,
-                COLUMNS[2] AS Frequency
-         FROM `/Users/drilluser/Downloads/googlebooks-eng-all-5gram-20120701-zo.tsv`
-         WHERE ((columns[0] = 'Zoological Journal of the Linnean')
-             AND (columns[2] > 250)) LIMIT 10;
+            SELECT COLUMNS[0] AS Ngram,
+                   COLUMNS[1] AS Publication_Date,
+                   COLUMNS[2] AS Frequency
+            FROM `/Users/drilluser/Downloads/googlebooks-eng-all-5gram-20120701-zo.tsv`
+            WHERE ((columns[0] = 'Zoological Journal of the Linnean')
+            AND (columns[2] > 250)) LIMIT 10;
 
      The output is:
 
@@ -195,7 +195,7 @@ times a year in the books that Google scans.
          5 rows selected (1.175 seconds)
 
 The Drill default storage plugins support common file formats. If you need
-support for some other file format, such as GZ, create a custom storage plugin. You can also create a storage plugin to simplify querying file having long path names. A workspace name replaces the long path name.
+support for some other file format, such as GZ, create a custom storage plugin. You can also create a storage plugin to simplify querying files having long path names. A workspace name replaces the long path name.
 
 
 ## Create a Storage Plugin
@@ -203,7 +203,7 @@ support for some other file format, such as GZ, create a custom storage plugin.
 This example covers how to create and use a storage plugin to simplify queries or to query a file type that `dfs` does not specify, GZ in this case. First, you create the storage plugin in the Drill Web UI. Next, you connect to the
 file through the plugin to query a file.
 
-You can create a storage plugin using the Apache Drill Web UI to query the GZ file containing the compressed TSV data directly.
+You can create a storage plugin using the Apache Drill Web UI to query the GZ file containing the compressed TSV data.
 
   1. Create an `ngram` directory on your file system.
   2. Copy the GZ file `googlebooks-eng-all-5gram-20120701-zo.gz` to the `ngram` directory.
@@ -213,7 +213,7 @@ You can create a storage plugin using the Apache Drill Web UI to query the GZ fi
      ![new plugin]({{ site.baseurl }}/docs/img/ngram_plugin.png)    
   5. Click **Create**.  
      The Configuration screen appears.
-  6. Replace null with the following storage plugin definition, except on the location line, use the path to your `ngram` directory instead of the drilluser's path and give your workspace an arbitrary name, for example, ngram:
+  6. Replace null with the following storage plugin definition. On the location line, use the *full* path to your `ngram` directory instead of the drilluser's path, and give your workspace an arbitrary name, for example, ngram:
   
         {
           "type": "file",
@@ -288,7 +288,7 @@ This exercise shows how to query Ngram data when you are connected to `myplugin`
                 COLUMNS[2] 
          FROM ngram.`/googlebooks-eng-all-5gram-20120701-zo.gz` 
          WHERE ((columns[0] = 'Zoological Journal of the Linnean') 
-          AND (columns[2] > 250)) 
+         AND (columns[2] > 250)) 
          LIMIT 10;
 
      The five rows of output appear.  


[18/26] drill git commit: fonts fixing

Posted by ts...@apache.org.
fonts fixing


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/e643987d
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/e643987d
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/e643987d

Branch: refs/heads/gh-pages
Commit: e643987d2f263708ae2b691ec27e32a632018e97
Parents: c6eedf4
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 13:34:39 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 13:34:39 2015 -0400

----------------------------------------------------------------------
 _sass/_doc-content.scss | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/e643987d/_sass/_doc-content.scss
----------------------------------------------------------------------
diff --git a/_sass/_doc-content.scss b/_sass/_doc-content.scss
index bab2acc..6ed971d 100644
--- a/_sass/_doc-content.scss
+++ b/_sass/_doc-content.scss
@@ -186,6 +186,7 @@ div.docsidebar ul li{
 .docsidebarwrapper a{
   color: #333333;
   text-decoration: none;
+  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
 }
 
 .docsidebarwrapper > ul > ul.current_section{
@@ -205,6 +206,7 @@ div.docsidebar ul li{
   line-height: 24px;
   width: 100%;
   display: inline-block;
+  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
 }
 
 div.docsidebar ul ul, div.docsidebar ul.want-points{


[21/26] drill git commit: safari fix

Posted by ts...@apache.org.
safari fix


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/2dc24af1
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/2dc24af1
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/2dc24af1

Branch: refs/heads/gh-pages
Commit: 2dc24af131cba7df9bc8f6bcb07afd9fb45ff4bf
Parents: dfa52dc
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 15:15:23 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 15:15:23 2015 -0400

----------------------------------------------------------------------
 index.html | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/2dc24af1/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index 513dd7d..aad4ace 100755
--- a/index.html
+++ b/index.html
@@ -45,10 +45,10 @@ $(document).ready(function() {
   <div class="item">
     <div class="headlines tc">
       <div id="video-slider" class="slider">
-        <div class="slide"><a class="various fancybox.iframe" href="https://www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-65c42i7Xg7Q.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">The Rise of the Non-Relational Datastore</div></div>
-        <div class="slide"><a class="various fancybox.iframe" href="https://www.youtube.com/embed/MYY51kiFPTk"><img src="{{ site.baseurl }}/images/thumbnail-MYY51kiFPTk.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Deployment Options and BI Tools</div></div>
-        <div class="slide"><a class="various fancybox.iframe" href="https://www.youtube.com/embed/bhmNbH2yzhM"><img src="{{ site.baseurl }}/images/thumbnail-bhmNbH2yzhM.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Connecting to Data Sources</div></div>
-        <div class="slide"><a class="various fancybox.iframe" href="https://www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-6pGeQOXDdD8.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">High Performance with a JSON Data Model</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-65c42i7Xg7Q.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">The Rise of the Non-Relational Datastore</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="/www.youtube.com/embed/MYY51kiFPTk"><img src="{{ site.baseurl }}/images/thumbnail-MYY51kiFPTk.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Deployment Options and BI Tools</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/bhmNbH2yzhM"><img src="{{ site.baseurl }}/images/thumbnail-bhmNbH2yzhM.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Connecting to Data Sources</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-6pGeQOXDdD8.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">High Performance with a JSON Data Model</div></div>
       </div>
       <h1 class="main-headline">Apache Drill</h1>
       <h2 id="sub-headline">Schema-free SQL Query Engine <br class="mobile-break" />for Hadoop, NoSQL and <br class="mobile-break" />Cloud Storage</h2>


[13/26] drill git commit: aasgsdgsd

Posted by ts...@apache.org.
aasgsdgsd


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/5f066c8e
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/5f066c8e
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/5f066c8e

Branch: refs/heads/gh-pages
Commit: 5f066c8e09d3a2f9ab316bdd80cfc36cfb3b0743
Parents: eb7e446
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 12:01:14 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 12:01:14 2015 -0400

----------------------------------------------------------------------
 js/script.js | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/5f066c8e/js/script.js
----------------------------------------------------------------------
diff --git a/js/script.js b/js/script.js
index bd99f3f..fa39fd6 100755
--- a/js/script.js
+++ b/js/script.js
@@ -38,12 +38,15 @@ $(document).ready(function(e) {
 	
 	$(window).scroll(onScroll);
 
-
-    name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
-    var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
-    results = regex.exec(location.search);
-    alert(results);
-    
+    var pathname = window.location.pathname;
+    var pathSlashesReplaced = pathname.replace(/\//g, " ");
+    var pathSlashesReplacedNoFirstDash = pathSlashesReplaced.replace(" ","");
+    var newClass = pathSlashesReplacedNoFirstDash.replace(/(\.[\s\S]+)/ig, "");
+	$("body").addClass(newClass);
+    if ( $("body").attr("class") == "") 
+    {                                   
+         $("body").addClass("class");
+    }
 });
 
 var reel_currentIndex = 0;


[05/26] drill git commit: Update _site-responsive.scss

Posted by ts...@apache.org.
Update _site-responsive.scss

Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/2b51a7fe
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/2b51a7fe
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/2b51a7fe

Branch: refs/heads/gh-pages
Commit: 2b51a7fe32c7c3eb5cf9ba30d5681fe05cc42a1e
Parents: 55f2549
Author: elliotberry <me...@elliotberry.com>
Authored: Wed May 27 15:30:40 2015 -0400
Committer: elliotberry <me...@elliotberry.com>
Committed: Wed May 27 15:30:40 2015 -0400

----------------------------------------------------------------------
 _sass/_site-responsive.scss | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/2b51a7fe/_sass/_site-responsive.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-responsive.scss b/_sass/_site-responsive.scss
index b1c5290..1a32308 100644
--- a/_sass/_site-responsive.scss
+++ b/_sass/_site-responsive.scss
@@ -195,7 +195,7 @@
     float: right;
   }
   #menu.force-expand {
-    position: relative;
+    position: fixed;
   }
 
   /* Responsive Homepage 570 max width */


[16/26] drill git commit: fonts fixing

Posted by ts...@apache.org.
fonts fixing


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/c6eedf44
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/c6eedf44
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/c6eedf44

Branch: refs/heads/gh-pages
Commit: c6eedf447e709f2f524c86dd36462e53b713466b
Parents: 9dc72b8
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 13:22:05 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 13:22:05 2015 -0400

----------------------------------------------------------------------
 _sass/_doc-content.scss | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/c6eedf44/_sass/_doc-content.scss
----------------------------------------------------------------------
diff --git a/_sass/_doc-content.scss b/_sass/_doc-content.scss
index f81ce78..bab2acc 100644
--- a/_sass/_doc-content.scss
+++ b/_sass/_doc-content.scss
@@ -225,6 +225,7 @@ div.docsidebar ul ul ul{
   line-height: 24px;
   display: inline-block;
   width: 100%;
+  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
 }
 
 .docsidebarwrapper li.toctree-l1 ul > li{


[19/26] drill git commit: fonts fixing

Posted by ts...@apache.org.
fonts fixing


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/7d87b4b2
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/7d87b4b2
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/7d87b4b2

Branch: refs/heads/gh-pages
Commit: 7d87b4b2f4f06e1fb58c0fbf983fd30cbcc2bfda
Parents: e643987
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 13:54:36 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 13:54:36 2015 -0400

----------------------------------------------------------------------
 _includes/head.html | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/7d87b4b2/_includes/head.html
----------------------------------------------------------------------
diff --git a/_includes/head.html b/_includes/head.html
index d2e1e97..9b95254 100644
--- a/_includes/head.html
+++ b/_includes/head.html
@@ -7,6 +7,7 @@
 <title>{% if page.title %}{{ page.title }} - {{ site.title_suffix }}{% else %}{{ site.title }}{% endif %}</title>
 
 <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css" rel="stylesheet" type="text/css"/>
+<link href='http://fonts.googleapis.com/css?family=PT+Sans' rel='stylesheet' type='text/css'>
 <link href="{{ site.baseurl }}/css/site.css" rel="stylesheet" type="text/css"/>
 
 <link rel="shortcut icon" href="{{ site.baseurl }}/favicon.ico" type="image/x-icon"/>


[17/26] drill git commit: clarify DFS vs local FS, resolve pull request 97798bf4e555db85ba75e1bc8d18d3298cff989b, typos

Posted by ts...@apache.org.
clarify DFS vs local FS, resolve pull request 97798bf4e555db85ba75e1bc8d18d3298cff989b, typos


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/9a2f1ba9
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/9a2f1ba9
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/9a2f1ba9

Branch: refs/heads/gh-pages
Commit: 9a2f1ba9243c66144bd715a70a707f0de491aa75
Parents: 806f24f
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 29 10:32:25 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 29 10:32:25 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 112 +++++++++----------
 .../020-configuring-drill-memory.md             |   4 +-
 .../010-connect-a-data-source-introduction.md   |   9 +-
 .../040-file-system-storage-plugin.md           | 105 +++++++++++++++++
 _docs/connect-a-data-source/040-workspaces.md   |  76 -------------
 .../050-file-system-storage-plugin.md           |  64 -----------
 _docs/connect-a-data-source/050-workspaces.md   |  33 ++++++
 .../connect-a-data-source/100-mapr-db-format.md |   3 +-
 .../050-json-data-model.md                      |   4 +-
 .../020-develop-a-simple-function.md            |   4 +-
 .../030-developing-an-aggregate-function.md     |  18 +++
 .../060-custom-function-interfaces.md           |  14 +--
 .../design-docs/050-value-vectors.md            |   2 +-
 _docs/img/connect-plugin.png                    | Bin 36731 -> 41222 bytes
 .../data-types/010-supported-data-types.md      |   7 ++
 15 files changed, 239 insertions(+), 216 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index b9c3f83..25cc5ff 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -1435,8 +1435,8 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Workspaces", 
-                            "next_url": "/docs/workspaces/", 
+                            "next_title": "File System Storage Plugin", 
+                            "next_url": "/docs/file-system-storage-plugin/", 
                             "parent": "Storage Plugin Configuration", 
                             "previous_title": "Storage Plugin Configuration", 
                             "previous_url": "/docs/storage-plugin-configuration/", 
@@ -1456,14 +1456,14 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "File System Storage Plugin", 
-                            "next_url": "/docs/file-system-storage-plugin/", 
+                            "next_title": "Workspaces", 
+                            "next_url": "/docs/workspaces/", 
                             "parent": "Storage Plugin Configuration", 
                             "previous_title": "Plugin Configuration Introduction", 
                             "previous_url": "/docs/plugin-configuration-introduction/", 
-                            "relative_path": "_docs/connect-a-data-source/040-workspaces.md", 
-                            "title": "Workspaces", 
-                            "url": "/docs/workspaces/"
+                            "relative_path": "_docs/connect-a-data-source/040-file-system-storage-plugin.md", 
+                            "title": "File System Storage Plugin", 
+                            "url": "/docs/file-system-storage-plugin/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -1480,11 +1480,11 @@
                             "next_title": "HBase Storage Plugin", 
                             "next_url": "/docs/hbase-storage-plugin/", 
                             "parent": "Storage Plugin Configuration", 
-                            "previous_title": "Workspaces", 
-                            "previous_url": "/docs/workspaces/", 
-                            "relative_path": "_docs/connect-a-data-source/050-file-system-storage-plugin.md", 
-                            "title": "File System Storage Plugin", 
-                            "url": "/docs/file-system-storage-plugin/"
+                            "previous_title": "File System Storage Plugin", 
+                            "previous_url": "/docs/file-system-storage-plugin/", 
+                            "relative_path": "_docs/connect-a-data-source/050-workspaces.md", 
+                            "title": "Workspaces", 
+                            "url": "/docs/workspaces/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -1501,8 +1501,8 @@
                             "next_title": "Hive Storage Plugin", 
                             "next_url": "/docs/hive-storage-plugin/", 
                             "parent": "Storage Plugin Configuration", 
-                            "previous_title": "File System Storage Plugin", 
-                            "previous_url": "/docs/file-system-storage-plugin/", 
+                            "previous_title": "Workspaces", 
+                            "previous_url": "/docs/workspaces/", 
                             "relative_path": "_docs/connect-a-data-source/060-hbase-storage-plugin.md", 
                             "title": "HBase Storage Plugin", 
                             "url": "/docs/hbase-storage-plugin/"
@@ -2965,12 +2965,12 @@
                 }
             ], 
             "children": [], 
-            "next_title": "HBase Storage Plugin", 
-            "next_url": "/docs/hbase-storage-plugin/", 
+            "next_title": "Workspaces", 
+            "next_url": "/docs/workspaces/", 
             "parent": "Storage Plugin Configuration", 
-            "previous_title": "Workspaces", 
-            "previous_url": "/docs/workspaces/", 
-            "relative_path": "_docs/connect-a-data-source/050-file-system-storage-plugin.md", 
+            "previous_title": "Plugin Configuration Introduction", 
+            "previous_url": "/docs/plugin-configuration-introduction/", 
+            "relative_path": "_docs/connect-a-data-source/040-file-system-storage-plugin.md", 
             "title": "File System Storage Plugin", 
             "url": "/docs/file-system-storage-plugin/"
         }, 
@@ -3116,8 +3116,8 @@
             "next_title": "Hive Storage Plugin", 
             "next_url": "/docs/hive-storage-plugin/", 
             "parent": "Storage Plugin Configuration", 
-            "previous_title": "File System Storage Plugin", 
-            "previous_url": "/docs/file-system-storage-plugin/", 
+            "previous_title": "Workspaces", 
+            "previous_url": "/docs/workspaces/", 
             "relative_path": "_docs/connect-a-data-source/060-hbase-storage-plugin.md", 
             "title": "HBase Storage Plugin", 
             "url": "/docs/hbase-storage-plugin/"
@@ -5564,8 +5564,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Workspaces", 
-            "next_url": "/docs/workspaces/", 
+            "next_title": "File System Storage Plugin", 
+            "next_url": "/docs/file-system-storage-plugin/", 
             "parent": "Storage Plugin Configuration", 
             "previous_title": "Storage Plugin Configuration", 
             "previous_url": "/docs/storage-plugin-configuration/", 
@@ -9339,8 +9339,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Workspaces", 
-                    "next_url": "/docs/workspaces/", 
+                    "next_title": "File System Storage Plugin", 
+                    "next_url": "/docs/file-system-storage-plugin/", 
                     "parent": "Storage Plugin Configuration", 
                     "previous_title": "Storage Plugin Configuration", 
                     "previous_url": "/docs/storage-plugin-configuration/", 
@@ -9360,14 +9360,14 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "File System Storage Plugin", 
-                    "next_url": "/docs/file-system-storage-plugin/", 
+                    "next_title": "Workspaces", 
+                    "next_url": "/docs/workspaces/", 
                     "parent": "Storage Plugin Configuration", 
                     "previous_title": "Plugin Configuration Introduction", 
                     "previous_url": "/docs/plugin-configuration-introduction/", 
-                    "relative_path": "_docs/connect-a-data-source/040-workspaces.md", 
-                    "title": "Workspaces", 
-                    "url": "/docs/workspaces/"
+                    "relative_path": "_docs/connect-a-data-source/040-file-system-storage-plugin.md", 
+                    "title": "File System Storage Plugin", 
+                    "url": "/docs/file-system-storage-plugin/"
                 }, 
                 {
                     "breadcrumbs": [
@@ -9384,11 +9384,11 @@
                     "next_title": "HBase Storage Plugin", 
                     "next_url": "/docs/hbase-storage-plugin/", 
                     "parent": "Storage Plugin Configuration", 
-                    "previous_title": "Workspaces", 
-                    "previous_url": "/docs/workspaces/", 
-                    "relative_path": "_docs/connect-a-data-source/050-file-system-storage-plugin.md", 
-                    "title": "File System Storage Plugin", 
-                    "url": "/docs/file-system-storage-plugin/"
+                    "previous_title": "File System Storage Plugin", 
+                    "previous_url": "/docs/file-system-storage-plugin/", 
+                    "relative_path": "_docs/connect-a-data-source/050-workspaces.md", 
+                    "title": "Workspaces", 
+                    "url": "/docs/workspaces/"
                 }, 
                 {
                     "breadcrumbs": [
@@ -9405,8 +9405,8 @@
                     "next_title": "Hive Storage Plugin", 
                     "next_url": "/docs/hive-storage-plugin/", 
                     "parent": "Storage Plugin Configuration", 
-                    "previous_title": "File System Storage Plugin", 
-                    "previous_url": "/docs/file-system-storage-plugin/", 
+                    "previous_title": "Workspaces", 
+                    "previous_url": "/docs/workspaces/", 
                     "relative_path": "_docs/connect-a-data-source/060-hbase-storage-plugin.md", 
                     "title": "HBase Storage Plugin", 
                     "url": "/docs/hbase-storage-plugin/"
@@ -10498,12 +10498,12 @@
                 }
             ], 
             "children": [], 
-            "next_title": "File System Storage Plugin", 
-            "next_url": "/docs/file-system-storage-plugin/", 
+            "next_title": "HBase Storage Plugin", 
+            "next_url": "/docs/hbase-storage-plugin/", 
             "parent": "Storage Plugin Configuration", 
-            "previous_title": "Plugin Configuration Introduction", 
-            "previous_url": "/docs/plugin-configuration-introduction/", 
-            "relative_path": "_docs/connect-a-data-source/040-workspaces.md", 
+            "previous_title": "File System Storage Plugin", 
+            "previous_url": "/docs/file-system-storage-plugin/", 
+            "relative_path": "_docs/connect-a-data-source/050-workspaces.md", 
             "title": "Workspaces", 
             "url": "/docs/workspaces/"
         }
@@ -11489,8 +11489,8 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Workspaces", 
-                            "next_url": "/docs/workspaces/", 
+                            "next_title": "File System Storage Plugin", 
+                            "next_url": "/docs/file-system-storage-plugin/", 
                             "parent": "Storage Plugin Configuration", 
                             "previous_title": "Storage Plugin Configuration", 
                             "previous_url": "/docs/storage-plugin-configuration/", 
@@ -11510,14 +11510,14 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "File System Storage Plugin", 
-                            "next_url": "/docs/file-system-storage-plugin/", 
+                            "next_title": "Workspaces", 
+                            "next_url": "/docs/workspaces/", 
                             "parent": "Storage Plugin Configuration", 
                             "previous_title": "Plugin Configuration Introduction", 
                             "previous_url": "/docs/plugin-configuration-introduction/", 
-                            "relative_path": "_docs/connect-a-data-source/040-workspaces.md", 
-                            "title": "Workspaces", 
-                            "url": "/docs/workspaces/"
+                            "relative_path": "_docs/connect-a-data-source/040-file-system-storage-plugin.md", 
+                            "title": "File System Storage Plugin", 
+                            "url": "/docs/file-system-storage-plugin/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -11534,11 +11534,11 @@
                             "next_title": "HBase Storage Plugin", 
                             "next_url": "/docs/hbase-storage-plugin/", 
                             "parent": "Storage Plugin Configuration", 
-                            "previous_title": "Workspaces", 
-                            "previous_url": "/docs/workspaces/", 
-                            "relative_path": "_docs/connect-a-data-source/050-file-system-storage-plugin.md", 
-                            "title": "File System Storage Plugin", 
-                            "url": "/docs/file-system-storage-plugin/"
+                            "previous_title": "File System Storage Plugin", 
+                            "previous_url": "/docs/file-system-storage-plugin/", 
+                            "relative_path": "_docs/connect-a-data-source/050-workspaces.md", 
+                            "title": "Workspaces", 
+                            "url": "/docs/workspaces/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -11555,8 +11555,8 @@
                             "next_title": "Hive Storage Plugin", 
                             "next_url": "/docs/hive-storage-plugin/", 
                             "parent": "Storage Plugin Configuration", 
-                            "previous_title": "File System Storage Plugin", 
-                            "previous_url": "/docs/file-system-storage-plugin/", 
+                            "previous_title": "Workspaces", 
+                            "previous_url": "/docs/workspaces/", 
                             "relative_path": "_docs/connect-a-data-source/060-hbase-storage-plugin.md", 
                             "title": "HBase Storage Plugin", 
                             "url": "/docs/hbase-storage-plugin/"

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/configure-drill/020-configuring-drill-memory.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/020-configuring-drill-memory.md b/_docs/configure-drill/020-configuring-drill-memory.md
index 30d5121..ad46997 100644
--- a/_docs/configure-drill/020-configuring-drill-memory.md
+++ b/_docs/configure-drill/020-configuring-drill-memory.md
@@ -33,8 +33,8 @@ The `drill-env.sh` file contains the following options:
 
     export DRILL_JAVA_OPTS="-Xms1G -Xmx$DRILL_MAX_HEAP -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=1G -ea"
 
-* DRILL_MAX_DIRECT_MEMORY is the Java direct memory. 
-* DRILL_MAX_HEAP is the maximum theoretical heap limit for the JVM. 
+* DRILL_MAX_DIRECT_MEMORY is the Java direct memory limit per node. 
+* DRILL_MAX_HEAP is the maximum theoretical heap limit for the JVM per node. 
 * Xmx specifies the maximum memory allocation pool for a Java Virtual Machine (JVM). 
 * Xms specifies the initial memory allocation pool.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/connect-a-data-source/010-connect-a-data-source-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/010-connect-a-data-source-introduction.md b/_docs/connect-a-data-source/010-connect-a-data-source-introduction.md
index 29133c0..b86542b 100644
--- a/_docs/connect-a-data-source/010-connect-a-data-source-introduction.md
+++ b/_docs/connect-a-data-source/010-connect-a-data-source-introduction.md
@@ -7,10 +7,13 @@ A storage plugin provides the following information to Drill:
 * Interfaces that Drill can use to read from and write to data sources.   
 * A set of storage plugin optimization rules that assist with efficient and faster execution of Drill queries, such as pushdowns, statistics, and partition awareness.  
 
-Apache Drill connects to a data source, such as a file on the file system or a Hive metastore, through a storage plugin. When you execute a query, Drill gets the plugin name you provide in FROM clause of your query or from the default you specify in the USE.<plugin name> command that precedes the query.
-. 
+Through the storage plugin, Drill connects to a data source, such as a database, a file on a local or distributed file system, or a Hive metastore. When you execute a query, Drill gets the plugin name in one of several ways:
 
-In addition to the connection string, the storage plugin configures the workspace and file formats for reading data, as described in subsequent sections. 
+* The FROM clause of the query can identify the plugin to use.
+* The USE <plugin name> command can precede the query.
+* You can specify the storage plugin when starting Drill.
+
+In addition to providing the connection string to the data source, the storage plugin configures the workspace and file formats for reading data, as described in subsequent sections. 
 
 ## Storage Plugins Internals
 The following image represents the storage plugin layer between Drill and a

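For example, either form below resolves the plugin name: the first query names the plugin directly in the FROM clause, and the second sets it with USE and then refers to the file by its relative name. This is only a minimal sketch, reusing the `dfs` plugin, the `json_files` workspace, and the `donuts.json` path shown in the storage plugin examples in this commit:

    SELECT * FROM dfs.`/users/max/drill/json/donuts.json` WHERE type='frosted';

    USE dfs.json_files;
    SELECT * FROM `donuts.json` WHERE type='frosted';
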
http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/connect-a-data-source/040-file-system-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/040-file-system-storage-plugin.md b/_docs/connect-a-data-source/040-file-system-storage-plugin.md
new file mode 100644
index 0000000..9f16bde
--- /dev/null
+++ b/_docs/connect-a-data-source/040-file-system-storage-plugin.md
@@ -0,0 +1,105 @@
+---
+title: "File System Storage Plugin"
+parent: "Storage Plugin Configuration"
+---
+You can register a storage plugin instance that connects Drill to a local file system or to a distributed file system registered in `core-site.xml`, such as S3
+or HDFS. By
+default, Drill includes an instance named `dfs` that points to the local file
+system on your machine. 
+
+## Connecting Drill to a File System
+
+In a Drill cluster, you typically do not query the local file system, but instead place files on the distributed file system. You configure the connection property of the storage plugin to connect Drill to a distributed file system. For example, the following connection properties connect Drill to an HDFS, MapR-FS, or Mongo-DB cluster:
+
+* HDFS  
+  `"connection": "hdfs://<IP Address>:<Port>/"`  
+* MapR-FS Remote Cluster  
+  `"connection": "maprfs://<IP Address>/"`  
+* Mongo-DB Cluster  
+  `"connection": "mongodb://<IP Address>:<Port>/"
+
+The Drill installation includes a [Mongo-DB storage plugin]({{site.baseurl}}/docs/mongodb-plugin-for-apache-drill).
+
+To register a local or a distributed file system with Apache Drill, complete
+the following steps:
+
+  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  2. In the New Storage Plugin window, enter a unique name and then click **Create**.
+  3. In the Configuration window, provide the following configuration information for the type of file system that you are configuring as a data source.
+     * Local file system example:
+
+            {
+              "type": "file",
+              "enabled": true,
+              "connection": "file:///",
+              "workspaces": {
+                "root": {
+                  "location": "/user/max/donuts",
+                  "writable": false,
+                  "defaultinputformat": null
+                 }
+              },
+                 "formats" : {
+                   "json" : {
+                     "type" : "json"
+                   }
+                 }
+              }
+     * Distributed file system example:
+    
+            {
+              "type" : "file",
+              "enabled" : true,
+              "connection" : "hdfs://10.10.30.156:8020/",
+              "workspaces" : {
+                "root : {
+                  "location" : "/user/root/drill",
+                  "writable" : true,
+                  "defaultinputformat" : "null"
+                }
+              },
+              "formats" : {
+                "json" : {
+                  "type" : "json"
+                }
+              }
+            }
+
+      To connect to a Hadoop file system, you include the IP address of the
+name node and the port number.
+  4. Click **Enable**.
+
+Once you have configured a storage plugin instance for the file system, you
+can issue Drill queries against it.
+
+The following example shows an instance of a file type storage plugin with a
+workspace named `json_files` configured to point Drill to the
+`/users/max/drill/json/` directory in the local file system `(dfs)`:
+
+    {
+      "type" : "file",
+      "enabled" : true,
+      "connection" : "file:///",
+      "workspaces" : {
+        "json_files" : {
+          "location" : "/users/max/drill/json/",
+          "writable" : false,
+          "defaultinputformat" : json
+       } 
+    },
+
+{% include startnote.html %}The `connection` parameter in the configuration above is "`file:///`", connecting Drill to the local file system (`dfs`).{% include endnote.html %}
+
+To query a file in the example `json_files` workspace, you can issue the `USE`
+command to tell Drill to use the `json_files` workspace configured in the `dfs`
+instance for each query that you issue:
+
+**Example**
+
+    USE dfs.json_files;
+    SELECT * FROM dfs.json_files.`donuts.json` WHERE type='frosted'
+
+If the `json_files` workspace did not exist, the query would have to include the
+full path to the `donuts.json` file:
+
+    SELECT * FROM dfs.`/users/max/drill/json/donuts.json` WHERE type='frosted';
\ No newline at end of file

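Once the distributed file system example above is saved under a plugin name, the `root` workspace it defines can be used in query paths. A minimal sketch, assuming the plugin was named `myhdfs` and a `donuts.json` file exists under the workspace location `/user/root/drill` (both assumptions for illustration):

    SELECT * FROM myhdfs.root.`donuts.json` LIMIT 10;
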
http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/connect-a-data-source/040-workspaces.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/040-workspaces.md b/_docs/connect-a-data-source/040-workspaces.md
deleted file mode 100644
index 864c7ce..0000000
--- a/_docs/connect-a-data-source/040-workspaces.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: "Workspaces"
-parent: "Storage Plugin Configuration"
----
-When you register an instance of a file system data source, you can configure
-one or more workspaces for the instance. A workspace is a directory within the
-file system that you define. Drill searches the workspace to locate data when
-you run a query.
-
-Each workspace that you register defines a schema that you can connect to and
-query. Configuring workspaces is useful when you want to run multiple queries
-on files or tables in a specific directory. You cannot create workspaces for
-`hive` and `hbase` instances, though Hive databases show up as workspaces in
-Drill.
-
-The following example shows an instance of a file type storage plugin with a
-workspace named `json` configured to point Drill to the
-`/users/max/drill/json/` directory in the local file system `(dfs)`:
-
-    {
-      "type" : "file",
-      "enabled" : true,
-      "connection" : "file:///",
-      "workspaces" : {
-        "json" : {
-          "location" : "/users/max/drill/json/",
-          "writable" : false,
-          "defaultinputformat" : json
-       } 
-    },
-
-{% include startnote.html %}The `connection` parameter in the configuration above is "`file:///`", connecting Drill to the local file system (`dfs`).{% include endnote.html %}
-To connect to a Hadoop or MapR file system the `connection` parameter would be "`hdfs:///" `or` "maprfs:///", `respectively.
-
-To query a file in the example `json` workspace, you can issue the `USE`
-command to tell Drill to use the `json` workspace configured in the `dfs`
-instance for each query that you issue:
-
-**Example**
-
-    USE dfs.json;
-    SELECT * FROM dfs.json.`donuts.json` WHERE type='frosted'
-
-If the `json` workspace did not exist, the query would have to include the
-full path to the `donuts.json` file:
-
-    SELECT * FROM dfs.`/users/max/drill/json/donuts.json` WHERE type='frosted';
-
-Using a workspace alleviates the need to repeatedly enter the directory path
-in subsequent queries on the directory.
-
-### Default Workspaces
-
-Each `file` and `hive` instance includes a `default` workspace. The `default`
-workspace points to the file system or to the Hive metastore. When you query
-files and tables in the `file` or `hive default` workspaces, you can omit the
-workspace name from the query.
-
-For example, you can issue a query on a Hive table in the `default workspace`
-using either of the following formats and get the same results:
-
-**Example**
-
-    SELECT * FROM hive.customers LIMIT 10;
-    SELECT * FROM hive.`default`.customers LIMIT 10;
-
-{% include startnote.html %}Default is a reserved word. You must enclose reserved words in back ticks.{% include endnote.html %}
-
-
-Because HBase instances do not have workspaces, you can use the following
-format to query a table in HBase:
-
-    SELECT * FROM hbase.customers LIMIT 10;
-
-After you register a data source as a storage plugin instance with Drill, and
-optionally configure workspaces, you can query the data source.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/connect-a-data-source/050-file-system-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/050-file-system-storage-plugin.md b/_docs/connect-a-data-source/050-file-system-storage-plugin.md
deleted file mode 100644
index 2b3e287..0000000
--- a/_docs/connect-a-data-source/050-file-system-storage-plugin.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: "File System Storage Plugin"
-parent: "Storage Plugin Configuration"
----
-You can register a storage plugin instance that connects Drill to a local file
-system or a distributed file system registered in `core-site.xml`, such as S3
-or HDFS. When you register a storage plugin instance for a file system,
-provide a unique name for the instance, and identify the type as “`file`”. By
-default, Drill includes an instance named `dfs` that points to the local file
-system on your machine. You can update this configuration to point to a
-distributed file system or you can create a new instance to point to a
-distributed file system.
-
-To register a local or a distributed file system with Apache Drill, complete
-the following steps:
-
-  1. Navigate to `[http://localhost:8047](http://localhost:8047/)`, and select the **Storage** tab.
-  2. In the New Storage Plugin window, enter a unique name and then click **Create**.
-  3. In the Configuration window, provide the following configuration information for the type of file system that you are configuring as a data source.
-     1. Local file system example:
-
-            {
-              "type": "file",
-              "enabled": true,
-              "connection": "file:///",
-              "workspaces": {
-                "root": {
-                  "location": "/user/max/donuts",
-                  "writable": false,
-                  "defaultinputformat": null
-                 }
-              },
-                 "formats" : {
-                   "json" : {
-                     "type" : "json"
-                   }
-                 }
-              }
-     2. Distributed file system example:
-    
-            {
-              "type" : "file",
-              "enabled" : true,
-              "connection" : "hdfs://10.10.30.156:8020/",
-              "workspaces" : {
-                "root : {
-                  "location" : "/user/root/drill",
-                  "writable" : true,
-                  "defaultinputformat" : "null"
-                }
-              },
-              "formats" : {
-                "json" : {
-                  "type" : "json"
-                }
-              }
-            }
-
-      To connect to a Hadoop file system, you must include the IP address of the
-name node and the port number.
-  4. Click **Enable**.
-
-Once you have configured a storage plugin instance for the file system, you
-can issue Drill queries against it.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/connect-a-data-source/050-workspaces.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/050-workspaces.md b/_docs/connect-a-data-source/050-workspaces.md
new file mode 100644
index 0000000..361bfec
--- /dev/null
+++ b/_docs/connect-a-data-source/050-workspaces.md
@@ -0,0 +1,33 @@
+---
+title: "Workspaces"
+parent: "Storage Plugin Configuration"
+---
+When you register an instance of a file system data source, you can configure
+one or more workspaces for the instance. The workspace defines the default directory location of files in a local or distributed file system. The `default`
+workspace points to the root of the file system. Drill searches the workspace to locate data when
+you run a query.
+
+You cannot create workspaces for
+`hive` and `hbase` storage plugins, though Hive databases show up as workspaces in
+Drill. Each `hive` instance includes a `default` workspace that points to the Hive metastore. When you query
+files and tables in the `hive default` workspaces, you can omit the
+workspace name from the query.
+
+For example, you can issue a query on a Hive table in the `default` workspace
+using either of the following formats and get the same results:
+
+**Example**
+
+    SELECT * FROM hive.customers LIMIT 10;
+    SELECT * FROM hive.`default`.customers LIMIT 10;
+
+{% include startnote.html %}Default is a reserved word. You must enclose reserved words in back ticks.{% include endnote.html %}
+
+Because HBase instances do not have workspaces, you can use the following
+format to query a table in HBase:
+
+    SELECT * FROM hbase.customers LIMIT 10;
+
+After you register a data source as a storage plugin instance with Drill, and
+optionally configure workspaces, you can query the data source.
+

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/connect-a-data-source/100-mapr-db-format.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/100-mapr-db-format.md b/_docs/connect-a-data-source/100-mapr-db-format.md
index f101dfa..b091f8e 100644
--- a/_docs/connect-a-data-source/100-mapr-db-format.md
+++ b/_docs/connect-a-data-source/100-mapr-db-format.md
@@ -2,8 +2,7 @@
 title: "MapR-DB Format"
 parent: "Connect a Data Source"
 ---
-The MapR-DB format is not included in apache drill release. Drill includes a `maprdb` format for MapR-DB that is defined within the
-default `dfs` storage plugin instance when you install Drill from the `mapr-drill` package on a MapR node. The `maprdb` format improves the
+The MapR-DB format is not included in the Apache Drill release. If you install Drill from the `mapr-drill` package on a MapR node, the MapR-DB format appears in the `dfs` storage plugin instance. The `maprdb` format improves the
 estimated number of rows that Drill uses to plan a query. It also enables you
 to query tables like you would query files in a file system because MapR-DB
 and MapR-FS share the same namespace.

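As a sketch of querying a MapR-DB table by its path, assuming a table named `customers` exists at `/tables/customers` in MapR-FS (the path is an assumption for illustration):

    SELECT * FROM dfs.`/tables/customers` LIMIT 10;
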
http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 1b1660d..1bc4cec 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -126,7 +126,7 @@ Using the following techniques, you can query complex, nested JSON:
 * Generate key/value pairs for loosely structured data
 
 ## Example: Flatten and Generate Key Values for Complex JSON
-This example uses the following data that represents unit sales of tickets to events that were sold over a period of for several days in December:
+This example uses the following data that represents unit sales of tickets to events that were sold over a period of several days in December:
 
 ### ticket_sales.json Contents
 
@@ -214,7 +214,7 @@ Sum the ticket sales by combining the `SUM`, `FLATTEN`, and `KVGEN` functions in
     1 row selected (0.244 seconds)
 
 ### Example: Aggregate and Sort Data
-Sum the ticket sales by state and group by day and sort in ascending order. 
+Sum and group the ticket sales by date and sort in ascending order of total tickets sold. 
 
     SELECT `right`(tkt.tot_sales.key,2) `December Date`, 
     SUM(tkt.tot_sales.`value`) AS TotalSales 

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/develop-custom-functions/020-develop-a-simple-function.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/020-develop-a-simple-function.md b/_docs/develop-custom-functions/020-develop-a-simple-function.md
index 794182c..4a4250c 100644
--- a/_docs/develop-custom-functions/020-develop-a-simple-function.md
+++ b/_docs/develop-custom-functions/020-develop-a-simple-function.md
@@ -37,10 +37,10 @@ function interface:
 
 	**Example**
 	
-		public void setup(RecordBatch b) {
+		public void setup() {
 		}
 		public void eval() {
-		 out.value = in1.value + in2.value;
+		  out.value = in1.value + in2.value;
 		}
 
   5. Use the maven-source-plugin to compile the sources and classes JAR files. Verify that an empty `drill-module.conf` is included in the resources folder of the JARs.   

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/030-developing-an-aggregate-function.md b/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
index 76a9cfe..ac28d9e 100644
--- a/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
+++ b/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
@@ -30,6 +30,24 @@ Complete the following steps to create an aggregate function:
 		@Workspace BitHolder value;
 		@Output BitHolder out;
   4. Include the `setup(), add(), output(),` and `reset()` methods.  
+
+    **Example**
+
+        public void setup() {
+          value = new BitHolder(); 
+          value.value = 0;
+        }
+         
+        @Override
+        public void add() {
+          value.value++;
+        }
+        @Override
+        public void output() {
+          out.value = value.value;
+        }
+        @Override
+        public void reset() {
+          value.value = 0;
+        }
   5. Use the maven-source-plugin to compile the sources and classes JAR files. Verify that an empty `drill-module.conf` is included in the resources folder of the JARs.   
 Drill searches this module during classpath scanning. If the file is not
 included in the resources folder, you can add it to the JAR file or add it to

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/develop-custom-functions/060-custom-function-interfaces.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/060-custom-function-interfaces.md b/_docs/develop-custom-functions/060-custom-function-interfaces.md
index af46c4a..4183ac8 100644
--- a/_docs/develop-custom-functions/060-custom-function-interfaces.md
+++ b/_docs/develop-custom-functions/060-custom-function-interfaces.md
@@ -55,7 +55,6 @@ The following example shows the program created for the `myaddints` function:
     import org.apache.drill.exec.expr.holders.Float8Holder;
     import org.apache.drill.exec.expr.holders.IntHolder;
     import org.apache.drill.exec.expr.holders.VarCharHolder;
-    import org.apache.drill.exec.record.RecordBatch;
      
     public class MyUdfs {
        
@@ -65,7 +64,7 @@ The following example shows the program created for the `myaddints` function:
         @Param BigIntHolder input1;
         @Param BigIntHolder input2;
         @Output BigIntHolder out;
-        public void setup(RecordBatch b){}
+        public void setup(){}
              
         public void eval(){
           out.value = input1.value + input2.value;
@@ -118,7 +117,6 @@ The following example shows the program created for the `mysecondmin` function:
     import org.apache.drill.exec.expr.holders.Float8Holder;
     import org.apache.drill.exec.expr.holders.IntHolder;
     import org.apache.drill.exec.expr.holders.VarCharHolder;
-    import org.apache.drill.exec.record.RecordBatch;
      
     public class MyUdfs {
        
@@ -128,9 +126,9 @@ The following example shows the program created for the `mysecondmin` function:
         @Workspace BigIntHolder min;
         @Workspace BigIntHolder secondMin;
         @Output BigIntHolder out;
-        public void setup(RecordBatch b) {
-            min = new BigIntHolder(); 
-            secondMin = new BigIntHolder(); 
+        public void setup() {
+          min = new BigIntHolder(); 
+          secondMin = new BigIntHolder(); 
           min.value = 999999999;
           secondMin.value = 999999999;
         }
@@ -139,8 +137,8 @@ The following example shows the program created for the `mysecondmin` function:
         public void add() {
              
             if (in.value < min.value) {
-                min.value = in.value;
-                secondMin.value = min.value;
+              min.value = in.value;
+              secondMin.value = min.value;
             }
              
         }

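After the JAR files are built and placed on the classpath, the `myaddints` and `mysecondmin` functions above can be called like built-in SQL functions. A minimal sketch; the CASTs match the BigIntHolder parameters, and the cp.`employee.json` sample data set with its `employee_id` column is assumed to be available:

    SELECT myaddints(CAST(3 AS BIGINT), CAST(4 AS BIGINT)) FROM (VALUES(1));
    SELECT mysecondmin(CAST(employee_id AS BIGINT)) FROM cp.`employee.json`;
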
http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/developer-information/design-docs/050-value-vectors.md
----------------------------------------------------------------------
diff --git a/_docs/developer-information/design-docs/050-value-vectors.md b/_docs/developer-information/design-docs/050-value-vectors.md
index 828376a..87bc82d 100644
--- a/_docs/developer-information/design-docs/050-value-vectors.md
+++ b/_docs/developer-information/design-docs/050-value-vectors.md
@@ -20,7 +20,7 @@ Reading a random element from a ValueVector must be a constant time operation.
 To accomodate, elements are identified by their offset from the start of the
 buffer. Repeated, nullable and variable width ValueVectors utilize in an
 additional fixed width value vector to index each element. Write access is not
-supported once the ValueVector has been constructed by the RecordBatch.
+supported once the ValueVector has been constructed.
 
 ### Efficient Subsets of Value Vectors
 

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/img/connect-plugin.png
----------------------------------------------------------------------
diff --git a/_docs/img/connect-plugin.png b/_docs/img/connect-plugin.png
index db3a3ec..702da8a 100644
Binary files a/_docs/img/connect-plugin.png and b/_docs/img/connect-plugin.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/9a2f1ba9/_docs/sql-reference/data-types/010-supported-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/010-supported-data-types.md b/_docs/sql-reference/data-types/010-supported-data-types.md
index 48397d0..3fda60b 100644
--- a/_docs/sql-reference/data-types/010-supported-data-types.md
+++ b/_docs/sql-reference/data-types/010-supported-data-types.md
@@ -61,6 +61,13 @@ The section [“Query Complex Data”]({{ site.baseurl }}/docs/querying-complex-
 * ["KVGEN"]({{site.baseurl}}/docs/kvgen/)
 * ["FLATTEN"]({{site.baseurl}}/docs/flatten/)
 
+## ANY Type
+The ANY type is a key technological advance in Drill that enables it to address late typing problems. Drill uses the ANY type internally and you might see references to ANY in the output of the DESCRIBE or other commands. You cannot cast a value to the ANY type in this release.
+
+Using the ANY type, the parser postpones the problem of resolving the type of some value until the query is actually running.  At that point, Drill has an empirical schema available for each record batch to use for final code
+generation and optimization.  If the empirical schema changes due to
+changes in the data processing, Drill regenerates the code as necessary.
+
 ## Casting and Converting Data Types
 
 In Drill, you cast or convert data to the required type for moving data from one data source to another or to make the data readable.

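As the ANY type section notes, you cast to concrete types rather than to ANY. A minimal sketch of ordinary casts (the literal values are arbitrary):

    SELECT CAST('100' AS INT) AS int_val, CAST(2.5 AS VARCHAR(10)) AS text_val FROM (VALUES(1));
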

[24/26] drill git commit: Merge branch 'gh-pages' of https://github.com/elliotberry/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://github.com/elliotberry/drill into gh-pages


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/803138f2
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/803138f2
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/803138f2

Branch: refs/heads/gh-pages
Commit: 803138f23e5e9cf12ac7852120a9a984291425b7
Parents: a15a7f1 2dc24af
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 29 21:37:20 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 29 21:37:20 2015 -0700

----------------------------------------------------------------------
 _includes/head.html         |  1 +
 _sass/_doc-content.scss     | 10 +++++++++-
 _sass/_site-responsive.scss | 26 +++++++++++++++++++++-----
 index.html                  |  8 ++++----
 js/script.js                | 10 ++++++++++
 5 files changed, 45 insertions(+), 10 deletions(-)
----------------------------------------------------------------------



[15/26] drill git commit: added auto body classes, blocking that erroneous blog hamburger

Posted by ts...@apache.org.
added auto body classes, blocking that erroneous blog hamburger


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/9dc72b8c
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/9dc72b8c
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/9dc72b8c

Branch: refs/heads/gh-pages
Commit: 9dc72b8c03afdfca25e2f2c9c8be936e969d1a5b
Parents: c1282dd
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 12:06:36 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 12:06:36 2015 -0400

----------------------------------------------------------------------
 _sass/_doc-content.scss | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/9dc72b8c/_sass/_doc-content.scss
----------------------------------------------------------------------
diff --git a/_sass/_doc-content.scss b/_sass/_doc-content.scss
index d78f71c..f81ce78 100644
--- a/_sass/_doc-content.scss
+++ b/_sass/_doc-content.scss
@@ -19,7 +19,9 @@
 body.blog #menu ul li.toc-categories {
   display:none;
 }
-
+body.blog #menu ul li.logo {
+  padding-left:30px;
+}
 /* Bottom navigation (left and right arrows) */
 
 div.doc-nav{


[11/26] drill git commit: aasgsdgsd

Posted by ts...@apache.org.
aasgsdgsd


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/0283a9ed
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/0283a9ed
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/0283a9ed

Branch: refs/heads/gh-pages
Commit: 0283a9ed482130c6c375ab1b32cb3a103fcca04c
Parents: f241403
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 11:56:54 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 11:56:54 2015 -0400

----------------------------------------------------------------------
 _layouts/default.html |  2 +-
 _plugins/bodyClass.rb | 34 ----------------------------------
 js/script.js          |  8 ++++++++
 3 files changed, 9 insertions(+), 35 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/0283a9ed/_layouts/default.html
----------------------------------------------------------------------
diff --git a/_layouts/default.html b/_layouts/default.html
index 4059872..ccb47db 100644
--- a/_layouts/default.html
+++ b/_layouts/default.html
@@ -3,7 +3,7 @@
 
 {% include head.html %}
 
-<body onResize="resized();" class="{% body_class %}">
+<body onResize="resized();">
   <div class="page-wrap">
     {% include menu.html %}
     {{ content }}

http://git-wip-us.apache.org/repos/asf/drill/blob/0283a9ed/_plugins/bodyClass.rb
----------------------------------------------------------------------
diff --git a/_plugins/bodyClass.rb b/_plugins/bodyClass.rb
deleted file mode 100644
index 876121f..0000000
--- a/_plugins/bodyClass.rb
+++ /dev/null
@@ -1,34 +0,0 @@
-class BodyClassTag < Liquid::Tag  
-
-  def generate_body_class(prefix, id)
-    id = id.gsub(/\.\w*?$/, '').gsub(/[-\/]/, '_').gsub(/^_/, '') # Remove extension from url, replace '-' and '/' with underscore, Remove leading '_'
-
-    case prefix
-    when "class"
-      prefix = ""
-    else
-      prefix = "#{prefix}_"
-    end
-
-    "#{prefix}#{id}"
-  end
-
-  def render(context)
-    page = context.environments.first["page"]
-    classes = []
-
-    %w[class url categories tags layout].each do |prop|
-      next unless page.has_key?(prop)
-      if page[prop].kind_of?(Array)
-        page[prop].each { |proper| classes.push generate_body_class(prop, proper) }
-      else
-        classes.push generate_body_class(prop, page[prop])
-      end
-    end
-
-    classes.join(" ")
-  end
-
-end
-
-Liquid::Template.register_tag('body_class', BodyClassTag)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0283a9ed/js/script.js
----------------------------------------------------------------------
diff --git a/js/script.js b/js/script.js
index 1e98b8d..b68d97d 100755
--- a/js/script.js
+++ b/js/script.js
@@ -37,6 +37,14 @@ $(document).ready(function(e) {
 	resized();
 	
 	$(window).scroll(onScroll);
+
+	function getParameterByName(name) {
+    name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
+    var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
+        results = regex.exec(location.search);
+        alert(results);
+    return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
+	}
 });
 
 var reel_currentIndex = 0;


[22/26] drill git commit: mapr-tech-qa/14d9fdedc971a45b

Posted by ts...@apache.org.
mapr-tech-qa/14d9fdedc971a45b

Update 020-using-jdbc-with-squirrel-on-windows.md #79

broken links

DRILL-343


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/d6f216a6
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/d6f216a6
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/d6f216a6

Branch: refs/heads/gh-pages
Commit: d6f216a60b04b5366a3f3905450988597a421118
Parents: 9a2f1ba
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 29 13:51:34 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 29 14:35:49 2015 -0700

----------------------------------------------------------------------
 .../040-persistent-configuration-storage.md               |  2 +-
 .../010-apache-drill-contribution-guidelines.md           |  2 +-
 .../020-using-jdbc-with-squirrel-on-windows.md            |  2 +-
 .../sql-functions/020-data-type-conversion.md             | 10 +++++-----
 4 files changed, 8 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/d6f216a6/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md b/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md
index 59180b5..053f25b 100644
--- a/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md
+++ b/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md
@@ -4,7 +4,7 @@ parent: "Configuration Options"
 ---
 Drill stores persistent configuration data in a persistent configuration store
 (PStore). This data is encoded in JSON or Protobuf format. Drill can use the
-local file system, ZooKeeper, HBase, or MapR-DB to store this data. The data
+local file system or a distributed file system, such as HDFS or MapR-FS, to store this data. The data
 stored in a PStore includes state information for storage plugins, query
 profiles, and ALTER SYSTEM settings. The default type of PStore configured
 depends on the Drill installation mode.

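The ALTER SYSTEM settings mentioned above are one kind of data written to the PStore. A minimal sketch of such a command (the option name is only an illustration of the syntax):

    ALTER SYSTEM SET `planner.width.max_per_node` = 4;
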
http://git-wip-us.apache.org/repos/asf/drill/blob/d6f216a6/_docs/developer-information/contribute-to-drill/010-apache-drill-contribution-guidelines.md
----------------------------------------------------------------------
diff --git a/_docs/developer-information/contribute-to-drill/010-apache-drill-contribution-guidelines.md b/_docs/developer-information/contribute-to-drill/010-apache-drill-contribution-guidelines.md
index 82531e1..9d1047d 100644
--- a/_docs/developer-information/contribute-to-drill/010-apache-drill-contribution-guidelines.md
+++ b/_docs/developer-information/contribute-to-drill/010-apache-drill-contribution-guidelines.md
@@ -114,7 +114,7 @@ Please do:
   * comment code whose function or rationale is not obvious;
   * update documentation (e.g., _package.html_ files, this wiki, etc.)
 
-Updating a patch
+### Updating a patch
 
 For patch updates, our convention is to number them like
 DRILL-1856.1.patch.txt, DRILL-1856.2.patch.txt, etc. And then click the

http://git-wip-us.apache.org/repos/asf/drill/blob/d6f216a6/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md b/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md
index 1c7d465..9d7d3fb 100755
--- a/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md
+++ b/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md
@@ -30,7 +30,7 @@ machine:
     <drill_installation_directory>/jars/jdbc-driver/drill-jdbc-all-<version>.jar
 
 Or, you can download the [apache-
-drill-1.0.0.tar.gz](http://apache.osuosl.org/drill/drill-1.0.0/apache-drill-1.0.0-src.tar.gz) file to a location on your Windows machine, and
+drill-1.0.0.tar.gz](http://apache.osuosl.org/drill/drill-1.0.0/apache-drill-1.0.0.tar.gz) file to a location on your Windows machine, and
 extract the contents of the file. You may need to use a decompression utility,
 such as [7-zip](http://www.7-zip.org/) to extract the archive. Once extracted,
 you can locate the driver in the following directory:

http://git-wip-us.apache.org/repos/asf/drill/blob/d6f216a6/_docs/sql-reference/sql-functions/020-data-type-conversion.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/020-data-type-conversion.md b/_docs/sql-reference/sql-functions/020-data-type-conversion.md
index 0476902..0197f8a 100644
--- a/_docs/sql-reference/sql-functions/020-data-type-conversion.md
+++ b/_docs/sql-reference/sql-functions/020-data-type-conversion.md
@@ -395,11 +395,11 @@ use in your Drill queries as described in this section:
 
 **Function**| **Return Type**  
 ---|---  
-[TO_CHAR](#TO_CHAR)(expression, format)| VARCHAR  
-[TO_DATE](#TO_DATE)(expression, format)| DATE  
-[TO_NUMBER](#TO_NUMBER)(VARCHAR, format)| DECIMAL  
-[TO_TIMESTAMP](#TO_TIMESTAMP)(VARCHAR, format)| TIMESTAMP
-[TO_TIMESTAMP](#TO_TIMESTAMP)(DOUBLE)| TIMESTAMP
+[TO_CHAR]({{site.baseurl}}/docs/data-type-conversion/#TO_CHAR)(expression, format)| VARCHAR  
+[TO_DATE]({{site.baseurl}}/docs/data-type-conversion/#TO_DATE)(expression, format)| DATE  
+[TO_NUMBER]({{site.baseurl}}/docs/data-type-conversion/#TO_NUMBER)(VARCHAR, format)| DECIMAL  
+[TO_TIMESTAMP]({{site.baseurl}}/docs/data-type-conversion/#TO_TIMESTAMP)(VARCHAR, format)| TIMESTAMP
+[TO_TIMESTAMP]({{site.baseurl}}/docs/data-type-conversion/#TO_TIMESTAMP)(DOUBLE)| TIMESTAMP
 
 ### Format Specifiers for Numerical Conversions
 Use the following Java format specifiers for converting numbers:

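A quick sketch of two of the conversion functions in the table above (the date pattern and literal values are illustrative):

    SELECT TO_DATE('2015-05-29', 'yyyy-MM-dd') FROM (VALUES(1));
    SELECT TO_CHAR(1256.789, '#,###.##') FROM (VALUES(1));
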

[26/26] drill git commit: Removed http:

Posted by ts...@apache.org.
Removed http:


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/a6822dc4
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/a6822dc4
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/a6822dc4

Branch: refs/heads/gh-pages
Commit: a6822dc4fbba3464dfe3f92d03d1d15f9dd8f2b9
Parents: 7c0e390
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 29 21:48:18 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 29 21:48:18 2015 -0700

----------------------------------------------------------------------
 _includes/head.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/a6822dc4/_includes/head.html
----------------------------------------------------------------------
diff --git a/_includes/head.html b/_includes/head.html
index 9b95254..05c55a2 100644
--- a/_includes/head.html
+++ b/_includes/head.html
@@ -7,7 +7,7 @@
 <title>{% if page.title %}{{ page.title }} - {{ site.title_suffix }}{% else %}{{ site.title }}{% endif %}</title>
 
 <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css" rel="stylesheet" type="text/css"/>
-<link href='http://fonts.googleapis.com/css?family=PT+Sans' rel='stylesheet' type='text/css'>
+<link href='//fonts.googleapis.com/css?family=PT+Sans' rel='stylesheet' type='text/css'>
 <link href="{{ site.baseurl }}/css/site.css" rel="stylesheet" type="text/css"/>
 
 <link rel="shortcut icon" href="{{ site.baseurl }}/favicon.ico" type="image/x-icon"/>


[20/26] drill git commit: safari fix

Posted by ts...@apache.org.
safari fix


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/dfa52dcb
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/dfa52dcb
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/dfa52dcb

Branch: refs/heads/gh-pages
Commit: dfa52dcbcb659d531d7caf97253411826ee78264
Parents: 7d87b4b
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 15:04:27 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 15:04:27 2015 -0400

----------------------------------------------------------------------
 index.html | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/dfa52dcb/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index f821aac..513dd7d 100755
--- a/index.html
+++ b/index.html
@@ -45,10 +45,10 @@ $(document).ready(function() {
   <div class="item">
     <div class="headlines tc">
       <div id="video-slider" class="slider">
-        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-65c42i7Xg7Q.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">The Rise of the Non-Relational Datastore</div></div>
-        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=MYY51kiFPTk"><img src="{{ site.baseurl }}/images/thumbnail-MYY51kiFPTk.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Deployment Options and BI Tools</div></div>
-        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=bhmNbH2yzhM"><img src="{{ site.baseurl }}/images/thumbnail-bhmNbH2yzhM.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Connecting to Data Sources</div></div>
-        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-6pGeQOXDdD8.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">High Performance with a JSON Data Model</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="https://www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-65c42i7Xg7Q.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">The Rise of the Non-Relational Datastore</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="https://www.youtube.com/embed/MYY51kiFPTk"><img src="{{ site.baseurl }}/images/thumbnail-MYY51kiFPTk.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Deployment Options and BI Tools</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="https://www.youtube.com/embed/bhmNbH2yzhM"><img src="{{ site.baseurl }}/images/thumbnail-bhmNbH2yzhM.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Connecting to Data Sources</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="https://www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-6pGeQOXDdD8.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">High Performance with a JSON Data Model</div></div>
       </div>
       <h1 class="main-headline">Apache Drill</h1>
       <h2 id="sub-headline">Schema-free SQL Query Engine <br class="mobile-break" />for Hadoop, NoSQL and <br class="mobile-break" />Cloud Storage</h2>


[07/26] drill git commit: more

Posted by ts...@apache.org.
more


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/615a48c3
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/615a48c3
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/615a48c3

Branch: refs/heads/gh-pages
Commit: 615a48c353d52796ca6b3900123ecfbf172ea5bc
Parents: 7561912
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Wed May 27 17:20:35 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Wed May 27 17:20:35 2015 -0400

----------------------------------------------------------------------
 _sass/_site-responsive.scss | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/615a48c3/_sass/_site-responsive.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-responsive.scss b/_sass/_site-responsive.scss
index cc0d432..9494a63 100644
--- a/_sass/_site-responsive.scss
+++ b/_sass/_site-responsive.scss
@@ -162,8 +162,11 @@
   div.int_search {
     width: 100%;
   }
-  
+
  /* Fixing header slider height/width relative issues */
+ .headlines.tc {
+  padding: 0px 30px;
+}
     #header .scroller .item {    
     height: auto !important;
   }
@@ -172,6 +175,7 @@
   }
   #header .scroller, #header .scroller .item, #header .scroller .item div.headlines.tc {    
     max-width: 100% !important;
+    width: auto;
   }
   div#video-slider {    
     float: none !important;    
@@ -221,9 +225,6 @@
     font-size: 24px;
     line-height: 32px;
   }
-  #header .scroller .item div.headlines.tc {
-    margin-left: 30px;
-  }
   div.headlines a.download-headline {
     font-size: .7em;
   }


[03/26] drill git commit: DRILL-3134

Posted by ts...@apache.org.
DRILL-3134


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/9bbb8696
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/9bbb8696
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/9bbb8696

Branch: refs/heads/gh-pages
Commit: 9bbb869691a653f25b202082d1727bdddba2fd06
Parents: c6be6cc
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Tue May 26 14:57:48 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Tue May 26 14:57:48 2015 -0700

----------------------------------------------------------------------
 _docs/sql-reference/data-types/010-supported-data-types.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/9bbb8696/_docs/sql-reference/data-types/010-supported-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/010-supported-data-types.md b/_docs/sql-reference/data-types/010-supported-data-types.md
index 45a4366..48397d0 100644
--- a/_docs/sql-reference/data-types/010-supported-data-types.md
+++ b/_docs/sql-reference/data-types/010-supported-data-types.md
@@ -52,11 +52,11 @@ Drill uses map and array data types internally for reading complex and nested da
 
 `a[1]`  
 
-You can refer to the value for a key in a map using this syntax:
+You can refer to the value for a key in a map using dot notation:
 
-`m['k']`
+`t.m.k`
 
-The section [“Query Complex Data”]({{ site.baseurl }}/docs/querying-complex-data-introduction) shows how to use [composite types]({{site.baseurl}}/docs/supported-data-types/#composite-types) to access nested arrays. ["Handling Different Data Types"]({{ site.baseurl }}/docs/handling-different-data-types/#handling-json-and-parquet-data) includes examples of JSON maps and arrays. Drill provides functions for handling array and map types:
+The section [“Query Complex Data”]({{ site.baseurl }}/docs/querying-complex-data-introduction) shows how to use composite types to access nested arrays. ["Handling Different Data Types"]({{ site.baseurl }}/docs/handling-different-data-types/#handling-json-and-parquet-data) includes examples of JSON maps and arrays. Drill provides functions for handling array and map types:
 
 * ["KVGEN"]({{site.baseurl}}/docs/kvgen/)
 * ["FLATTEN"]({{site.baseurl}}/docs/flatten/)


[06/26] drill git commit: header fix

Posted by ts...@apache.org.
header fix


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/75619126
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/75619126
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/75619126

Branch: refs/heads/gh-pages
Commit: 75619126a234b2b14d0c264dc66498838b1d1912
Parents: 2b51a7f
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Wed May 27 17:12:12 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Wed May 27 17:12:12 2015 -0400

----------------------------------------------------------------------
 _sass/_site-responsive.scss | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/75619126/_sass/_site-responsive.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-responsive.scss b/_sass/_site-responsive.scss
index 1a32308..cc0d432 100644
--- a/_sass/_site-responsive.scss
+++ b/_sass/_site-responsive.scss
@@ -162,6 +162,21 @@
   div.int_search {
     width: 100%;
   }
+  
+ /* Fixing header slider height/width relative issues */
+    #header .scroller .item {    
+    height: auto !important;
+  }
+  #header {    
+    height: auto !important;
+  }
+  #header .scroller, #header .scroller .item, #header .scroller .item div.headlines.tc {    
+    max-width: 100% !important;
+  }
+  div#video-slider {    
+    float: none !important;    
+    margin: 60px auto 40px auto;
+  }
 }
 
 @media (max-width: 570px) {


[08/26] drill git commit: more cool header fixes, that's how I party

Posted by ts...@apache.org.
more cool header fixes, that's how I party


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/59899874
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/59899874
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/59899874

Branch: refs/heads/gh-pages
Commit: 59899874a04b02f6c163060818e004b0e7d4eff5
Parents: 615a48c
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Wed May 27 17:26:33 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Wed May 27 17:26:33 2015 -0400

----------------------------------------------------------------------
 _sass/_site-responsive.scss | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/59899874/_sass/_site-responsive.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-responsive.scss b/_sass/_site-responsive.scss
index 9494a63..09f16dd 100644
--- a/_sass/_site-responsive.scss
+++ b/_sass/_site-responsive.scss
@@ -104,7 +104,7 @@
 
 
   div.home-row{
-    width:100%;
+      padding: 0px 20px 0px 0px;
   }
 
   div.home-row:nth-child(odd) div.small{


[14/26] drill git commit: added auto body classes, blocking that erroneous blog hamburger

Posted by ts...@apache.org.
added auto body classes, blocking that erroneous blog hamburger


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/c1282ddd
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/c1282ddd
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/c1282ddd

Branch: refs/heads/gh-pages
Commit: c1282ddde560749e8fa9d6cdb36e325f34fab6b3
Parents: 5f066c8
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 12:04:48 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 12:04:48 2015 -0400

----------------------------------------------------------------------
 _sass/_doc-content.scss | 3 +++
 1 file changed, 3 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/c1282ddd/_sass/_doc-content.scss
----------------------------------------------------------------------
diff --git a/_sass/_doc-content.scss b/_sass/_doc-content.scss
index 300d4f9..d78f71c 100644
--- a/_sass/_doc-content.scss
+++ b/_sass/_doc-content.scss
@@ -16,6 +16,9 @@
   display: none;
   overflow: auto;
 }
+body.blog #menu ul li.toc-categories {
+  display:none;
+}
 
 /* Bottom navigation (left and right arrows) */
 


[04/26] drill git commit: DRILL-3169 multiple dir

Posted by ts...@apache.org.
DRILL-3169 multiple dir


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/806f24fd
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/806f24fd
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/806f24fd

Branch: refs/heads/gh-pages
Commit: 806f24fd461a9b58b27ef1c09b89913faae62744
Parents: 9bbb869
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Tue May 26 16:48:37 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Tue May 26 16:48:37 2015 -0700

----------------------------------------------------------------------
 .../030-querying-plain-text-files.md            | 95 ++------------------
 .../040-querying-directories.md                 | 45 ++--------
 2 files changed, 12 insertions(+), 128 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/806f24fd/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
index aeb3543..f79f2b9 100644
--- a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
+++ b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
@@ -194,104 +194,23 @@ times a year in the books that Google scans.
          +------------------------------------+-------------------+------------+
          5 rows selected (1.175 seconds)
 
-The Drill default storage plugins support common file formats. If you need
-support for some other file format, such as GZ, create a custom storage plugin. You can also create a storage plugin to simplify querying files having long path names. A workspace name replaces the long path name.
+The Drill default storage plugins support common file formats. 
 
 
-## Create a Storage Plugin
+## Query the GZ File Directly
 
-This example covers how to create and use a storage plugin to simplify queries or to query a file type that `dfs` does not specify, GZ in this case. First, you create the storage plugin in the Drill Web UI. Next, you connect to the
-file through the plugin to query a file.
+This example covers how to query the GZ file containing the compressed TSV data directly. You need to rename the GZ file so that the name identifies the type of delimited data it contains, such as CSV or TSV. In this example, you add `.tsv` before the `.gz` extension.
 
-You can create a storage plugin using the Apache Drill Web UI to query the GZ file containing the compressed TSV data.
-
-  1. Create an `ngram` directory on your file system.
-  2. Copy the GZ file `googlebooks-eng-all-5gram-20120701-zo.gz` to the `ngram` directory.
-  3. Open the Drill Web UI by navigating to <http://localhost:8047/storage>.   
-     To open the Drill Web UI, the [Drill shell]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x/) must still be running.
-  4. In New Storage Plugin, type `myplugin`.  
-     ![new plugin]({{ site.baseurl }}/docs/img/ngram_plugin.png)    
-  5. Click **Create**.  
-     The Configuration screen appears.
-  6. Replace null with the following storage plugin definition, except on the location line, use the *full* path to your `ngram` directory instead of the drilluser's path and give your workspace an arbitrary name, for example, ngram:
-  
-        {
-          "type": "file",
-          "enabled": true,
-          "connection": "file:///",
-          "workspaces": {
-            "ngram": {
-              "location": "/Users/drilluser/ngram",
-              "writable": false,
-              "defaultInputFormat": null
-           }
-         },
-         "formats": {
-           "tsv": {
-             "type": "text",
-             "extensions": [
-               "gz"
-             ],
-             "delimiter": "\t"
-            }
-          }
-        }
-
-  7. Click **Create**.  
-     The success message appears briefly.
-  8. Click **Back**.  
-     The new plugin appears in Enabled Storage Plugins.  
-     ![new plugin]({{ site.baseurl }}/docs/img/ngram_plugin.png) 
-  9. Go back to the Drill shell, and list the storage plugins.  
-          SHOW DATABASES;
-
-          +---------------------+
-          |     SCHEMA_NAME     |
-          +---------------------+
-          | INFORMATION_SCHEMA  |
-          | cp.default          |
-          | dfs.default         |
-          | dfs.root            |
-          | dfs.tmp             |
-          | myplugin.default    |
-          | myplugin.ngram      |
-          | sys                 |
-          +---------------------+
-          8 rows selected (0.105 seconds)
-
-Your custom plugin appears in the list and has two workspaces: the `ngram`
-workspace that you defined and a default workspace.
-
-### Connect to and Query a File
-
-When querying the same data source repeatedly, avoiding long path names is
-important. This exercise demonstrates how to simplify the query. Instead of
-using the full path to the Ngram file, you use dot notation in the FROM
-clause.
-
-``<workspace name>.`<location>```
-
-This syntax assumes you connected to a storage plugin that defines the
-location of the data. To query the data source while you are _not_ connected to
-that storage plugin, include the plugin name:
-
-``<plugin name>.<workspace name>.`<location>```
-
-This exercise shows how to query Ngram data when you are connected to `myplugin`.
-
-  1. Connect to the ngram file through the custom storage plugin.  
-     `USE myplugin;`
-  2. Get data about "Zoological Journal of the Linnean" that appears more than 250 times a year in the books that Google scans. In the FROM clause, instead of using the full path to the file as you did in the last exercise, connect to the data using the storage plugin workspace name ngram.
+  1. Rename the GZ file `googlebooks-eng-all-5gram-20120701-zo.gz` to `googlebooks-eng-all-5gram-20120701-zo.tsv.gz`.
+  2. Query the renamed GZ file directly to get data about "Zoological Journal of the Linnean" that appears more than 250 times a year in the books that Google scans. In the FROM clause, use the default `dfs` storage plugin and the full path to the renamed file.
   
          SELECT COLUMNS[0], 
                 COLUMNS[1], 
                 COLUMNS[2] 
-         FROM ngram.`/googlebooks-eng-all-5gram-20120701-zo.gz` 
+         FROM dfs.`/Users/drilluser/Downloads/googlebooks-eng-all-5gram-20120701-zo.tsv.gz` 
          WHERE ((columns[0] = 'Zoological Journal of the Linnean') 
          AND (columns[2] > 250)) 
          LIMIT 10;
 
-     The five rows of output appear.  
-
-To continue with this example and query multiple files in a directory, see the section, ["Example of Querying Multiple Files in a Directory"]({{site.baseurl}}/docs/querying-directories/#example-of-querying-multiple-files-in-a-directory).
+     The 5 rows of output appear.  
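
As a companion to the rewritten example above, here is a hedged sketch of the same query with column aliases and an explicit cast. Text-file columns are read as VARCHAR, so the CAST and the alias names (`ngram`, `pub_year`, `match_count`) are illustrative assumptions rather than part of the committed doc.

    SELECT COLUMNS[0]              AS ngram,
           COLUMNS[1]              AS pub_year,
           CAST(COLUMNS[2] AS INT) AS match_count
    FROM dfs.`/Users/drilluser/Downloads/googlebooks-eng-all-5gram-20120701-zo.tsv.gz`
    WHERE COLUMNS[0] = 'Zoological Journal of the Linnean'
      AND CAST(COLUMNS[2] AS INT) > 250
    LIMIT 10;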
 

http://git-wip-us.apache.org/repos/asf/drill/blob/806f24fd/_docs/query-data/query-a-file-system/040-querying-directories.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/040-querying-directories.md b/_docs/query-data/query-a-file-system/040-querying-directories.md
index 4a5b4ae..88b5b40 100644
--- a/_docs/query-data/query-a-file-system/040-querying-directories.md
+++ b/_docs/query-data/query-a-file-system/040-querying-directories.md
@@ -13,8 +13,8 @@ same structure: `plays.csv` and `moreplays.csv`. The first file contains 7
 records and the second file contains 3 records. The following query returns
 the "union" of the two files, ordered by the first column:
 
-    0: jdbc:drill:zk=local> select columns[0] as `Year`, columns[1] as Play 
-    from dfs.`/Users/brumsby/drill/testdata` order by 1;
+    0: jdbc:drill:zk=local> SELECT COLUMNS[0] AS `Year`, COLUMNS[1] AS Play 
+    FROM dfs.`/Users/brumsby/drill/testdata` order by 1;
  
     +------------+------------------------+
     |    Year    |          Play          |
@@ -49,7 +49,7 @@ You can query all of these files, or a subset, by referencing the file system
 once in a Drill query. For example, the following query counts the number of
 records in all of the files inside the `2013` directory:
 
-    0: jdbc:drill:> select count(*) from MFS.`/mapr/drilldemo/labs/clicks/logs/2013` ;
+    0: jdbc:drill:> SELECT COUNT(*) FROM MFS.`/mapr/drilldemo/labs/clicks/logs/2013` ;
     +------------+
     |   EXPR$0   |
     +------------+
@@ -64,7 +64,7 @@ subdirectories: `2012`, `2013`, and `2014`. The following query constrains
 files inside the subdirectory named `2013`. The variable `dir0` refers to the
 first level down from logs, `dir1` to the next level, and so on.
 
-    0: jdbc:drill:> use bob.logdata;
+    0: jdbc:drill:> USE bob.logdata;
     +------------+-----------------------------------------+
     |     ok     |              summary                    |
     +------------+-----------------------------------------+
@@ -72,7 +72,7 @@ first level down from logs, `dir1` to the next level, and so on.
     +------------+-----------------------------------------+
     1 row selected (0.305 seconds)
  
-    0: jdbc:drill:> select * from logs where dir0='2013' limit 10;
+    0: jdbc:drill:> SELECT * FROM logs WHERE dir0='2013' LIMIT 10;
     +------------+------------+------------+------------+------------+------------+------------+------------+------------+-------------+
     |    dir0    |    dir1    |  trans_id  |    date    |    time    |  cust_id   |   device   |   state    |  camp_id   |  keywords   |
     +------------+------------+------------+------------+------------+------------+------------+------------+------------+-------------+
@@ -89,38 +89,3 @@ first level down from logs, `dir1` to the next level, and so on.
     +------------+------------+------------+------------+------------+------------+------------+------------+------------+-------------+
     10 rows selected (0.583 seconds)
 
-## Example of Querying Multiple Files in a Directory
-
-This example is a continuation of the example in the section, ["Example of Querying a TSV File"]({{site.baseurl}}/docs/querying-plain-text-files/#example-of-querying-a-tsv-file) that creates a subdirectory in the `ngram` directory and [custom plugin workspace]({{site.baseurl}}/docs/querying-plain-text-files/#create-a-storage-plugin) you created earlier.
-
-You download a second Ngram file. Next, you
-move both Ngram GZ files you downloaded to the `ngram` subdirectory. Finally, using the custom
-plugin workspace, you query both files. In the FROM clause, simply reference
-the subdirectory.
-
-  1. Download a second file of compressed Google Ngram data from this location: 
-  
-     http://storage.googleapis.com/books/ngrams/books/googlebooks-eng-all-2gram-20120701-ze.gz
-  2. Move `googlebooks-eng-all-2gram-20120701-ze.gz` to the `ngram/myfiles` subdirectory. 
-  3. Move the 5gram file you downloaded earlier `googlebooks-eng-all-5gram-20120701-zo.gz` to the `ngram/myfiles` subdirectory.
-  4. In the Drill shell, use the `myplugin.ngrams` workspace. 
-   
-          USE myplugin.ngram;
-  5. Query the myfiles directory for the "Zoological Journal of the Linnean" or "zero temperatures" in books published in 1998.
-  
-          SELECT * 
-          FROM myfiles 
-          WHERE (((COLUMNS[0] = 'Zoological Journal of the Linnean')
-            OR (COLUMNS[0] = 'zero temperatures')) 
-            AND (COLUMNS[1] = '1998'));
-The output lists ngrams from both files.
-
-          +----------------------------------------------------------+
-          |                         columns                          |
-          +----------------------------------------------------------+
-          | ["Zoological Journal of the Linnean","1998","157","53"]  |
-          | ["zero temperatures","1998","628","487"]                 |
-          +----------------------------------------------------------+
-          2 rows selected (7.007 seconds)
-
-For more information about querying directories, see the section, ["Query Directory Functions"]({{site.baseurl}}/docs/query-directory-functions).
\ No newline at end of file
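
Because this commit removes the custom-workspace walkthrough for querying multiple files, here is a minimal sketch of the remaining approach: pointing the FROM clause at a directory of renamed `.tsv.gz` files through `dfs`. The `/Users/drilluser/ngram/myfiles` path is an assumption carried over from the deleted example, and the query assumes every file in the directory shares the same TSV layout.

    SELECT COLUMNS[0], COLUMNS[1], COLUMNS[2]
    FROM dfs.`/Users/drilluser/ngram/myfiles`
    WHERE COLUMNS[0] = 'Zoological Journal of the Linnean'
       OR COLUMNS[0] = 'zero temperatures';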


[10/26] drill git commit: Body class plugin for fun

Posted by ts...@apache.org.
Body class plugin for fun


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/f2414032
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/f2414032
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/f2414032

Branch: refs/heads/gh-pages
Commit: f24140322a8a901532e91e03ba93ab7ee3980c6d
Parents: cc1c444
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 11:29:27 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 11:29:27 2015 -0400

----------------------------------------------------------------------
 _layouts/default.html |  2 +-
 _plugins/bodyClass.rb | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/f2414032/_layouts/default.html
----------------------------------------------------------------------
diff --git a/_layouts/default.html b/_layouts/default.html
index ccb47db..4059872 100644
--- a/_layouts/default.html
+++ b/_layouts/default.html
@@ -3,7 +3,7 @@
 
 {% include head.html %}
 
-<body onResize="resized();">
+<body onResize="resized();" class="{% body_class %}">
   <div class="page-wrap">
     {% include menu.html %}
     {{ content }}

http://git-wip-us.apache.org/repos/asf/drill/blob/f2414032/_plugins/bodyClass.rb
----------------------------------------------------------------------
diff --git a/_plugins/bodyClass.rb b/_plugins/bodyClass.rb
new file mode 100644
index 0000000..876121f
--- /dev/null
+++ b/_plugins/bodyClass.rb
@@ -0,0 +1,34 @@
+class BodyClassTag < Liquid::Tag  
+
+  def generate_body_class(prefix, id)
+    id = id.gsub(/\.\w*?$/, '').gsub(/[-\/]/, '_').gsub(/^_/, '') # Remove extension from url, replace '-' and '/' with underscore, Remove leading '_'
+
+    case prefix
+    when "class"
+      prefix = ""
+    else
+      prefix = "#{prefix}_"
+    end
+
+    "#{prefix}#{id}"
+  end
+
+  def render(context)
+    page = context.environments.first["page"]
+    classes = []
+
+    %w[class url categories tags layout].each do |prop|
+      next unless page.has_key?(prop)
+      if page[prop].kind_of?(Array)
+        page[prop].each { |proper| classes.push generate_body_class(prop, proper) }
+      else
+        classes.push generate_body_class(prop, page[prop])
+      end
+    end
+
+    classes.join(" ")
+  end
+
+end
+
+Liquid::Template.register_tag('body_class', BodyClassTag)
\ No newline at end of file


[12/26] drill git commit: aasgsdgsd

Posted by ts...@apache.org.
aasgsdgsd


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/eb7e4465
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/eb7e4465
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/eb7e4465

Branch: refs/heads/gh-pages
Commit: eb7e446513f64612cc8ab3edae801a290056b96a
Parents: 0283a9e
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Fri May 29 11:58:10 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Fri May 29 11:58:10 2015 -0400

----------------------------------------------------------------------
 js/script.js | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/eb7e4465/js/script.js
----------------------------------------------------------------------
diff --git a/js/script.js b/js/script.js
index b68d97d..bd99f3f 100755
--- a/js/script.js
+++ b/js/script.js
@@ -38,13 +38,12 @@ $(document).ready(function(e) {
 	
 	$(window).scroll(onScroll);
 
-	function getParameterByName(name) {
+
     name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
     var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
-        results = regex.exec(location.search);
-        alert(results);
-    return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
-	}
+    results = regex.exec(location.search);
+    alert(results);
+    
 });
 
 var reel_currentIndex = 0;


[09/26] drill git commit: Doing my darndest to fix this whole safari thing where it don't like the iframe

Posted by ts...@apache.org.
Doing my darndest to fix this whole safari thing where it don't like the iframe


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/cc1c444c
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/cc1c444c
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/cc1c444c

Branch: refs/heads/gh-pages
Commit: cc1c444ce5a5fd7d6c7fa8a3f56263505331340b
Parents: 5989987
Author: Elliot Berry <el...@Marks-iMac-2.local>
Authored: Wed May 27 18:18:11 2015 -0400
Committer: Elliot Berry <el...@Marks-iMac-2.local>
Committed: Wed May 27 18:18:11 2015 -0400

----------------------------------------------------------------------
 index.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/cc1c444c/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index b1de264..f821aac 100755
--- a/index.html
+++ b/index.html
@@ -45,7 +45,7 @@ $(document).ready(function() {
   <div class="item">
     <div class="headlines tc">
       <div id="video-slider" class="slider">
-        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=65c42i7Xg7Q"><img src="{{ site.baseurl }}/images/thumbnail-65c42i7Xg7Q.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">The Rise of the Non-Relational Datastore</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-65c42i7Xg7Q.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">The Rise of the Non-Relational Datastore</div></div>
         <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=MYY51kiFPTk"><img src="{{ site.baseurl }}/images/thumbnail-MYY51kiFPTk.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Deployment Options and BI Tools</div></div>
         <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=bhmNbH2yzhM"><img src="{{ site.baseurl }}/images/thumbnail-bhmNbH2yzhM.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Connecting to Data Sources</div></div>
         <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-6pGeQOXDdD8.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">High Performance with a JSON Data Model</div></div>


[25/26] drill git commit: Typo

Posted by ts...@apache.org.
Typo


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/7c0e3900
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/7c0e3900
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/7c0e3900

Branch: refs/heads/gh-pages
Commit: 7c0e3900dd85e05ed69239848f378fd01b4240e9
Parents: 803138f
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 29 21:46:15 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 29 21:46:15 2015 -0700

----------------------------------------------------------------------
 index.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/7c0e3900/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index aad4ace..4b5d198 100755
--- a/index.html
+++ b/index.html
@@ -46,7 +46,7 @@ $(document).ready(function() {
     <div class="headlines tc">
       <div id="video-slider" class="slider">
         <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-65c42i7Xg7Q.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">The Rise of the Non-Relational Datastore</div></div>
-        <div class="slide"><a class="various fancybox.iframe" href="/www.youtube.com/embed/MYY51kiFPTk"><img src="{{ site.baseurl }}/images/thumbnail-MYY51kiFPTk.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Deployment Options and BI Tools</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/MYY51kiFPTk"><img src="{{ site.baseurl }}/images/thumbnail-MYY51kiFPTk.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Deployment Options and BI Tools</div></div>
         <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/bhmNbH2yzhM"><img src="{{ site.baseurl }}/images/thumbnail-bhmNbH2yzhM.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Connecting to Data Sources</div></div>
         <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/embed/6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-6pGeQOXDdD8.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">High Performance with a JSON Data Model</div></div>
       </div>


[23/26] drill git commit: Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages

Conflicts:
	_docs/data-sources-and-file-formats/050-json-data-model.md


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/a15a7f11
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/a15a7f11
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/a15a7f11

Branch: refs/heads/gh-pages
Commit: a15a7f118d245f4bbec0a4d9aa951c9990ea0ae6
Parents: 446d71c d6f216a
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 29 21:33:15 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 29 21:33:15 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 112 +++++++--------
 .../020-configuring-drill-memory.md             |   4 +-
 .../040-persistent-configuration-storage.md     |   2 +-
 .../010-connect-a-data-source-introduction.md   |   9 +-
 .../040-file-system-storage-plugin.md           | 105 ++++++++++++++
 _docs/connect-a-data-source/040-workspaces.md   |  76 ----------
 .../050-file-system-storage-plugin.md           |  64 ---------
 _docs/connect-a-data-source/050-workspaces.md   |  33 +++++
 .../connect-a-data-source/100-mapr-db-format.md |   3 +-
 .../050-json-data-model.md                      | 137 +++++++++----------
 .../020-develop-a-simple-function.md            |   4 +-
 .../030-developing-an-aggregate-function.md     |  18 +++
 .../060-custom-function-interfaces.md           |  14 +-
 .../010-apache-drill-contribution-guidelines.md |   2 +-
 .../design-docs/050-value-vectors.md            |   2 +-
 _docs/img/connect-plugin.png                    | Bin 36731 -> 41222 bytes
 .../020-using-jdbc-with-squirrel-on-windows.md  |   2 +-
 .../data-types/010-supported-data-types.md      |   7 +
 .../sql-functions/020-data-type-conversion.md   |  10 +-
 19 files changed, 313 insertions(+), 291 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/a15a7f11/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --cc _docs/data-sources-and-file-formats/050-json-data-model.md
index 1b1660d,1bc4cec..0c51757
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@@ -10,9 -10,9 +10,9 @@@ Drill supports [JSON (JavaScript Objec
  
  Semi-structured JSON data often consists of complex, nested elements having schema-less fields that differ type-wise from row to row. The data can constantly evolve. Applications typically add and remove fields frequently to meet business requirements.
  
--Using Drill you can natively query dynamic JSON data sets using SQL. Drill treats a JSON object as a SQL record. One object equals one row in a Drill table. 
++Using Drill you can natively query dynamic JSON data sets using SQL. Drill treats a JSON object as a SQL record. One object equals one row in a Drill table.
  
--You can also [query compressed .gz files]({{ site.baseurl }}/docs/drill-default-input-format#querying-compressed-json) having JSON as well as uncompressed .json files. 
++You can also [query compressed .gz files]({{ site.baseurl }}/docs/drill-default-input-format#querying-compressed-json) having JSON as well as uncompressed .json files.
  
  In addition to the examples presented later in this section, see ["How to Analyze Highly Dynamic Datasets with Apache Drill"](https://www.mapr.com/blog/how-analyze-highly-dynamic-datasets-apache-drill) for information about how to analyze a JSON data set.
  
@@@ -21,14 -21,14 +21,14 @@@ JSON data consists of the following typ
  
  * Array: ordered values, separated by commas, enclosed in square brackets
  * Boolean: true or false
--* Number: double-precision floating point number, including exponential numbers. No octal, hexadecimal, NaN, or Infinity 
++* Number: double-precision floating point number, including exponential numbers. No octal, hexadecimal, NaN, or Infinity
  * null: empty value
  * Object: unordered key/value collection enclosed in curly braces
  * String: Unicode enclosed in double quotation marks
  * Value: a string, number, true, false, null
  * Whitespace: used between tokens
  
--The following table shows SQL-JSON data type mapping: 
++The following table shows SQL-JSON data type mapping:
  
  | SQL Type | JSON Type | Description                                                                                   |
  |----------|-----------|-----------------------------------------------------------------------------------------------|
@@@ -58,10 -58,10 +58,10 @@@ When you set this option, Drill reads a
  
  [“Query Complex Data”]({{ site.baseurl }}/docs/querying-complex-data-introduction) show how to use [composite types]({{site.baseurl}}/docs/supported-data-types/#composite-types) to access nested arrays.
  
--Drill uses these types internally for reading complex and nested data structures from data sources such as JSON. 
++Drill uses these types internally for reading complex and nested data structures from data sources such as JSON.
  
  ## Reading JSON
--To read JSON data using Drill, use a [file system storage plugin]({{ site.baseurl }}/docs/file-system-storage-plugin/) that defines the JSON format. You can use the `dfs` storage plugin, which includes the definition. 
++To read JSON data using Drill, use a [file system storage plugin]({{ site.baseurl }}/docs/file-system-storage-plugin/) that defines the JSON format. You can use the `dfs` storage plugin, which includes the definition.
  
  JSON data is often complex. Data can be deeply nested and semi-structured. but you can use [workarounds ]({{ site.baseurl }}/docs/json-data-model/#limitations-and-workarounds) covered later.
  
@@@ -69,8 -69,8 +69,8 @@@ Drill reads tuples defined in single ob
  
      { name: "Apples", desc: "Delicious" }
      { name: "Oranges", desc: "Florida Navel" }
--    
--To read and [analyze complex JSON]({{ site.baseurl }}/docs/json-data-model#analyzing-json) files, use the FLATTEN and KVGEN functions. 
++
++To read and [analyze complex JSON]({{ site.baseurl }}/docs/json-data-model#analyzing-json) files, use the FLATTEN and KVGEN functions.
  
  ## Writing JSON
  You can write data from Drill to a JSON file. The following setup is required:
@@@ -90,7 -90,7 +90,7 @@@
  * Set the output format to JSON. For example:
  
          ALTER SESSION SET `store.format`='json';
--    
++
  * Use the path to the workspace location in a CTAS command. for example:
  
          USE myplugin.myworkspace;
@@@ -98,7 -98,7 +98,7 @@@
          SELECT my column from dfs.`<path_file_name>`;
  
  Drill performs the following actions, as shown in the complete [CTAS command example]({{ site.baseurl }}/docs/create-table-as-ctas/):
--   
++
  * Creates a directory using table name.
  * Writes the JSON data to the directory in the workspace location.
  
@@@ -109,11 -109,11 +109,11 @@@ Generally, you query JSON files using t
  * Dot notation to drill down into a JSON map.
  
          SELECT t.level1.level2. . . . leveln FROM <storage plugin location>`myfile.json` t
--        
++
  * Use square brackets, array-style notation to drill down into a JSON array.
  
          SELECT t.level1.level2[n][2] FROM <storage plugin location>`myfile.json` t;
--    
++
    The first index position of an array is 0.
  
  * Do not use a map, array or repeated scalar type in GROUP BY, ORDER BY or in a comparison operator.
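
To make the two notation templates above concrete, a hedged one-line example against the `ticket_sales.json` file used later in this doc: the `sales` map and its `12-21` key appear in that file, while the alias `dec_21_sales` is an illustrative assumption. The hyphenated key has to be enclosed in back ticks.

    SELECT t.sales.`12-21` AS dec_21_sales
    FROM dfs.`/Users/drilluser/drill/ticket_sales.json` t;
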
@@@ -122,11 -122,11 +122,12 @@@ Drill returns null when a document doe
  
  Using the following techniques, you can query complex, nested JSON:
  
--* Flatten nested data 
++* Flatten nested data
  * Generate key/value pairs for loosely structured data
  
  ## Example: Flatten and Generate Key Values for Complex JSON
- This example uses the following data that represents unit sales of tickets to events that were sold over a period of for several days in December:
++
+ This example uses the following data that represents unit sales of tickets to events that were sold over a period of several days in December:
  
  ### ticket_sales.json Contents
  
@@@ -150,7 -150,7 +151,7 @@@
          "12-21": 857475
        }
      }
--    
++
  Take a look at the data in Drill:
  
      +---------+---------+---------------------------------------------------------------+
@@@ -180,7 -180,7 +181,7 @@@ KVGEN allows queries against maps wher
  
  FLATTEN breaks the list of key-value pairs into separate rows on which you can apply analytic functions. FLATTEN takes a JSON array, such as the output from kvgen(sales), as an argument. Using the all (*) wildcard as the argument is not supported and returns an error. The following example continues using data from the [previous example]({{site.baseurl}}/docs/json-data-model/#example:-flatten-and-generate-key-values-for-complex-json):
  
--    SELECT FLATTEN(kvgen(sales)) Sales 
++    SELECT FLATTEN(kvgen(sales)) Sales
      FROM dfs.`/Users/drilluser/drill/ticket_sales.json`;
  
      +--------------------------------+
@@@ -198,10 -198,10 +199,10 @@@
      8 rows selected (0.171 seconds)
  
  ### Example: Aggregate Loosely Structured Data
--Use flatten and kvgen together to aggregate the data from the [previous example]({{site.baseurl}}/docs/json-data-model/#example:-flatten-and-generate-key-values-for-complex-json). Make sure all text mode is set to false to sum numbers. Drill returns an error if you attempt to sum data in all text mode. 
++Use flatten and kvgen together to aggregate the data from the [previous example]({{site.baseurl}}/docs/json-data-model/#example:-flatten-and-generate-key-values-for-complex-json). Make sure all text mode is set to false to sum numbers. Drill returns an error if you attempt to sum data in all text mode.
  
      ALTER SYSTEM SET `store.json.all_text_mode` = false;
--    
++
  Sum the ticket sales by combining the `SUM`, `FLATTEN`, and `KVGEN` functions in a single query.
  
      SELECT SUM(tkt.tot_sales.`value`) AS TicketSold FROM (SELECT flatten(kvgen(sales)) tot_sales FROM dfs.`/Users/drilluser/ticket_sales.json`) tkt;
@@@ -214,13 -214,13 +215,17 @@@
      1 row selected (0.244 seconds)
  
  ### Example: Aggregate and Sort Data
- Sum the ticket sales by state and group by day and sort in ascending order. 
 -Sum and group the ticket sales by date and sort in ascending order of total tickets sold. 
--
--    SELECT `right`(tkt.tot_sales.key,2) `December Date`, 
--    SUM(tkt.tot_sales.`value`) AS TotalSales 
--    FROM (SELECT FLATTEN(kvgen(sales)) tot_sales 
--    FROM dfs.`/Users/drilluser/ticket_sales.json`) tkt 
--    GROUP BY `right`(tkt.tot_sales.key,2) 
++<<<<<<< HEAD
++Sum the ticket sales by state and group by day and sort in ascending order.
++=======
++Sum and group the ticket sales by date and sort in ascending order of total tickets sold.
++>>>>>>> d6f216a60b04b5366a3f3905450988597a421118
++
++    SELECT `right`(tkt.tot_sales.key,2) `December Date`,
++    SUM(tkt.tot_sales.`value`) AS TotalSales
++    FROM (SELECT FLATTEN(kvgen(sales)) tot_sales
++    FROM dfs.`/Users/drilluser/ticket_sales.json`) tkt
++    GROUP BY `right`(tkt.tot_sales.key,2)
      ORDER BY TotalSales;
  
      +----------------+-------------+
@@@ -240,30 -240,30 +245,30 @@@ To access a map field in an array, use 
      {
        "type": "FeatureCollection",
        "features": [
--      { 
--        "type": "Feature", 
--        "properties": 
--        { 
--          "MAPBLKLOT": "0001001", 
--          "BLKLOT": "0001001", 
--          "BLOCK_NUM": "0001", 
--          "LOT_NUM": "001", 
--          "FROM_ST": "0", 
--          "TO_ST": "0", 
--          "STREET": "UNKNOWN", 
--          "ST_TYPE": null, 
--          "ODD_EVEN": "E" }, 
--          "geometry": 
--        { 
--            "type": "Polygon", 
--            "coordinates": 
--            [ [ 
--            [ -122.422003528252475, 37.808480096967251, 0.0 ], 
--            [ -122.422076013325281, 37.808835019815085, 0.0 ], 
--            [ -122.421102174348633, 37.808803534992904, 0.0 ], 
--            [ -122.421062569067274, 37.808601056818148, 0.0 ], 
--            [ -122.422003528252475, 37.808480096967251, 0.0 ] 
--            ] ] 
++      {
++        "type": "Feature",
++        "properties":
++        {
++          "MAPBLKLOT": "0001001",
++          "BLKLOT": "0001001",
++          "BLOCK_NUM": "0001",
++          "LOT_NUM": "001",
++          "FROM_ST": "0",
++          "TO_ST": "0",
++          "STREET": "UNKNOWN",
++          "ST_TYPE": null,
++          "ODD_EVEN": "E" },
++          "geometry":
++        {
++            "type": "Polygon",
++            "coordinates":
++            [ [
++            [ -122.422003528252475, 37.808480096967251, 0.0 ],
++            [ -122.422076013325281, 37.808835019815085, 0.0 ],
++            [ -122.421102174348633, 37.808803534992904, 0.0 ],
++            [ -122.421062569067274, 37.808601056818148, 0.0 ],
++            [ -122.422003528252475, 37.808480096967251, 0.0 ]
++            ] ]
          }
        },
      . . .
@@@ -281,7 -281,7 +286,7 @@@ This example shows how to drill down us
  
  To access the second geometry coordinate of the first city lot in the San Francisco city lots, use array indexing notation for the coordinates as well as the features:
  
--    SELECT features[0].geometry.coordinates[0][1] 
++    SELECT features[0].geometry.coordinates[0][1]
      FROM dfs.`/Users/drilluser/citylots.json`;
      +-------------------+
      |      EXPR$0       |
@@@ -290,10 -290,10 +295,10 @@@
      +-------------------+
      1 row selected (0.19 seconds)
  
--More examples of drilling down into an array are shown in ["Selecting Nested Data for a Column"]({{ site.baseurl }}/docs/selecting-nested-data-for-a-column). 
++More examples of drilling down into an array are shown in ["Selecting Nested Data for a Column"]({{ site.baseurl }}/docs/selecting-nested-data-for-a-column).
  
  ### Example: Flatten an Array of Maps using a Subquery
--By flattening the following JSON file, which contains an array of maps, you can evaluate the records of the flattened data. 
++By flattening the following JSON file, which contains an array of maps, you can evaluate the records of the flattened data.
  
      {"name":"classic","fillings":[ {"name":"sugar","cal":500} , {"name":"flour","cal":300} ] }
  
@@@ -346,8 -346,8 +351,8 @@@ This example uses a WHERE clause to dri
  
  Use dot notation, for example `t.birth.lastname` and `t.birth.bearer.max_hdl` to drill down to the nested level:
  
--    SELECT t.birth.lastname AS Name, t.birth.weight AS Weight 
--    FROM dfs.`Users/drilluser/vitalstat.json` t 
++    SELECT t.birth.lastname AS Name, t.birth.weight AS Weight
++    FROM dfs.`Users/drilluser/vitalstat.json` t
      WHERE t.birth.bearer.max_hdl < 160;
  
      +----------------+------------+
@@@ -367,7 -367,7 +372,7 @@@ In most cases, you can use a workaround
  * Complex JSON objects
  * Nested column names
  * Schema changes
--* Selecting all in a JSON directory query 
++* Selecting all in a JSON directory query
  
  ### Array at the root level
  Drill cannot read an array at the root level, outside an object.
@@@ -401,7 -401,7 +406,7 @@@ Workaround: To query n-level nested dat
      }
      . . .
  
--    SELECT dev_id, `date`, `time`, t.user_info.user_id, t.user_info.device, t.dev_info.prod_id 
++    SELECT dev_id, `date`, `time`, t.user_info.user_id, t.user_info.device, t.dev_info.prod_id
      FROM dfs.`/Users/mypath/example.json` t;
  
  ### Empty array
@@@ -409,7 -409,7 +414,7 @@@ Drill cannot read an empty array, show
  
          { "a":[] }
  
--Workaround: Remove empty arrays. 
++Workaround: Remove empty arrays.
  
  For example, you cannot query the [City Lots San Francisco in .json](https://github.com/zemirco/sf-city-lots-json) data unless you make the following modification.
  
@@@ -418,7 -418,7 +423,7 @@@
  After removing the extraneous square brackets in the coordinates array, you can drill down to query all the data for the lots.
  
  ### Lengthy JSON objects
--Currently, Drill cannot manage lengthy JSON objects, such as a gigabit JSON file. Finding the beginning and end of records can be time consuming and require scanning the whole file. 
++Currently, Drill cannot manage lengthy JSON objects, such as a gigabit JSON file. Finding the beginning and end of records can be time consuming and require scanning the whole file.
  
  Workaround: Use a tool to split the JSON file into smaller chunks of 64-128MB or 64-256MB initially until you know the total data size and node configuration. Keep the JSON objects intact in each file. A distributed file system, such as MapR-FS, is recommended over trying to manage file partitions.
  
@@@ -426,13 -426,13 +431,13 @@@
  Complex arrays and maps can be difficult or impossible to query.
  
  Workaround: Separate lengthy objects into objects delimited by curly braces using the following functions:
-- 
++
  * [FLATTEN]({{ site.baseurl }}/docs/json-data-model#flatten-json-data) separates a set of nested JSON objects into individual rows in a DRILL table.
  
  * [KVGEN]({{ site.baseurl }}/docs/kvgen/) separates objects having more elements than optimal for querying.
  
--  
--### Nested Column Names 
++
++### Nested Column Names
  
  You cannot use reserved words for nested column names because Drill returns null if you enclose n-level nested column names in back ticks. The previous example encloses the date and time column names in back ticks because the names are reserved words. The enclosure of column names in back ticks works because the date and time columns belong to the first level of the JSON object.
  
@@@ -457,8 -457,8 +462,8 @@@ Drill cannot read JSON files containin
  
  ![drill query flow]({{ site.baseurl }}/docs/img/data-sources-schemachg.png)
  
--Drill interprets numbers that do not have a decimal point as BigInt values. In this example, Drill recognizes the first two coordinates as doubles and the third coordinate as a BigInt, which causes an error. 
--                
++Drill interprets numbers that do not have a decimal point as BigInt values. In this example, Drill recognizes the first two coordinates as doubles and the third coordinate as a BigInt, which causes an error.
++
  Workaround: Set the `store.json.read_numbers_as_double` property, described earlier, to true.
  
      ALTER SYSTEM SET `store.json.read_numbers_as_double` = true;
@@@ -467,9 -467,9 +472,3 @@@
  Drill currently returns only fields common to all the files in a [directory query]({{ site.baseurl }}/docs/querying-directories) that selects all (SELECT *) JSON files.
  
  Workaround: Query each file individually.
--
--
--
--
--
--