Posted to commits@drill.apache.org by br...@apache.org on 2015/03/03 03:00:51 UTC

[2/2] drill git commit: DRILL-2336 plugin updates

DRILL-2336 plugin updates


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/0119fdde
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/0119fdde
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/0119fdde

Branch: refs/heads/gh-pages-master
Commit: 0119fdde5ebb1a4921822ed213039bdbbbec4e71
Parents: 2a34ac8
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon Mar 2 17:25:46 2015 -0800
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Mon Mar 2 17:53:58 2015 -0800

----------------------------------------------------------------------
 _docs/005-connect.md                            |  25 +-
 _docs/connect/001-plugin-reg.md                 |  43 ++--
 _docs/connect/002-plugin-conf.md                | 123 ++++++++++
 _docs/connect/002-workspaces.md                 |  74 ------
 _docs/connect/003-reg-fs.md                     |  64 -----
 _docs/connect/003-workspaces.md                 |  74 ++++++
 _docs/connect/004-reg-fs.md                     |  64 +++++
 _docs/connect/004-reg-hbase.md                  |  32 ---
 _docs/connect/005-reg-hbase.md                  |  34 +++
 _docs/connect/005-reg-hive.md                   |  86 -------
 _docs/connect/006-default-frmt.md               |  60 -----
 _docs/connect/006-reg-hive.md                   |  83 +++++++
 _docs/connect/007-default-frmt.md               |  60 +++++
 _docs/connect/007-mongo-plugin.md               | 167 -------------
 _docs/connect/008-mapr-db-plugin.md             |  31 ---
 _docs/connect/008-mongo-plugin.md               | 167 +++++++++++++
 _docs/connect/009-mapr-db-plugin.md             |  30 +++
 _docs/img/StoragePluginConfig.png               | Bin 20403 -> 0 bytes
 _docs/img/data-sources-schemachg.png            | Bin 0 -> 8071 bytes
 _docs/img/datasources-json-bracket.png          | Bin 0 -> 30129 bytes
 _docs/img/datasources-json.png                  | Bin 0 -> 16364 bytes
 _docs/img/get2kno_plugin.png                    | Bin 0 -> 55794 bytes
 _docs/img/json-workaround.png                   | Bin 20786 -> 27547 bytes
 _docs/img/plugin-default.png                    | Bin 0 -> 56412 bytes
 _docs/install/001-drill-in-10.md                |   4 +-
 _docs/sql-ref/data-types/001-date.md            |   8 +-
 _docs/tutorial/002-get2kno-sb.md                | 241 ++++++-------------
 _docs/tutorial/003-lesson1.md                   |  44 ++--
 _docs/tutorial/005-lesson3.md                   | 100 ++++----
 .../install-sandbox/001-install-mapr-vm.md      |   2 +-
 .../install-sandbox/002-install-mapr-vb.md      |   2 +-
 31 files changed, 808 insertions(+), 810 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/005-connect.md
----------------------------------------------------------------------
diff --git a/_docs/005-connect.md b/_docs/005-connect.md
index b48d200..3c60b2d 100644
--- a/_docs/005-connect.md
+++ b/_docs/005-connect.md
@@ -1,24 +1,24 @@
 ---
-title: "Connect to Data Sources"
+title: "Connect to a Data Source"
 ---
-Apache Drill serves as a query layer that connects to data sources through
-storage plugins. Drill uses the storage plugins to interact with data sources.
-You can think of a storage plugin as a connection between Drill and a data
-source.
+A storage plugin is an interface for connecting to a data source to read and write data. Apache Drill connects to a data source, such as a file on the file system or a Hive metastore, through a storage plugin. When you execute a query, Drill gets the storage plugin name from the FROM clause of your query.
 
+In addition to the connection string, the storage plugin configures the workspace and file formats for reading and writing data, as described in subsequent sections. 
+
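+For example, in the following query (borrowed from the Workspaces examples later in this section), `dfs` is the storage plugin name that tells Drill where to find the file:
+
+    SELECT * FROM dfs.`/users/max/drill/json/donuts.json` WHERE type='frosted';
+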
+## Storage Plugin Internals
 The following image represents the storage plugin layer between Drill and a
 data source:
 
 ![drill query flow]({{ site.baseurl }}/docs/img/storageplugin.png)
 
-Storage plugins provide the following information to Drill:
+A storage plugin provides the following information to Drill:
 
   * Metadata available in the underlying data source
   * Location of data
   * Interfaces that Drill can use to read from and write to data sources
   * A set of storage plugin optimization rules that assist with efficient and faster execution of Drill queries, such as pushdowns, statistics, and partition awareness
 
-Storage plugins perform scanner and writer functions, and inform the metadata
+A storage plugin performs scanner and writer functions, and informs the metadata
 repository of any known metadata, such as:
 
   * Schema
@@ -27,15 +27,6 @@ repository of any known metadata, such as:
   * Secondary indices
   * Number of blocks
 
-Storage plugins inform the execution engine of any native capabilities, such
+A storage plugin informs the execution engine of any native capabilities, such
 as predicate pushdown, joins, and SQL.
 
-Drill provides storage plugins for files and HBase/M7. Drill also integrates
-with Hive through a storage plugin. Hive provides a metadata abstraction layer
-on top of files and HBase/M7.
-
-When you run Drill to query files in HBase/M7, Drill can perform direct
-queries on the data or go through Hive, if you have metadata defined there.
-Drill integrates with the Hive metastore for metadata and also uses a Hive
-SerDe for the deserialization of records. Drill does not invoke the Hive
-execution engine for any requests.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/001-plugin-reg.md
----------------------------------------------------------------------
diff --git a/_docs/connect/001-plugin-reg.md b/_docs/connect/001-plugin-reg.md
index 6e1e679..8def0bb 100644
--- a/_docs/connect/001-plugin-reg.md
+++ b/_docs/connect/001-plugin-reg.md
@@ -1,35 +1,24 @@
 ---
 title: "Storage Plugin Registration"
-parent: "Connect to Data Sources"
+parent: "Connect to a Data Source"
 ---
-You can connect Drill to a file system, Hive, or HBase data source. To connect
-Drill to a data source, you must register the data source as a storage plugin
-instance in the Drill Web UI. You register an instance of a data source as a
-`file`, `hive`, or `hbase` storage plugin type. You can register multiple
-storage plugin instances for each storage plugin type.
+You connect Drill to a file system, Hive, HBase, or other data source using storage plugins. The Drill installation includes a number of storage plugins. On the Storage tab of the Web UI, you can view, create, reconfigure, and register a storage plugin. To open the Storage tab, go to `http://<IP address>:8047/storage`:
 
-Each node with a Drillbit installed has a running browser that provides access
-to the Drill Web UI at `http://localhost:8047/`. The Drill Web UI includes
-`cp`, `dfs`, `hive`, and `hbase` storage plugin instances by default, though
-the `hive` and `hbase` instances are disabled. You can update the `hive` and
-`hbase` instances with configuration details and then enable them.
+![drill-installed plugins]({{ site.baseurl }}/docs/img/plugin-default.png)
 
-The `cp` instance points to a JAR file in Drill’s classpath that contains
-sample data that you can query. By default, the `dfs` instance points to the
-local file system on your machine, but you can configure this instance to
-point to any distributed file system, such as a Hadoop or S3 file system.
+The Drill installation registers the `cp`, `dfs`, `hbase`, `hive`, and `mongo` storage plugin instances by default.
 
-When you add or update storage plugin instances on one Drill node in a Drill
-cluster, Drill broadcasts the information to all of the other Drill nodes so
-they all have identical storage plugin configurations. You do not need to
-restart any of the Drillbits when you add or update a storage plugin instance.
+* `cp`
+  Points to a JAR file in the Drill classpath that contains sample data that you can query. 
+* `dfs`
+  Points to the local file system on your machine, but you can configure this instance to
+point to any distributed file system, such as a Hadoop or S3 file system. 
+* `hbase`
+   Provides a connection to HBase/M7.
+* `hive`
+   Integrates Drill with the Hive metadata abstraction of files, HBase/M7, and libraries to read data and operate on SerDes and UDFs.
+* `mongo`
+   Provides a connection to MongoDB data.
 
-Each storage plugin instance that you register with Drill must have a distinct
-name. For example, if you register two storage plugin instances for a Hadoop
-file system, you might name one storage plugin instance `hdfstest` and the
-other instance `hdfsproduction`.
+In the Drill sandbox, the `dfs` storage plugin connects you to the MapR File System (MFS). In a Drill installation outside the sandbox, `dfs` connects you to the root of your file system.
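+
+For example, you can query the sample data in the `cp` plugin right away (assuming the `employee.json` sample file that ships in the Drill classpath):
+
+    SELECT * FROM cp.`employee.json` LIMIT 3;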
 
-The following example shows an HDFS data source registered in the Drill Web UI
-as a storage plugin instance of plugin type "`file"`:
-
-![drill query flow]({{ site.baseurl }}/docs/img/StoragePluginConfig.png)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/002-plugin-conf.md
----------------------------------------------------------------------
diff --git a/_docs/connect/002-plugin-conf.md b/_docs/connect/002-plugin-conf.md
new file mode 100644
index 0000000..632ed3f
--- /dev/null
+++ b/_docs/connect/002-plugin-conf.md
@@ -0,0 +1,123 @@
+---
+title: "Storage Plugin Configuration"
+parent: "Connect to a Data Source"
+---
+When you add or update storage plugin instances on one Drill node in a Drill
+cluster, Drill broadcasts the information to all of the other Drill nodes so
+they all have identical storage plugin configurations. You do not need to
+restart any of the Drillbits when you add or update a storage plugin instance.
+
+Use the Drill Web UI to update or add a new storage plugin. Launch a web browser, go to `http://<IP address of the sandbox>:8047`, and then go to the Storage tab. 
+
+To create and configure a new storage plugin:
+
+1. Enter a storage name in New Storage Plugin.
+   Each storage plugin registered with Drill must have a distinct
+name. Names are case-sensitive.
+2. Click Create.  
+3. In Configuration, configure attributes of the storage plugin, if applicable, using JSON formatting. The Storage Plugin Attributes table in the next section describes attributes typically reconfigured by users. 
+4. Click Create.
+
+Click Update to reconfigure an existing, enabled storage plugin.
+
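+As a minimal sketch, the configuration you enter in step 3 for a new `file`-type plugin might look like the following; the values mirror the REST API example later on this page and are illustrative only:
+
+    {
+      "type": "file",
+      "enabled": true,
+      "connection": "file:///",
+      "workspaces": {
+        "root": {
+          "location": "/",
+          "writable": false,
+          "defaultInputFormat": null
+        }
+      },
+      "formats": null
+    }
+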
+## Storage Plugin Attributes
+
+<table>
+  <tr>
+    <th>Attribute</th>
+    <th>Example Values</th>
+    <th>Required</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>"type"</td>
+    <td>"file"<br>"hbase"<br>"hive"<br>"mongo"</td>
+    <td>yes</td>
+    <td>The storage plugin type name supported by Drill</td>
+  </tr>
+  <tr>
+    <td>"enabled"</td>
+    <td>true<br>false</td>
+    <td>yes</td>
+    <td>The state of the storage plugin</td>
+  </tr>
+  <tr>
+    <td>"connection"</td>
+    <td>"classpath:///"<br>"file:///"<br>"mongodb://localhost:27017/"<br>"maprfs:///"</td>
+    <td>implementation-dependent</td>
+    <td>The connection string for the data source. Drill can work with any distributed file system, such as HDFS or S3, or with JSON or CSV files in your local file system.</td>
+  </tr>
+  <tr>
+    <td>"workspaces"</td>
+    <td>null<br>"logs"</td>
+    <td>no</td>
+    <td>One or more arbitrary workspace names, enclosed in double quotation marks. Workspace names are case sensitive.</td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "location"</td>
+    <td>"location": "/"<br>"location": "/tmp"</td>
+    <td>no</td>
+    <td>An alias for a path to a directory, used to shorten references to files in a query.</td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "writable"</td>
+    <td>true<br>false</td>
+    <td>yes</td>
+    <td>Indicates whether or not users can write data to this location</td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "defaultInputFormat"</td>
+    <td>null<br>"parquet"<br>"csv"<br>"json"</td>
+    <td>yes</td>
+    <td>The format Drill assumes for data in the workspace when it cannot identify the file type. See Drill Default Input Format.</td>
+  </tr>
+  <tr>
+    <td>"formats"</td>
+    <td>"psv"<br>"csv"<br>"tsv"<br>"parquet"<br>"json"<br>"maprdb"</td>
+    <td>yes</td>
+    <td>One or more file formats of data read by Drill. Drill can detect some file formats based on the file extension or the first few bits of data within the file, but needs format information for others. </td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "type"</td>
+    <td>"text"<br>"parquet"<br>"json"<br>"maprdb"</td>
+    <td>no</td>
+    <td>The type of data in the file. Although Drill can work with different file types in the same directory, restricting a Drill workspace to one file type prevents confusion. </td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "extensions"</td>
+    <td>["csv"]</td>
+    <td>format-dependent</td>
+    <td>The extension of the file.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "delimiter"</td>
+    <td>"\t"<br>","</td>
+    <td>format-dependent</td>
+    <td>The delimiter used to separate columns in text files such as CSV.</td>
+  </tr>
+</table>
+
+The configuration of other attributes, such as `size.calculator.enabled` in the hbase plugin and `configProps` in the hive plugin, is implementation-dependent and beyond the scope of this document.
+
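+For example, combining the `formats` attributes from the table above, a sketch of a CSV format definition (an excerpt of a `file`-type plugin configuration, with illustrative values):
+
+    "formats": {
+      "csv": {
+        "type": "text",
+        "extensions": ["csv"],
+        "delimiter": ","
+      }
+    }
+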
+## Case-sensitive Names
+As previously mentioned, workspace and storage plugin names are case-sensitive. For example, the following query uses a storage plugin named `dfs` and a workspace named `clicks`. When you refer to `dfs.clicks` in an SQL statement, use the defined case:
+
+    0: jdbc:drill:> USE dfs.clicks;
+
+For example, using uppercase letters in the query after defining the storage plugin and workspace names in lowercase does not work:
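+
+    0: jdbc:drill:> USE DFS.CLICKS;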
+
+## REST API
+
+Drill provides a REST API that you can use to create a storage plugin. Use an HTTP POST and pass two properties:
+
+* name
+  The plugin name. 
+
+* config
+  The storage plugin definition as you would enter it in the Web UI.
+
+For example, this command creates a plugin named `myplugin` for reading files of an unknown type located on the root of the file system:
+
+    curl -X POST -H "Content-Type: application/json" -d '{"name":"myplugin", "config": {"type": "file", "enabled": false, "connection": "file:///", "workspaces": { "root": { "location": "/", "writable": false, "defaultInputFormat": null}}, "formats": null}}' http://localhost:8047/storage/myplugin.json
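+
+To confirm that Drill created the plugin, you can request its configuration back from the same endpoint (a sketch assuming the defaults above):
+
+    curl http://localhost:8047/storage/myplugin.json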
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/002-workspaces.md
----------------------------------------------------------------------
diff --git a/_docs/connect/002-workspaces.md b/_docs/connect/002-workspaces.md
deleted file mode 100644
index 745d61b..0000000
--- a/_docs/connect/002-workspaces.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: "Workspaces"
-parent: "Storage Plugin Registration"
----
-When you register an instance of a file system data source, you can configure
-one or more workspaces for the instance. A workspace is a directory within the
-file system that you define. Drill searches the workspace to locate data when
-you run a query.
-
-Each workspace that you register defines a schema that you can connect to and
-query. Configuring workspaces is useful when you want to run multiple queries
-on files or tables in a specific directory. You cannot create workspaces for
-`hive` and `hbase` instances, though Hive databases show up as workspaces in
-Drill.
-
-The following example shows an instance of a file type storage plugin with a
-workspace named `json` configured to point Drill to the
-`/users/max/drill/json/` directory in the local file system `(dfs)`:
-
-    {
-      "type" : "file",
-      "enabled" : true,
-      "connection" : "file:///",
-      "workspaces" : {
-        "json" : {
-          "location" : "/users/max/drill/json/",
-          "writable" : false,
-          "storageformat" : json
-       } 
-    },
-
-**Note:** The `connection` parameter in the configuration above is "`file:///`", connecting Drill to the local file system (`dfs`). To connect to a Hadoop or MapR file system the `connection` parameter would be "`hdfs:///" `or` "maprfs:///", `respectively.
-
-To query a file in the example `json` workspace, you can issue the `USE`
-command to tell Drill to use the `json` workspace configured in the `dfs`
-instance for each query that you issue:
-
-**Example**
-
-    USE dfs.json;
-    SELECT * FROM dfs.json.`donuts.json` WHERE type='frosted'
-
-If the `json `workspace did not exist, the query would have to include the
-full path to the `donuts.json` file:
-
-    SELECT * FROM dfs.`/users/max/drill/json/donuts.json` WHERE type='frosted';
-
-Using a workspace alleviates the need to repeatedly enter the directory path
-in subsequent queries on the directory.
-
-### Default Workspaces
-
-Each `file` and `hive` instance includes a `default` workspace. The `default`
-workspace points to the file system or to the Hive metastore. When you query
-files and tables in the` file` or `hive default` workspaces, you can omit the
-workspace name from the query.
-
-For example, you can issue a query on a Hive table in the `default workspace`
-using either of the following formats and get the the same results:
-
-**Example**
-
-    SELECT * FROM hive.customers LIMIT 10;
-    SELECT * FROM hive.`default`.customers LIMIT 10;
-
-**Note:** Default is a reserved word. You must enclose reserved words in back ticks.
-
-Because HBase instances do not have workspaces, you can use the following
-format to query a table in HBase:
-
-    SELECT * FROM hbase.customers LIMIT 10;
-
-After you register a data source as a storage plugin instance with Drill, and
-optionally configure workspaces, you can query the data source.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/003-reg-fs.md
----------------------------------------------------------------------
diff --git a/_docs/connect/003-reg-fs.md b/_docs/connect/003-reg-fs.md
deleted file mode 100644
index ee385cd..0000000
--- a/_docs/connect/003-reg-fs.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: "Registering a File System"
-parent: "Storage Plugin Registration"
----
-You can register a storage plugin instance that connects Drill to a local file
-system or a distributed file system registered in `core-site.xml`, such as S3
-or HDFS. When you register a storage plugin instance for a file system,
-provide a unique name for the instance, and identify the type as “`file`”. By
-default, Drill includes an instance named `dfs `that points to the local file
-system on your machine. You can update this configuration to point to a
-distributed file system or you can create a new instance to point to a
-distributed file system.
-
-To register a local or a distributed file system with Apache Drill, complete
-the following steps:
-
-  1. Navigate to `[http://localhost:8047](http://localhost:8047/)`, and select the **Storage** tab.
-  2. In the New Storage Plugin window, enter a unique name and then click **Create**.
-  3. In the Configuration window, provide the following configuration information for the type of file system that you are configuring as a data source.
-     1. Local file system example:
-
-            {
-              "type": "file",
-              "enabled": true,
-              "connection": "file:///",
-              "workspaces": {
-                "root": {
-                  "location": "/user/max/donuts",
-                  "writable": false,
-                  "storageformat": null
-                 }
-              },
-                 "formats" : {
-                   "json" : {
-                     "type" : "json"
-                   }
-                 }
-              }
-     2. Distributed file system example:
-    
-            {
-              "type" : "file",
-              "enabled" : true,
-              "connection" : "hdfs://10.10.30.156:8020/",
-              "workspaces" : {
-                "root : {
-                  "location" : "/user/root/drill",
-                  "writable" : true,
-                  "storageformat" : "null"
-                }
-              },
-              "formats" : {
-                "json" : {
-                  "type" : "json"
-                }
-              }
-            }
-
-      To connect to a Hadoop file system, you must include the IP address of the
-name node and the port number.
-  4. Click **Enable**.
-
-Once you have configured a storage plugin instance for the file system, you
-can issue Drill queries against it.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/003-workspaces.md
----------------------------------------------------------------------
diff --git a/_docs/connect/003-workspaces.md b/_docs/connect/003-workspaces.md
new file mode 100644
index 0000000..6e973e5
--- /dev/null
+++ b/_docs/connect/003-workspaces.md
@@ -0,0 +1,74 @@
+---
+title: "Workspaces"
+parent: "Storage Plugin Configuration"
+---
+When you register an instance of a file system data source, you can configure
+one or more workspaces for the instance. A workspace is a directory within the
+file system that you define. Drill searches the workspace to locate data when
+you run a query.
+
+Each workspace that you register defines a schema that you can connect to and
+query. Configuring workspaces is useful when you want to run multiple queries
+on files or tables in a specific directory. You cannot create workspaces for
+`hive` and `hbase` instances, though Hive databases show up as workspaces in
+Drill.
+
+The following example shows an instance of a file type storage plugin with a
+workspace named `json` configured to point Drill to the
+`/users/max/drill/json/` directory in the local file system (`dfs`):
+
+    {
+      "type" : "file",
+      "enabled" : true,
+      "connection" : "file:///",
+      "workspaces" : {
+        "json" : {
+          "location" : "/users/max/drill/json/",
+          "writable" : false,
+          "defaultInputFormat" : "json"
+        }
+      }
+    }
+
+**Note:** The `connection` parameter in the configuration above is `file:///`, connecting Drill to the local file system (`dfs`). To connect to a Hadoop or MapR file system, the `connection` parameter would be `hdfs:///` or `maprfs:///`, respectively.
+
+To query a file in the example `json` workspace, you can issue the `USE`
+command to tell Drill to use the `json` workspace configured in the `dfs`
+instance for each query that you issue:
+
+**Example**
+
+    USE dfs.json;
+    SELECT * FROM dfs.json.`donuts.json` WHERE type='frosted'
+
+If the `json` workspace did not exist, the query would have to include the
+full path to the `donuts.json` file:
+
+    SELECT * FROM dfs.`/users/max/drill/json/donuts.json` WHERE type='frosted';
+
+Using a workspace alleviates the need to repeatedly enter the directory path
+in subsequent queries on the directory.
+
+### Default Workspaces
+
+Each `file` and `hive` instance includes a `default` workspace. The `default`
+workspace points to the file system or to the Hive metastore. When you query
+files and tables in the `file` or `hive` `default` workspaces, you can omit the
+workspace name from the query.
+
+For example, you can issue a query on a Hive table in the `default` workspace
+using either of the following formats and get the same results:
+
+**Example**
+
+    SELECT * FROM hive.customers LIMIT 10;
+    SELECT * FROM hive.`default`.customers LIMIT 10;
+
+**Note:** Default is a reserved word. You must enclose reserved words in backticks.
+
+Because HBase instances do not have workspaces, you can use the following
+format to query a table in HBase:
+
+    SELECT * FROM hbase.customers LIMIT 10;
+
+After you register a data source as a storage plugin instance with Drill, and
+optionally configure workspaces, you can query the data source.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/004-reg-fs.md
----------------------------------------------------------------------
diff --git a/_docs/connect/004-reg-fs.md b/_docs/connect/004-reg-fs.md
new file mode 100644
index 0000000..2b3e287
--- /dev/null
+++ b/_docs/connect/004-reg-fs.md
@@ -0,0 +1,64 @@
+---
+title: "File System Storage Plugin"
+parent: "Storage Plugin Configuration"
+---
+You can register a storage plugin instance that connects Drill to a local file
+system or a distributed file system registered in `core-site.xml`, such as S3
+or HDFS. When you register a storage plugin instance for a file system,
+provide a unique name for the instance, and identify the type as `file`. By
+default, Drill includes an instance named `dfs` that points to the local file
+system on your machine. You can update this configuration to point to a
+distributed file system or you can create a new instance to point to a
+distributed file system.
+
+To register a local or a distributed file system with Apache Drill, complete
+the following steps:
+
+  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  2. In the New Storage Plugin window, enter a unique name and then click **Create**.
+  3. In the Configuration window, provide the following configuration information for the type of file system that you are configuring as a data source.
+     1. Local file system example:
+
+            {
+              "type": "file",
+              "enabled": true,
+              "connection": "file:///",
+              "workspaces": {
+                "root": {
+                  "location": "/user/max/donuts",
+                  "writable": false,
+                  "defaultInputFormat": null
+                 }
+              },
+                 "formats" : {
+                   "json" : {
+                     "type" : "json"
+                   }
+                 }
+              }
+     2. Distributed file system example:
+    
+            {
+              "type" : "file",
+              "enabled" : true,
+              "connection" : "hdfs://10.10.30.156:8020/",
+              "workspaces" : {
+                "root" : {
+                  "location" : "/user/root/drill",
+                  "writable" : true,
+                  "defaultInputFormat" : null
+                }
+              },
+              "formats" : {
+                "json" : {
+                  "type" : "json"
+                }
+              }
+            }
+
+      To connect to a Hadoop file system, you must include the IP address of the
+name node and the port number.
+  4. Click **Enable**.
+
+Once you have configured a storage plugin instance for the file system, you
+can issue Drill queries against it.
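+
+For example, assuming you kept the name `dfs` and the local file system configuration above, this hypothetical query reads a `donuts.json` file from the root workspace:
+
+    SELECT * FROM dfs.root.`donuts.json`;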
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/004-reg-hbase.md
----------------------------------------------------------------------
diff --git a/_docs/connect/004-reg-hbase.md b/_docs/connect/004-reg-hbase.md
deleted file mode 100644
index 0efd435..0000000
--- a/_docs/connect/004-reg-hbase.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: "Registering HBase"
-parent: "Storage Plugin Registration"
----
-Register a storage plugin instance and specify a zookeeper quorum to connect
-Drill to an HBase data source. When you register a storage plugin instance for
-an HBase data source, provide a unique name for the instance, and identify the
-type as “hbase” in the Drill Web UI.
-
-Currently, Drill only works with HBase version 0.94.
-
-To register HBase with Drill, complete the following steps:
-
-  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab
-  2. In the disabled storage plugins section, click **Update** next to the `hbase` instance.
-  3. In the Configuration window, specify the Zookeeper quorum and port. 
-  
-     **Example**
-  
-        {
-          "type": "hbase",
-          "config": {
-            "hbase.zookeeper.quorum": "<zk1host,zk2host,zk3host> or <localhost>",
-            "hbase.zookeeper.property.clientPort": "2181"
-          },
-          "enabled": false
-        }
-
-  4. Click **Enable**.
-
-Once you have configured a storage plugin instance for the HBase, you can
-issue Drill queries against it.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/005-reg-hbase.md
----------------------------------------------------------------------
diff --git a/_docs/connect/005-reg-hbase.md b/_docs/connect/005-reg-hbase.md
new file mode 100644
index 0000000..627fb07
--- /dev/null
+++ b/_docs/connect/005-reg-hbase.md
@@ -0,0 +1,34 @@
+---
+title: "HBase Storage Plugin"
+parent: "Storage Plugin Configuration"
+---
+Register a storage plugin instance and specify a zookeeper quorum to connect
+Drill to an HBase data source. When you register a storage plugin instance for
+an HBase data source, provide a unique name for the instance, and identify the
+type as `hbase` in the Drill Web UI.
+
+Drill supports HBase version 0.98.
+
+To register HBase with Drill, complete the following steps:
+
+  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  2. In the disabled storage plugins section, click **Update** next to the `hbase` instance.
+  3. In the Configuration window, specify the Zookeeper quorum and port. 
+  
+     **Example**
+  
+        {
+          "type": "hbase",
+          "config": {
+            "hbase.zookeeper.quorum": "<zk1host,zk2host,zk3host> or <localhost>",
+            "hbase.zookeeper.property.clientPort": "2181"
+          },
+          "enabled": false
+        }
+
+  4. Click **Enable**.
+
+After you configure a storage plugin instance for HBase, you can
+issue Drill queries against it.
+
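+For example, assuming an HBase table named `customers` (the same hypothetical table used in the Workspaces examples):
+
+    SELECT * FROM hbase.customers LIMIT 10;
+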
+In the Drill sandbox, use the `dfs` storage plugin and the [MapR-DB format](/docs/mapr-db-format/) to query HBase files because the sandbox does not include HBase services.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/005-reg-hive.md
----------------------------------------------------------------------
diff --git a/_docs/connect/005-reg-hive.md b/_docs/connect/005-reg-hive.md
deleted file mode 100644
index 9b44034..0000000
--- a/_docs/connect/005-reg-hive.md
+++ /dev/null
@@ -1,86 +0,0 @@
----
-title: "Registering Hive"
-parent: "Storage Plugin Registration"
----
-You can register a storage plugin instance that connects Drill to a Hive data
-source that has a remote or embedded metastore service. When you register a
-storage plugin instance for a Hive data source, provide a unique name for the
-instance, and identify the type as “`hive`”. You must also provide the
-metastore connection information.
-
-Currently, Drill supports Hive version 0.13. To access Hive tables
-using custom SerDes or InputFormat/OutputFormat, all nodes running Drillbits
-must have the SerDes or InputFormat/OutputFormat `JAR` files in the
-`<drill_installation_directory>/jars/3rdparty` folder.
-
-## Hive Remote Metastore
-
-In this configuration, the Hive metastore runs as a separate service outside
-of Hive. Drill communicates with the Hive metastore through Thrift. The
-metastore service communicates with the Hive database over JDBC. Point Drill
-to the Hive metastore service address, and provide the connection parameters
-in the Drill Web UI to configure a connection to Drill.
-
-**Note:** Verify that the Hive metastore service is running before you register the Hive metastore.
-
-To register a remote Hive metastore with Drill, complete the following steps:
-
-  1. Issue the following command to start the Hive metastore service on the system specified in the `hive.metastore.uris`:
-
-        hive --service metastore
-  2. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
-  3. In the disabled storage plugins section, click **Update** next to the `hive` instance.
-  4. In the configuration window, add the `Thrift URI` and port to `hive.metastore.uris`.
-
-     **Example**
-     
-        {
-          "type": "hive",
-          "enabled": true,
-          "configProps": {
-            "hive.metastore.uris": "thrift://<localhost>:<port>",  
-            "hive.metastore.sasl.enabled": "false"
-          }
-        }       
-  5. Click **Enable**.
-  6. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to set the classpath, add the following line to `drill-env.sh`.
-  
-        export HADOOP_CLASSPATH=/<directory path>/hadoop/hadoop-0.20.2
-
-Once you have configured a storage plugin instance for a Hive data source, you
-can [query Hive tables](/drill/docs/querying-hive/).
-
-## Hive Embedded Metastore
-
-In this configuration, the Hive metastore is embedded within the Drill
-process. Provide the metastore database configuration settings in the Drill
-Web UI. Before you register Hive, verify that the driver you use to connect to
-the Hive metastore is in the Drill classpath located in `/<drill installation
-dirctory>/lib/.` If the driver is not there, copy the driver to `/<drill
-installation directory>/lib` on the Drill node. For more information about
-storage types and configurations, refer to [AdminManual
-MetastoreAdmin](/confluence/display/Hive/AdminManual+MetastoreAdmin).
-
-To register an embedded Hive metastore with Drill, complete the following
-steps:
-
-  1. Navigate to `[http://localhost:8047](http://localhost:8047/)`, and select the **Storage** tab
-  2. In the disabled storage plugins section, click **Update** next to `hive` instance.
-  3. In the configuration window, add the database configuration settings.
-
-     **Example**
-     
-        {
-          "type": "hive",
-          "enabled": true,
-          "configProps": {
-            "javax.jdo.option.ConnectionURL": "jdbc:<database>://<host:port>/<metastore database>;create=true",
-            "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
-            "fs.default.name": "file:///",   
-          }
-        }
-  4. Click** Enable.**
-  5. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to set the classpath, add the following line to `drill-env.sh`.
-  
-        export HADOOP_CLASSPATH=/<directory path>/hadoop/hadoop-0.20.2
- 
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/006-default-frmt.md
----------------------------------------------------------------------
diff --git a/_docs/connect/006-default-frmt.md b/_docs/connect/006-default-frmt.md
deleted file mode 100644
index 7dc55d5..0000000
--- a/_docs/connect/006-default-frmt.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: "Drill Default Input Format"
-parent: "Storage Plugin Registration"
----
-You can define a default input format to tell Drill what file type exists in a
-workspace within a file system. Drill determines the file type based on file
-extensions and magic numbers when searching a workspace.
-
-Magic numbers are file signatures that Drill uses to identify Parquet files.
-If Drill cannot identify the file type based on file extensions or magic
-numbers, the query fails. Defining a default input format can prevent queries
-from failing in situations where Drill cannot determine the file type.
-
-If you incorrectly define the file type in a workspace and Drill cannot
-determine the file type, the query fails. For example, if the directory for
-which you have defined a workspace contains JSON files and you defined the
-default input format as CSV, the query fails against the workspace.
-
-You can define one default input format per workspace. If you do not define a
-default input format, and Drill cannot detect the file format, the query
-fails. You can define a default input format for any of the file types that
-Drill supports. Currently, Drill supports the following types:
-
-  * CSV
-  * TSV
-  * PSV
-  * Parquet
-  * JSON
-
-## Defining a Default Input Format
-
-You define the default input format for a file system workspace through the
-Drill Web UI. You must have a [defined workspace](/drill/docs/workspaces) before you can define a
-default input format.
-
-To define a default input format for a workspace, complete the following
-steps:
-
-  1. Navigate to the Drill Web UI at `<drill_node_ip_address>:8047`. The Drillbit process must be running on the node before you connect to the Drill Web UI.
-  2. Select **Storage** in the toolbar.
-  3. Click **Update** next to the file system for which you want to define a default input format for a workspace.
-  4. In the Configuration area, locate the workspace for which you would like to define the default input format, and change the `defaultInputFormat` attribute to any of the supported file types.
-
-     **Example**
-     
-        {
-          "type": "file",
-          "enabled": true,
-          "connection": "hdfs:///",
-          "workspaces": {
-            "root": {
-              "location": "/drill/testdata",
-              "writable": false,
-              "defaultInputFormat": csv
-          },
-          "local" : {
-            "location" : "/max/proddata",
-            "writable" : true,
-            "defaultInputFormat" : "json"
-        }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/006-reg-hive.md
----------------------------------------------------------------------
diff --git a/_docs/connect/006-reg-hive.md b/_docs/connect/006-reg-hive.md
new file mode 100644
index 0000000..c3d2b1d
--- /dev/null
+++ b/_docs/connect/006-reg-hive.md
@@ -0,0 +1,83 @@
+---
+title: "Hive Storage Plugin"
+parent: "Storage Plugin Configuration"
+---
+You can register a storage plugin instance that connects Drill to a Hive data
+source that has a remote or embedded metastore service. When you register a
+storage plugin instance for a Hive data source, provide a unique name for the
+instance, and identify the type as `hive`. You must also provide the
+metastore connection information.
+
+Drill supports Hive 0.13. To access Hive tables
+using custom SerDes or InputFormat/OutputFormat, all nodes running Drillbits
+must have the SerDes or InputFormat/OutputFormat `JAR` files in the 
+`<drill_installation_directory>/jars/3rdparty` folder.
+
+## Hive Remote Metastore
+
+In this configuration, the Hive metastore runs as a separate service outside
+of Hive. Drill communicates with the Hive metastore through Thrift. The
+metastore service communicates with the Hive database over JDBC. Point Drill
+to the Hive metastore service address, and provide the connection parameters
+in the Drill Web UI to configure the connection.
+
+**Note:** Verify that the Hive metastore service is running before you register the Hive metastore.
+
+To register a remote Hive metastore with Drill, complete the following steps:
+
+  1. Issue the following command to start the Hive metastore service on the system specified in the `hive.metastore.uris`:
+
+        hive --service metastore
+  2. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  3. In the disabled storage plugins section, click **Update** next to the `hive` instance.
+  4. In the configuration window, add the `Thrift URI` and port to `hive.metastore.uris`.
+
+     **Example**
+     
+        {
+          "type": "hive",
+          "enabled": true,
+          "configProps": {
+            "hive.metastore.uris": "thrift://<localhost>:<port>",  
+            "hive.metastore.sasl.enabled": "false"
+          }
+        }       
+  5. Click **Enable**.
+  6. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to set the classpath, add the following line to `drill-env.sh`.
+
+        export HADOOP_CLASSPATH=/<directory path>/hadoop/hadoop-<version-number>
+
+Once you have configured a storage plugin instance for a Hive data source, you
+can [query Hive tables](/drill/docs/querying-hive/).
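+
+For example, assuming the metastore contains a `customers` table:
+
+    SELECT * FROM hive.customers LIMIT 10;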
+
+## Hive Embedded Metastore
+
+In this configuration, the Hive metastore is embedded within the Drill
+process. Provide the metastore database configuration settings in the Drill
+Web UI. Before you register Hive, verify that the driver you use to connect to
+the Hive metastore is in the Drill classpath located in `/<drill installation
+directory>/lib/`. If the driver is not there, copy the driver to `/<drill
+installation directory>/lib` on the Drill node. For more information about
+storage types and configurations, refer to [AdminManual
+MetastoreAdmin](https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin).
+
+To register an embedded Hive metastore with Drill, complete the following
+steps:
+
+  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  2. In the disabled storage plugins section, click **Update** next to the `hive` instance.
+  3. In the configuration window, add the database configuration settings.
+
+     **Example**
+     
+        {
+          "type": "hive",
+          "enabled": true,
+          "configProps": {
+            "javax.jdo.option.ConnectionURL": "jdbc:<database>://<host:port>/<metastore database>;create=true",
+            "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
+            "fs.default.name": "file:///"
+          }
+        }
+  4. Click **Enable**.
+  5. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to set the classpath, add the following line to `drill-env.sh`.
+  
+        export HADOOP_CLASSPATH=/<directory path>/hadoop/hadoop-<version-number>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/007-default-frmt.md
----------------------------------------------------------------------
diff --git a/_docs/connect/007-default-frmt.md b/_docs/connect/007-default-frmt.md
new file mode 100644
index 0000000..3ab52db
--- /dev/null
+++ b/_docs/connect/007-default-frmt.md
@@ -0,0 +1,60 @@
+---
+title: "Drill Default Input Format"
+parent: "Storage Plugin Configuration"
+---
+You can define a default input format to tell Drill what file type exists in a
+workspace within a file system. Drill determines the file type based on file
+extensions and magic numbers when searching a workspace.
+
+Magic numbers are file signatures that Drill uses to identify Parquet files.
+If Drill cannot identify the file type based on file extensions or magic
+numbers, the query fails. Defining a default input format can prevent queries
+from failing in situations where Drill cannot determine the file type.
+
+If you incorrectly define the file type in a workspace and Drill cannot
+determine the file type, the query fails. For example, if the directory for
+which you have defined a workspace contains JSON files and you defined the
+default input format as CSV, the query fails against the workspace.
+
+You can define one default input format per workspace. If you do not define a
+default input format, and Drill cannot detect the file format, the query
+fails. You can define a default input format for any of the file types that
+Drill supports. Currently, Drill supports the following types:
+
+  * CSV
+  * TSV
+  * PSV
+  * Parquet
+  * JSON
+
+## Defining a Default Input Format
+
+You define the default input format for a file system workspace through the
+Drill Web UI. You must have a [defined workspace](/drill/docs/workspaces) before you can define a
+default input format.
+
+To define a default input format for a workspace, complete the following
+steps:
+
+  1. Navigate to the Drill Web UI at `<drill_node_ip_address>:8047`. The Drillbit process must be running on the node before you connect to the Drill Web UI.
+  2. Select **Storage** in the toolbar.
+  3. Click **Update** next to the file system for which you want to define a default input format for a workspace.
+  4. In the Configuration area, locate the workspace for which you would like to define the default input format, and change the `defaultInputFormat` attribute to any of the supported file types.
+
+     **Example**
+     
+        {
+          "type": "file",
+          "enabled": true,
+          "connection": "hdfs:///",
+          "workspaces": {
+            "root": {
+              "location": "/drill/testdata",
+              "writable": false,
+              "defaultInputFormat": "csv"
+            },
+            "local" : {
+              "location" : "/max/proddata",
+              "writable" : true,
+              "defaultInputFormat" : "json"
+            }
+          }
+        }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/007-mongo-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect/007-mongo-plugin.md b/_docs/connect/007-mongo-plugin.md
deleted file mode 100644
index 3ec4fdf..0000000
--- a/_docs/connect/007-mongo-plugin.md
+++ /dev/null
@@ -1,167 +0,0 @@
----
-title: "MongoDB Plugin for Apache Drill"
-parent: "Connect to Data Sources"
----
-## Overview
-
-You can leverage the power of Apache Drill to query data without any upfront
-schema definitions. Drill enables you to create an architecture that works
-with nested and dynamic schemas, making it the perfect SQL query tool to use
-on NoSQL databases, such as MongoDB.
-
-As of Apache Drill 0.6, you can configure MongoDB as a Drill data source.
-Drill provides a mongodb format plugin to connect to MongoDB, and run queries
-on the data using ANSI SQL.
-
-This tutorial assumes that you have Drill installed locally (embedded mode),
-as well as MongoDB. Examples in this tutorial use zip code aggregation data
-provided by MongDB. Before You Begin provides links to download tools and data
-used throughout the tutorial.
-
-**Note:** A local instance of Drill is used in this tutorial for simplicity. You can also run Drill and MongoDB together in distributed mode.
-
-### Before You Begin
-
-Before you can query MongoDB with Drill, you must have Drill and MongoDB
-installed on your machine. You may also want to import the MongoDB zip code
-data to run the example queries on your machine.
-
-  1. [Install Drill](/drill/docs/installing-drill-in-embedded-mode), if you do not already have it installed on your machine.
-  2. [Install MongoDB](http://docs.mongodb.org/manual/installation), if you do not already have it installed on your machine.
-  3. [Import the MongoDB zip code sample data set](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set). You can use Mongo Import to get the data. 
-
-## Configuring MongoDB
-
-Start Drill and configure the MongoDB storage plugin instance in the Drill Web
-UI to connect to Drill. Drill must be running in order to access the Web UI.
-
-Complete the following steps to configure MongoDB as a data source for Drill:
-
-  1. Navigate to `<drill_installation_directory>/drill-<version>,` and enter the following command to invoke SQLLine and start Drill:
-
-        bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
-     When Drill starts, the following prompt appears: `0: jdbc:drill:zk=local>`
-
-     Do not enter any commands. You will return to the command prompt after
-completing the configuration in the Drill Web UI.
-  2. Open a browser window, and navigate to the Drill Web UI at `http://localhost:8047`.
-  3. In the navigation bar, click **Storage**.
-  4. Under Disabled Storage Plugins, select **Update** next to the `mongo` instance if the instance exists. If the instance does not exist, create an instance for MongoDB.
-  5. In the Configuration window, verify that `"enabled"` is set to ``"true."``
-
-     **Example**
-     
-        {
-          "type": "mongo",
-          "connection": "mongodb://localhost:27017/",
-          "enabled": true
-        }
-
-     **Note:** 27017 is the default port for `mongodb` instances. 
-  6. Click **Enable** to enable the instance, and save the configuration.
-  7. Navigate back to the Drill command line so you can query MongoDB.
-
-## Querying MongoDB
-
-You can issue the `SHOW DATABASES `command to see a list of databases from all
-Drill data sources, including MongoDB. If you downloaded the zip codes file,
-you should see `mongo.zipdb` in the results.
-
-    0: jdbc:drill:zk=local> SHOW DATABASES;
-    +-------------+
-    | SCHEMA_NAME |
-    +-------------+
-    | dfs.default |
-    | dfs.root    |
-    | dfs.tmp     |
-    | sys         |
-    | mongo.zipdb |
-    | cp.default  |
-    | INFORMATION_SCHEMA |
-    +-------------+
-
-If you want all queries that you submit to run on `mongo.zipdb`, you can issue
-the `USE` command to change schema.
-
-### Example Queries
-
-The following example queries are included for reference. However, you can use
-the SQL power of Apache Drill directly on MongoDB. For more information,
-refer to the [Apache Drill SQL
-Reference](/drill/docs/sql-reference).
-
-**Example 1: View mongo.zipdb Dataset**
-
-    0: jdbc:drill:zk=local> SELECT * FROM zipcodes LIMIT 10;
-    +------------+
-    |     *      |
-    +------------+
-    | { "city" : "AGAWAM" , "loc" : [ -72.622739 , 42.070206] , "pop" : 15338 , "state" : "MA"} |
-    | { "city" : "CUSHMAN" , "loc" : [ -72.51565 , 42.377017] , "pop" : 36963 , "state" : "MA"} |
-    | { "city" : "BARRE" , "loc" : [ -72.108354 , 42.409698] , "pop" : 4546 , "state" : "MA"} |
-    | { "city" : "BELCHERTOWN" , "loc" : [ -72.410953 , 42.275103] , "pop" : 10579 , "state" : "MA"} |
-    | { "city" : "BLANDFORD" , "loc" : [ -72.936114 , 42.182949] , "pop" : 1240 , "state" : "MA"} |
-    | { "city" : "BRIMFIELD" , "loc" : [ -72.188455 , 42.116543] , "pop" : 3706 , "state" : "MA"} |
-    | { "city" : "CHESTER" , "loc" : [ -72.988761 , 42.279421] , "pop" : 1688 , "state" : "MA"} |
-    | { "city" : "CHESTERFIELD" , "loc" : [ -72.833309 , 42.38167] , "pop" : 177 , "state" : "MA"} |
-    | { "city" : "CHICOPEE" , "loc" : [ -72.607962 , 42.162046] , "pop" : 23396 , "state" : "MA"} |
-    | { "city" : "CHICOPEE" , "loc" : [ -72.576142 , 42.176443] , "pop" : 31495 , "state" : "MA"} |
-
-**Example 2: Aggregation**
-
-    0: jdbc:drill:zk=local> select state,city,avg(pop)
-    +------------+------------+------------+
-    |   state    |    city    |   EXPR$2   |
-    +------------+------------+------------+
-    | MA         | AGAWAM     | 15338.0    |
-    | MA         | CUSHMAN    | 36963.0    |
-    | MA         | BARRE      | 4546.0     |
-    | MA         | BELCHERTOWN | 10579.0   |
-    | MA         | BLANDFORD  | 1240.0     |
-    | MA         | BRIMFIELD  | 3706.0     |
-    | MA         | CHESTER    | 1688.0     |
-    | MA         | CHESTERFIELD | 177.0    |
-    | MA         | CHICOPEE   | 27445.5    |
-    | MA         | WESTOVER AFB | 1764.0   |
-    +------------+------------+------------+
-
-**Example 3: Nested Data Column Array**
-
-    0: jdbc:drill:zk=local> SELECT loc FROM zipcodes LIMIT 10;
-    +------------------------+
-    |    loc                 |
-    +------------------------+
-    | [-72.622739,42.070206] |
-    | [-72.51565,42.377017]  |
-    | [-72.108354,42.409698] |
-    | [-72.410953,42.275103] |
-    | [-72.936114,42.182949] |
-    | [-72.188455,42.116543] |
-    | [-72.988761,42.279421] |
-    | [-72.833309,42.38167]  |
-    | [-72.607962,42.162046] |
-    | [-72.576142,42.176443] |
-    +------------------------+
-        
-    0: jdbc:drill:zk=local> SELECT loc[0] FROM zipcodes LIMIT 10;
-    +------------+
-    |   EXPR$0   |
-    +------------+
-    | -72.622739 |
-    | -72.51565  |
-    | -72.108354 |
-    | -72.410953 |
-    | -72.936114 |
-    | -72.188455 |
-    | -72.988761 |
-    | -72.833309 |
-    | -72.607962 |
-    | -72.576142 |
-    +------------+
-
-## Using ODBC/JDBC Drivers
-
-You can leverage the power of Apache Drill to query MongoDB through standard
-BI tools, such as Tableau and SQuirreL.
-
-For information about Drill ODBC and JDBC drivers, refer to [Drill Interfaces](/drill/docs/odbc-jdbc-interfaces).

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/008-mapr-db-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect/008-mapr-db-plugin.md b/_docs/connect/008-mapr-db-plugin.md
deleted file mode 100644
index f923bce..0000000
--- a/_docs/connect/008-mapr-db-plugin.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: "MapR-DB Plugin for Apache Drill"
-parent: "Connect to Data Sources"
----
-Drill includes a `maprdb` format plugin for MapR-DB that is defined within the
-default `dfs` storage plugin instance when you install Drill from the `mapr-drill` package on a MapR node. The `maprdb` format plugin improves the
-estimated number of rows that Drill uses to plan a query. It also enables you
-to query tables like you would query files in a file system because MapR-DB
-and MapR-FS share the same namespace.
-
-You can query tables stored across multiple directories. You do not need to
-create a table mapping to a directory before you query a table in the
-directory. You can select from any table in any directory the same way you
-would select from files in MapR-FS, using the same syntax.
-
-Instead of including the name of a file, you include the table name in the
-query.
-
-**Example**
-
-    SELECT * FROM mfs.`/users/max/mytable`;
-
-Drill stores the `maprdb` format plugin in the `dfs` storage plugin instance,
-which you can view in the Drill Web UI. You can access the Web UI at
-[http://localhost:8047/storage](http://localhost:8047/storage). Click **Update** next to the `dfs` instance
-in the Web UI to view the configuration for the `dfs` instance.
-
-The following image shows a portion of the configuration with the `maprdb`
-format plugin for the `dfs` instance:
-
-![drill query flow]({{ site.baseurl }}/docs/img/18.png)

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/008-mongo-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect/008-mongo-plugin.md b/_docs/connect/008-mongo-plugin.md
new file mode 100644
index 0000000..5c5b33d
--- /dev/null
+++ b/_docs/connect/008-mongo-plugin.md
@@ -0,0 +1,167 @@
+---
+title: "MongoDB Plugin for Apache Drill"
+parent: "Connect to a Data Source"
+---
+## Overview
+
+You can leverage the power of Apache Drill to query data without any upfront
+schema definitions. Drill enables you to create an architecture that works
+with nested and dynamic schemas, making it the perfect SQL query tool to use
+on NoSQL databases, such as MongoDB.
+
+As of Apache Drill 0.6, you can configure MongoDB as a Drill data source.
+Drill provides a mongodb storage plugin to connect to MongoDB, and run queries
+on the data using ANSI SQL.
+
+This tutorial assumes that you have Drill installed locally (embedded mode),
+as well as MongoDB. Examples in this tutorial use zip code aggregation data
+provided by MongoDB. Before You Begin provides links to download tools and data
+used throughout the tutorial.
+
+**Note:** A local instance of Drill is used in this tutorial for simplicity. You can also run Drill and MongoDB together in distributed mode.
+
+### Before You Begin
+
+Before you can query MongoDB with Drill, you must have Drill and MongoDB
+installed on your machine. You may also want to import the MongoDB zip code
+data to run the example queries on your machine.
+
+  1. [Install Drill](/drill/docs/installing-drill-in-embedded-mode), if you do not already have it installed on your machine.
+  2. [Install MongoDB](http://docs.mongodb.org/manual/installation), if you do not already have it installed on your machine.
+  3. [Import the MongoDB zip code sample data set](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set). You can use Mongo Import to get the data. 
+
+## Configuring MongoDB
+
+Start Drill and configure the MongoDB storage plugin instance in the Drill Web
+UI to connect to Drill. Drill must be running in order to access the Web UI.
+
+Complete the following steps to configure MongoDB as a data source for Drill:
+
+  1. Navigate to `<drill_installation_directory>/drill-<version>`, and enter the following command to invoke SQLLine and start Drill:
+
+        bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
+     When Drill starts, the following prompt appears: `0: jdbc:drill:zk=local>`
+
+     Do not enter any commands. You will return to the command prompt after
+completing the configuration in the Drill Web UI.
+  2. Open a browser window, and navigate to the Drill Web UI at `http://localhost:8047`.
+  3. In the navigation bar, click **Storage**.
+  4. Under Disabled Storage Plugins, select **Update** next to the `mongo` instance if the instance exists. If the instance does not exist, create an instance for MongoDB.
+  5. In the Configuration window, verify that `"enabled"` is set to `true`.
+
+     **Example**
+     
+        {
+          "type": "mongo",
+          "connection": "mongodb://localhost:27017/",
+          "enabled": true
+        }
+
+     **Note:** 27017 is the default port for `mongodb` instances. 
+  6. Click **Enable** to enable the instance, and save the configuration.
+  7. Navigate back to the Drill command line so you can query MongoDB.
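+
+The `connection` attribute in step 5 uses a standard MongoDB connection string. For a distributed MongoDB deployment, the string can list multiple hosts; the host names and replica set name in this sketch are assumptions for illustration:
+
+    {
+      "type": "mongo",
+      "connection": "mongodb://host1:27017,host2:27017/?replicaSet=rs0",
+      "enabled": true
+    }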
+
+## Querying MongoDB
+
+You can issue the `SHOW DATABASES` command to see a list of databases from all
+Drill data sources, including MongoDB. If you downloaded the zip codes file,
+you should see `mongo.zipdb` in the results.
+
+    0: jdbc:drill:zk=local> SHOW DATABASES;
+    +-------------+
+    | SCHEMA_NAME |
+    +-------------+
+    | dfs.default |
+    | dfs.root    |
+    | dfs.tmp     |
+    | sys         |
+    | mongo.zipdb |
+    | cp.default  |
+    | INFORMATION_SCHEMA |
+    +-------------+
+
+If you want all queries that you submit to run on `mongo.zipdb`, you can issue
+the `USE` command to change the schema.
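+
+For example:
+
+    0: jdbc:drill:zk=local> USE mongo.zipdb;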
+
+### Example Queries
+
+The following example queries are included for reference. However, you can use
+the full SQL power of Apache Drill directly on MongoDB. For more information,
+refer to the [SQL
+Reference](/drill/docs/sql-reference).
+
+**Example 1: View mongo.zipdb Dataset**
+
+    0: jdbc:drill:zk=local> SELECT * FROM zipcodes LIMIT 10;
+    +------------+
+    |     *      |
+    +------------+
+    | { "city" : "AGAWAM" , "loc" : [ -72.622739 , 42.070206] , "pop" : 15338 , "state" : "MA"} |
+    | { "city" : "CUSHMAN" , "loc" : [ -72.51565 , 42.377017] , "pop" : 36963 , "state" : "MA"} |
+    | { "city" : "BARRE" , "loc" : [ -72.108354 , 42.409698] , "pop" : 4546 , "state" : "MA"} |
+    | { "city" : "BELCHERTOWN" , "loc" : [ -72.410953 , 42.275103] , "pop" : 10579 , "state" : "MA"} |
+    | { "city" : "BLANDFORD" , "loc" : [ -72.936114 , 42.182949] , "pop" : 1240 , "state" : "MA"} |
+    | { "city" : "BRIMFIELD" , "loc" : [ -72.188455 , 42.116543] , "pop" : 3706 , "state" : "MA"} |
+    | { "city" : "CHESTER" , "loc" : [ -72.988761 , 42.279421] , "pop" : 1688 , "state" : "MA"} |
+    | { "city" : "CHESTERFIELD" , "loc" : [ -72.833309 , 42.38167] , "pop" : 177 , "state" : "MA"} |
+    | { "city" : "CHICOPEE" , "loc" : [ -72.607962 , 42.162046] , "pop" : 23396 , "state" : "MA"} |
+    | { "city" : "CHICOPEE" , "loc" : [ -72.576142 , 42.176443] , "pop" : 31495 , "state" : "MA"} |
+
+**Example 2: Aggregation**
+
+    0: jdbc:drill:zk=local> select state,city,avg(pop) from zipcodes group by state,city;
+    +------------+------------+------------+
+    |   state    |    city    |   EXPR$2   |
+    +------------+------------+------------+
+    | MA         | AGAWAM     | 15338.0    |
+    | MA         | CUSHMAN    | 36963.0    |
+    | MA         | BARRE      | 4546.0     |
+    | MA         | BELCHERTOWN | 10579.0   |
+    | MA         | BLANDFORD  | 1240.0     |
+    | MA         | BRIMFIELD  | 3706.0     |
+    | MA         | CHESTER    | 1688.0     |
+    | MA         | CHESTERFIELD | 177.0    |
+    | MA         | CHICOPEE   | 27445.5    |
+    | MA         | WESTOVER AFB | 1764.0   |
+    +------------+------------+------------+
+
+**Example 3: Nested Data Column Array**
+
+    0: jdbc:drill:zk=local> SELECT loc FROM zipcodes LIMIT 10;
+    +------------------------+
+    |    loc                 |
+    +------------------------+
+    | [-72.622739,42.070206] |
+    | [-72.51565,42.377017]  |
+    | [-72.108354,42.409698] |
+    | [-72.410953,42.275103] |
+    | [-72.936114,42.182949] |
+    | [-72.188455,42.116543] |
+    | [-72.988761,42.279421] |
+    | [-72.833309,42.38167]  |
+    | [-72.607962,42.162046] |
+    | [-72.576142,42.176443] |
+    +------------------------+
+        
+    0: jdbc:drill:zk=local> SELECT loc[0] FROM zipcodes LIMIT 10;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | -72.622739 |
+    | -72.51565  |
+    | -72.108354 |
+    | -72.410953 |
+    | -72.936114 |
+    | -72.188455 |
+    | -72.988761 |
+    | -72.833309 |
+    | -72.607962 |
+    | -72.576142 |
+    +------------+
+
+## Using ODBC/JDBC Drivers
+
+You can leverage the power of Apache Drill to query MongoDB through standard
+BI tools, such as Tableau and SQuirreL.
+
+For information about Drill ODBC and JDBC drivers, refer to [Drill Interfaces](/drill/docs/odbc-jdbc-interfaces).

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/connect/009-mapr-db-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect/009-mapr-db-plugin.md b/_docs/connect/009-mapr-db-plugin.md
new file mode 100644
index 0000000..a0582f3
--- /dev/null
+++ b/_docs/connect/009-mapr-db-plugin.md
@@ -0,0 +1,30 @@
+---
+title: "MapR-DB Format"
+parent: "Connect to a Data Source"
+---
+Drill includes a `maprdb` format for reading MapR-DB data. The `dfs` storage plugin defines the format when you install Drill from the `mapr-drill` package on a MapR node. The `maprdb` format plugin improves the
+accuracy of the estimated row count that Drill uses to plan a query. It also enables you
+to query tables like you would query files in a file system because MapR-DB
+and MapR-FS share the same namespace.
+
+You can query tables stored across multiple directories. You do not need to
+create a table mapping to a directory before you query a table in the
+directory. You can select from any table in any directory the same way you
+would select from files in MapR-FS, using the same syntax.
+
+Instead of including the name of a file, you include the table name in the
+query.
+
+**Example**
+
+    SELECT * FROM mfs.`/users/max/mytable`;
+
+Drill stores the `maprdb` format plugin in the `dfs` storage plugin instance,
+which you can view in the Drill Web UI. You can access the Web UI at
+[http://localhost:8047/storage](http://localhost:8047/storage). Click **Update** next to the `dfs` instance
+in the Web UI to view the configuration for the `dfs` instance.
+
+The following image shows a portion of the configuration with the `maprdb`
+format plugin for the `dfs` instance:
+
+![drill query flow]({{ site.baseurl }}/docs/img/18.png)

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/img/StoragePluginConfig.png
----------------------------------------------------------------------
diff --git a/_docs/img/StoragePluginConfig.png b/_docs/img/StoragePluginConfig.png
deleted file mode 100644
index e57fd38..0000000
Binary files a/_docs/img/StoragePluginConfig.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/img/data-sources-schemachg.png
----------------------------------------------------------------------
diff --git a/_docs/img/data-sources-schemachg.png b/_docs/img/data-sources-schemachg.png
new file mode 100644
index 0000000..c94bbd4
Binary files /dev/null and b/_docs/img/data-sources-schemachg.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/img/datasources-json-bracket.png
----------------------------------------------------------------------
diff --git a/_docs/img/datasources-json-bracket.png b/_docs/img/datasources-json-bracket.png
new file mode 100644
index 0000000..e813568
Binary files /dev/null and b/_docs/img/datasources-json-bracket.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/img/datasources-json.png
----------------------------------------------------------------------
diff --git a/_docs/img/datasources-json.png b/_docs/img/datasources-json.png
new file mode 100644
index 0000000..85e0363
Binary files /dev/null and b/_docs/img/datasources-json.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/img/get2kno_plugin.png
----------------------------------------------------------------------
diff --git a/_docs/img/get2kno_plugin.png b/_docs/img/get2kno_plugin.png
new file mode 100644
index 0000000..c87d82e
Binary files /dev/null and b/_docs/img/get2kno_plugin.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/img/json-workaround.png
----------------------------------------------------------------------
diff --git a/_docs/img/json-workaround.png b/_docs/img/json-workaround.png
index f9f99dd..5f15e10 100644
Binary files a/_docs/img/json-workaround.png and b/_docs/img/json-workaround.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/img/plugin-default.png
----------------------------------------------------------------------
diff --git a/_docs/img/plugin-default.png b/_docs/img/plugin-default.png
new file mode 100644
index 0000000..e666887
Binary files /dev/null and b/_docs/img/plugin-default.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/install/001-drill-in-10.md
----------------------------------------------------------------------
diff --git a/_docs/install/001-drill-in-10.md b/_docs/install/001-drill-in-10.md
index eddaf7e..3f74859 100644
--- a/_docs/install/001-drill-in-10.md
+++ b/_docs/install/001-drill-in-10.md
@@ -89,7 +89,7 @@ commands. SQLLine is used as the shell for Drill. Drill follows the ANSI SQL:
 
 You must have the following software installed on your machine to run Drill:
 
-<table ><tbody><tr><td ><strong>Software</strong></td><td ><strong>Description</strong></td></tr><tr><td ><a href="http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html" class="external-link" rel="nofollow">Oracle JDK version 7</a></td><td >A set of programming tools for developing Java applications.</td></tr></tbody></table></div>
+<table ><tbody><tr><td ><strong>Software</strong></td><td ><strong>Description</strong></td></tr><tr><td ><a href="http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html" class="external-link" rel="nofollow">Oracle JDK version 7</a></td><td >A set of programming tools for developing Java applications.</td></tr></tbody></table>
 
   
 ### Prerequisite Validation
@@ -355,7 +355,7 @@ Now that you have an idea about what Drill can do, you might want to:
 
   * [Deploy Drill in a clustered environment.](/drill/docs/deploying-apache-drill-in-a-clustered-environment)
   * [Configure storage plugins to connect Drill to your data sources](/drill/docs/connect-to-data-sources).
-  * Query [Hive](/drill/docs/querying-hive) and [HBase](/drill/docs/registering-hbase) data.
+  * Query [Hive](/drill/docs/querying-hive) and [HBase](/docs/hbase-storage-plugin) data.
   * [Query Complex Data](/drill/docs/querying-complex-data)
   * [Query Plain Text Files](/drill/docs/querying-plain-text-files)
 

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/sql-ref/data-types/001-date.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/data-types/001-date.md b/_docs/sql-ref/data-types/001-date.md
index 94f3bf7..ef20bc2 100644
--- a/_docs/sql-ref/data-types/001-date.md
+++ b/_docs/sql-ref/data-types/001-date.md
@@ -21,11 +21,11 @@ supported date and time formats as literals:
     select date '2008-2-23', timestamp '2008-1-23 14:24:23', time '10:20:30' from dfs.`/tmp/input.json`;
 
 The following query provides an example where `VARCHAR` data in a file is
-`CAST()` to supported `date `and `time` formats:
+`CAST()` to supported `date` and `time` formats:
 
     select cast(col_A as date), cast(col_B as timestamp), cast(col_C as time) from dfs.`/tmp/dates.json`;
 
-`Date`, t`imestamp`, and` time` data types store values in `UTC`. Currently,
+`Date`, `timestamp`, and `time` data types store values in `UTC`. Currently,
 Apache Drill does not support `timestamp` with time zone.
 
 ## Date
@@ -135,14 +135,14 @@ supports the `interval day` data type in the following format:
 
 The following table provides examples for `interval day` data type:
 
-<table><tbody><tr><th>Use</th><th>Example</th></tr><tr><td valign="top">Literal</td><td valign="top"><code><span style="color: rgb(0,0,0);">select interval '1 10:20:30.123' day to second from dfs.`/tmp/input.json`;<br /></span><span style="color: rgb(0,0,0);">select interval '1 10' day to hour from dfs.`/tmp/input.json`;<br /></span><span style="color: rgb(0,0,0);">select interval '10' day  from dfs.`/tmp/input.json`;<br /></span><span style="color: rgb(0,0,0);">select interval '10' hour  from dfs.`/tmp/input.json`;</span></code><code><span style="color: rgb(0,0,0);">select interval '10.999' second  from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code>JSON</code> Input</td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;P1DT10H20M30S&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;P1DT10H20M30.123S&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot
 ;P1D&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;PT10H&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;PT10.10S&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;PT20S&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;PT10H10S&quot;}</span></code></td></tr><tr><td colspan="1" valign="top"><code>CAST</code> from <code>VARCHAR</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select cast(col as interval day) from dfs.`/tmp/input.json`;</span></code></td></tr></tbody></table></div> 
+<table ><tbody><tr><th >Use</th><th >Example</th></tr><tr><td valign="top">Literal</td><td valign="top"><code><span style="color: rgb(0,0,0);">select interval '1 10:20:30.123' day to second from dfs.`/tmp/input.json`;<br /></span><span style="color: rgb(0,0,0);">select interval '1 10' day to hour from dfs.`/tmp/input.json`;<br /></span><span style="color: rgb(0,0,0);">select interval '10' day  from dfs.`/tmp/input.json`;<br /></span><span style="color: rgb(0,0,0);">select interval '10' hour  from dfs.`/tmp/input.json`;</span></code><code><span style="color: rgb(0,0,0);">select interval '10.999' second  from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code>JSON</code> Input</td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;P1DT10H20M30S&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;P1DT10H20M30.123S&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &q
 uot;P1D&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;PT10H&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;PT10.10S&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;PT20S&quot;}<br /></span><span style="color: rgb(0,0,0);">{&quot;col&quot; : &quot;PT10H10S&quot;}</span></code></td></tr><tr><td colspan="1" valign="top"><code>CAST</code> from <code>VARCHAR</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select cast(col as interval day) from dfs.`/tmp/input.json`;</span></code></td></tr></tbody></table> 
 
 ## Literal
 
 The following table provides a list of `date/time` literals that Drill
 supports with examples of each:
 
-<table ><tbody><tr><th >Format</th><th colspan="1" >Interpretation</th><th >Example</th></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '1 10:20:30.123' day to second</span></code></td><td colspan="1" valign="top"><code>1 day, 10 hours, 20 minutes, 30 seconds, and 123 thousandths of a second</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '1 10:20:30.123' day to second from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '1 10' day to hour</span></code></td><td colspan="1" valign="top"><code>1 day 10 hours</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '1 10' day to hour from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '10' day</span></code></td><td colspan="1" valign="top"><code>10 days</code
 ></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '10' day from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '10' hour</span></code></td><td colspan="1" valign="top"><code>10 hours</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '10' hour from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '10.999' second</span></code></td><td colspan="1" valign="top"><code>10.999 seconds</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '10.999' second from dfs.`/tmp/input.json`; </span></code></td></tr></tbody></table></div>
+<table ><tbody><tr><th >Format</th><th colspan="1" >Interpretation</th><th >Example</th></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '1 10:20:30.123' day to second</span></code></td><td colspan="1" valign="top"><code>1 day, 10 hours, 20 minutes, 30 seconds, and 123 thousandths of a second</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '1 10:20:30.123' day to second from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '1 10' day to hour</span></code></td><td colspan="1" valign="top"><code>1 day 10 hours</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '1 10' day to hour from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '10' day</span></code></td><td colspan="1" valign="top"><code>10 days</code
 ></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '10' day from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '10' hour</span></code></td><td colspan="1" valign="top"><code>10 hours</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '10' hour from dfs.`/tmp/input.json`;</span></code></td></tr><tr><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">interval '10.999' second</span></code></td><td colspan="1" valign="top"><code>10.999 seconds</code></td><td colspan="1" valign="top"><code><span style="color: rgb(0,0,0);">select interval '10.999' second from dfs.`/tmp/input.json`; </span></code></td></tr></tbody></table>
 
 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/0119fdde/_docs/tutorial/002-get2kno-sb.md
----------------------------------------------------------------------
diff --git a/_docs/tutorial/002-get2kno-sb.md b/_docs/tutorial/002-get2kno-sb.md
index 9b11b9d..3ed1929 100644
--- a/_docs/tutorial/002-get2kno-sb.md
+++ b/_docs/tutorial/002-get2kno-sb.md
@@ -2,53 +2,56 @@
 title: "Getting to Know the Drill Sandbox"
 parent: "Apache Drill Tutorial"
 ---
-This section describes the configuration of the Apache Drill system that you
-have installed and introduces the overall use case for the tutorial.
+This section describes the sandbox environment and how to connect to it for this tutorial. After [installing the Drill sandbox](/docs/installing-the-apache-drill-sandbox) and starting the sandbox, you can open another terminal window (Linux) or Command Prompt (Windows) and use secure shell (ssh) to connect to the VM, assuming ssh is installed. Use the following login name and password: mapr/mapr. For
+example:
 
-# Storage Plugins Overview
+    $ ssh mapr@localhost -p 2222
+    Password:
+    Last login: Mon Sep 15 13:46:08 2014 from 10.250.0.28
+    Welcome to your Mapr Demo virtual machine.
 
-The Hadoop cluster within the sandbox is set up with MapR-FS, MapR-DB, and
-Hive, which all serve as data sources for Drill in this tutorial. Before you
-can run queries against these data sources, Drill requires each one to be
-configured as a storage plugin. A storage plugin defines the abstraction on
-the data sources for Drill to talk to and provides interfaces to read/write
-and get metadata from the data source. Each storage plugin also exposes
-optimization rules for Drill to leverage for efficient query execution.
+Using the secure shell instead of the VM interface has advantages: you can copy and paste commands from the tutorial and avoid mouse-control problems.
 
-Take a look at the pre-configured storage plugins by opening the Drill Web UI.
+Drill includes SQLLine, a JDBC utility for connecting to relational databases and executing SQL commands. After logging into the sandbox, use the `sqlline` command to start SQLLine for executing Drill queries.
 
-Feel free to skip this section and jump directly to the queries: [Lesson 1:
-Learn About the Data
-Set](/drill/docs/lession-1-learn-about-the-data-set)
+    [mapr@maprdemo ~]# sqlline
+    sqlline version 1.1.6
+    0: jdbc:drill:>
 
-  * Launch a web browser and go to: `http://<IP address of the sandbox>:8047`
-  * Go to the Storage tab
-  * Open the configured storage plugins one at a time by clicking Update
-  * You will see the following plugins configured.
+[Starting SQLLine outside the sandbox](/docs/starting-stopping-drill) for use with Drill requires entering more options than are shown here. 
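+
+For reference, an embedded-mode startup command outside the sandbox looks like the following sketch; the connection string and credentials vary by installation:
+
+    bin/sqlline -u jdbc:drill:zk=local -n admin -p admin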
 
-## maprdb
+In this tutorial you query a number of data sets, including Hive tables, HBase tables, and files on the file system, such as CSV, JSON, and Parquet files. To access these diverse data sources, you connect Drill to storage plugins. 
 
-A storage plugin configuration for MapR-DB in the sandbox. Drill uses a single
-storage plugin for connecting to HBase as well as MapR-DB, which is an
-enterprise grade in-Hadoop NoSQL database. In addition to the following brief example, see the [Registering HBase](/drill/docs/registering-hbase) for more
-information on how to configure Drill to query HBase.
+## Storage Plugin Overview
+This section describes storage plugins included in the sandbox. For general information about Drill storage plugins, see ["Connect to a Data Source"](/docs/connect-to-data-sources).
+Take a look at the pre-configured storage plugins for the sandbox by opening the Storage tab in the Drill Web UI. Launch a web browser and go to: `http://<IP address of the sandbox>:8047/storage`. For example:
 
-    {
-      "type" : "hbase",
-      "enabled" : true,
-      "config" : {
-        "hbase.table.namespace.mappings" : "*:/tables"
-      }
-     }
+    http://localhost:8047/storage
+
+The control panel for managing storage plugins appears.
+
+![sandbox plugin]({{ site.baseurl }}/docs/img/get2kno_plugin.png)
 
-## dfs
+You see the following storage plugin controls:
 
-This is a storage plugin configuration for the MapR file system (MapR-FS) in
-the sandbox. The connection attribute indicates the type of distributed file
-system: in this case, MapR-FS. Drill can work with any distributed system,
-including HDFS, S3, and so on.
+* cp
+* dfs
+* hive
+* maprdb
+* hbase
+* mongo
 
-The configuration also includes a set of workspaces; each one represents a
+Click **Update** to look at a configuration. 
+
+In some cases, the storage plugin defined for this tutorial differs from the [default storage plugin](/docs/connect-to-data-sources) of the same name in a Drill installation. Typically you create a storage plugin or customize an existing one for analyzing a particular data source. 
+
+The tutorial uses the `dfs`, `hive`, `maprdb`, and `hbase` storage plugins. 
+
+### dfs
+
+The `dfs` storage plugin in the sandbox configures a connection to the MapR file system (MapR-FS). 
+
+The `dfs` storage plugin configuration in the sandbox also includes a set of workspaces; each one represents a
 location in MapR-FS:
 
   * root: access to the root file system location
@@ -56,20 +59,7 @@ location in MapR-FS:
   * logs: access to flat (non-nested) JSON log data in the logs directory and its subdirectories
   * views: a workspace for creating views
 
-A workspace in Drill is a location where users can easily access a specific
-set of data and collaborate with each other by sharing artifacts. Users can
-create as many workspaces as they need within Drill.
-
-Each workspace can also be configured as “writable” or not, which indicates
-whether users can write data to this location and defines the storage format
-in which the data will be written (parquet, csv, json). These attributes
-become relevant when you explore SQL commands, especially CREATE TABLE
-AS (CTAS) and CREATE VIEW.
-
-Drill can query files and directories directly and can detect the file formats
-based on the file extension or the first few bits of data within the file.
-However, additional information around formats is required for Drill, such as
-delimiters for text files, which are specified in the “formats” section below.
+The `dfs` definition also includes format definitions, such as the delimiter for CSV files and a default input format for each workspace.
 
     {
       "type": "file",
@@ -79,56 +69,46 @@ delimiters for text files, which are specified in the “formats” section belo
         "root": {
           "location": "/mapr/demo.mapr.com/data",
           "writable": false,
-          "storageformat": null
+          "defaultinputformat": null
         },
         "clicks": {
           "location": "/mapr/demo.mapr.com/data/nested",
           "writable": true,
-          "storageformat": "parquet"
+          "defaultinputformat": "parquet"
         },
-        "logs": {
-          "location": "/mapr/demo.mapr.com/data/flat",
-          "writable": true,
-          "storageformat": "parquet"
-        },
-        "views": {
-          "location": "/mapr/demo.mapr.com/data/views",
-          "writable": true,
-          "storageformat": "parquet"
-     },
+     . . .
      "formats": {
-       "psv": {
-         "type": "text",
-         "extensions": [
-           "tbl"
-         ],
-         "delimiter": "|"
-     },
-     "csv": {
-       "type": "text",
-       "extensions": [
-         "csv"
-       ],
-       "delimiter": ","
-     },
-     "tsv": {
-       "type": "text",
-       "extensions": [
-         "tsv"
-       ],
-       "delimiter": "\t"
-     },
-     "parquet": {
-       "type": "parquet"
-     },
-     "json": {
+     . . .
+       "csv": {
+          "type": "text",
+          "extensions": [
+            "csv"
+          ],
+         "delimiter": ","
+      },
+     . . .
+      "json": {
        "type": "json"
+       }
+      }
+    }
+
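+With these workspace and format definitions, you can use a workspace name to qualify a path in a query. A sketch; the file path is an assumption for illustration:
+
+    0: jdbc:drill:> SELECT * FROM dfs.clicks.`clicks/clicks.json` LIMIT 2;
+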
+### maprdb
+
+The `maprdb` storage plugin is a configuration for MapR-DB in the sandbox. You use this plugin in the sandbox to query HBase as well as MapR-DB data because the sandbox does not include HBase services. In addition to the following brief example, see [Registering HBase](/docs/hbase-storage-plugin) for more
+information on how to configure Drill to query HBase.
+
+    {
+      "type" : "hbase",
+      "enabled" : true,
+      "config" : {
+        "hbase.table.namespace.mappings" : "*:/tables"
+      }
      }
-    }}
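+
+With this plugin enabled, you query a MapR-DB table by name. A minimal sketch; the table name `customers` is an assumption:
+
+    0: jdbc:drill:> SELECT * FROM maprdb.`customers` LIMIT 5;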
 
-## hive
+### hive
 
-A storage plugin configuration for a Hive data warehouse within the sandbox.
+The `hive` storage plugin is a configuration for a Hive data warehouse within the sandbox.
 Drill connects to the Hive metastore by using the configured metastore thrift
 URI. Metadata for Hive tables is automatically available for users to query.
 
@@ -141,89 +121,20 @@ URI. Metadata for Hive tables is automatically available for users to query.
       }
     }
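+
+Because Drill reads Hive metadata through the metastore, you can switch to the hive schema and list its tables immediately. A minimal sketch:
+
+    0: jdbc:drill:> USE hive;
+    0: jdbc:drill:> SHOW TABLES;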
 
-# Client Application Interfaces
-
-Drill also provides additional application interfaces for the client tools to
-connect and access from Drill. The interfaces include the following.
-
-### ODBC/JDBC drivers
-
-Drill provides ODBC/JDBC drivers to connect from BI tools such as Tableau,
-MicroStrategy, SQUirrel, and Jaspersoft; refer to [Using ODBC to Access Apache
-Drill from BI Tools](/drill/docs/odbc-jdbc-interfaces/using-odbc-to- access-apache-drill-from-bi-tools) and [Using JDBC to Access Apache Drill](/drill/docs/odbc-jdbc-interfaces#using-jdbc-to-access-apache-drill-from-squirrel) to learn
-more.
-
-### SQLLine
-
-SQLLine is a JDBC application that comes packaged with Drill. In order to
-start working with it, you can use the command line on the demo cluster to log
-in as root, then enter `sqlline`. Use `mapr` as the login password. For
-example:
-
-    $ ssh root@localhost -p 2222
-    Password:
-    Last login: Mon Sep 15 13:46:08 2014 from 10.250.0.28
-    Welcome to your Mapr Demo virtual machine.
-    [root@maprdemo ~]# sqlline
-    sqlline version 1.1.6
-    0: jdbc:drill:>
-
-### Drill Web UI
-
-The Drill Web UI is a simple user interface for configuring and manage Apache
-Drill. This UI can be launched from any of the nodes in the Drill cluster. The
-configuration for Drill includes setting up storage plugins that represent the
-data sources on which Drill performs queries. The sandbox comes with storage
-plugins configured for the Hive, HBase, MapR file system, and local file
-system.
-
-Users and developers can get the necessary information for tuning and
-performing diagnostics on queries, such as the list of queries executed in a
-session and detailed query plan profiles for each.
-
-Detailed configuration and management of Drill is out of scope for this
-tutorial.
+## Use Case Overview
 
-The Web interface for Apache Drill also provides a query UI where users can
-submit queries to Drill and observe results. Here is a screen shot of the Web
-UI for Apache Drill:
-
-![drill query flow]({{ site.baseurl }}/docs/img/DrillWebUI.png)
-
-### REST API
-
-Drill provides a simple REST API for the users to query data as well as manage
-the system. The Web UI leverages the REST API to talk to Drill.
-
-This tutorial introduces sample queries that you can run by using SQLLine.
-Note that you can run the queries just as easily by launching the Drill Web
-UI. No additional installation or configuration is required.
-
-# Use Case Overview
-
-As you run through the queries in this tutorial, put yourself in the shoes of
-an analyst with basic SQL skills. Let us imagine that the analyst works for an
-emerging online retail business that accepts purchases from its customers
+This section describes the use case that serves as the basis for the tutorial. Imagine being an analyst with basic SQL skills who works for an
+emerging online retail business. The business accepts purchases from its customers
 through both an established web-based interface and a new mobile application.
 
-The analyst is data-driven and operates mostly on the business side with
-little or no interaction with the IT department. Recently the central IT team
+Your job is data-driven, and you work independently, with little or no interaction with the IT department. Recently, the central IT team
 has implemented a Hadoop-based infrastructure to reduce the cost of the legacy
 database system, and most of the DWH/ETL workload is now handled by
-Hadoop/Hive. The master customer profile information and product catalog are
-managed in MapR-DB, which is a NoSQL database. The IT team has also started
+Hadoop/Hive. MapR-DB, a NoSQL database, manages the master customer profile information and product catalog. The IT team has also started
 acquiring clickstream data that comes from web and mobile applications. This
 data is stored in Hadoop as JSON files.
 
-The analyst has a number of data sources that he could explore, but exploring
-them in isolation is not the way to go. There are some potentially very
-interesting analytical connections between these data sources. For example, it
-would be good to be able to analyze customer records in the clickstream data
-and tie them to the master customer data in MapR DB.
-
-The analyst decides to explore various data sources and he chooses to do that
-by using Apache Drill. Think about the flexibility and analytic capability of
-Apache Drill as you work through the tutorial.
+You have a number of data sources to explore. For example, analyzing customer records in the clickstream data and tying them to the master customer data in MapR-DB might yield interesting analytical connections. You decide to explore these data sources by using Apache Drill, which provides the flexibility and analytic capability this work requires.
 
 # What's Next