Posted to commits@drill.apache.org by ts...@apache.org on 2015/05/01 20:08:01 UTC

[05/50] [abbrv] drill git commit: missed refactoring and DRILL-994

missed refactoring and DRILL-994


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/48269506
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/48269506
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/48269506

Branch: refs/heads/gh-pages
Commit: 4826950622a739dbe9e025f808872404aa83949e
Parents: fdf289b
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Wed Apr 29 15:17:33 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Wed Apr 29 15:17:33 2015 -0700

----------------------------------------------------------------------
 _docs/connect-a-data-source/020-plugin-reg.md   |  24 ---
 .../020-storage-plugin-registration.md          |  24 +++
 _docs/connect-a-data-source/030-plugin-conf.md  | 135 ---------------
 .../030-storage-plugin-configuration.md         |   5 +
 ...storage-plugin-configuration-introduction.md | 133 +++++++++++++++
 .../050-file-system-storage-plugin.md           |  64 +++++++
 _docs/connect-a-data-source/050-reg-fs.md       |  64 -------
 .../060-hbase-storage-plugin.md                 |  37 ++++
 _docs/connect-a-data-source/060-reg-hbase.md    |  37 ----
 .../070-hive-storage-plugin.md                  |  77 +++++++++
 _docs/connect-a-data-source/070-reg-hive.md     |  77 ---------
 _docs/connect-a-data-source/080-default-frmt.md |  69 --------
 .../080-drill-default-input-format.md           |  69 ++++++++
 _docs/connect-a-data-source/090-mongo-plugin.md | 169 -------------------
 .../090-mongodb-plugin-for-apache-drill.md      | 169 +++++++++++++++++++
 .../connect-a-data-source/100-mapr-db-format.md |  34 ++++
 .../connect-a-data-source/100-mapr-db-plugin.md |  34 ----
 17 files changed, 612 insertions(+), 609 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/020-plugin-reg.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/020-plugin-reg.md b/_docs/connect-a-data-source/020-plugin-reg.md
deleted file mode 100644
index 1afe31e..0000000
--- a/_docs/connect-a-data-source/020-plugin-reg.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: "Storage Plugin Registration"
-parent: "Connect a Data Source"
----
-You connect Drill to a file system, Hive, HBase, or other data source using storage plugins. Drill includes a number of storage plugins in the installation. On the Storage tab of the Web UI, you can view, create, reconfigure, and register a storage plugin. To open the Storage tab, go to `http://<IP address>:8047/storage`, where the IP address is that of any node running a Drillbit:
-
-![drill-installed plugins]({{ site.baseurl }}/docs/img/plugin-default.png)
-
-The Drill installation registers the `cp`, `dfs`, `hbase`, `hive`, and `mongo` storage plugin instances by default.
-
-* `cp`
-  Points to a JAR file in the Drill classpath that contains the Transaction Processing Performance Council (TPC) benchmark schema TPC-H that you can query. 
-* `dfs`
-  Points to the local file system on your machine, but you can configure this instance to
-point to any distributed file system, such as a Hadoop or S3 file system. 
-* `hbase`
-   Provides a connection to HBase/M7.
-* `hive`
-   Integrates Drill with the Hive metadata abstraction of files, HBase/M7, and libraries to read data and operate on SerDes and UDFs.
-* `mongo`
-   Provides a connection to MongoDB data.
-
-In the Drill sandbox, the `dfs` storage plugin connects you to the MapR File System (MFS). In a standard Drill installation, `dfs` connects you to the root of your local file system.
-

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/020-storage-plugin-registration.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/020-storage-plugin-registration.md b/_docs/connect-a-data-source/020-storage-plugin-registration.md
new file mode 100644
index 0000000..1afe31e
--- /dev/null
+++ b/_docs/connect-a-data-source/020-storage-plugin-registration.md
@@ -0,0 +1,24 @@
+---
+title: "Storage Plugin Registration"
+parent: "Connect a Data Source"
+---
+You connect Drill to a file system, Hive, HBase, or other data source using storage plugins. Drill includes a number of storage plugins in the installation. On the Storage tab of the Web UI, you can view, create, reconfigure, and register a storage plugin. To open the Storage tab, go to `http://<IP address>:8047/storage`, where the IP address is that of any node running a Drillbit:
+
+![drill-installed plugins]({{ site.baseurl }}/docs/img/plugin-default.png)
+
+The Drill installation registers the `cp`, `dfs`, `hbase`, `hive`, and `mongo` storage plugin instances by default.
+
+* `cp`
+  Points to a JAR file in the Drill classpath that contains the Transaction Processing Performance Council (TPC) benchmark schema TPC-H that you can query. 
+* `dfs`
+  Points to the local file system on your machine, but you can configure this instance to
+point to any distributed file system, such as a Hadoop or S3 file system. 
+* `hbase`
+   Provides a connection to HBase/M7.
+* `hive`
+   Integrates Drill with the Hive metadata abstraction of files, HBase/M7, and libraries to read data and operate on SerDes and UDFs.
+* `mongo`
+   Provides a connection to MongoDB data.
+
+In the Drill sandbox, the `dfs` storage plugin connects you to the MapR File System (MFS). In a standard Drill installation, `dfs` connects you to the root of your local file system.
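+
+For example, assuming the TPC-H sample file `tpch/nation.parquet` is present on the classpath of your Drill build, you can query it through the `cp` instance:
+
+    0: jdbc:drill:> SELECT * FROM cp.`tpch/nation.parquet` LIMIT 5;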
+

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/030-plugin-conf.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/030-plugin-conf.md b/_docs/connect-a-data-source/030-plugin-conf.md
deleted file mode 100644
index 36991ab..0000000
--- a/_docs/connect-a-data-source/030-plugin-conf.md
+++ /dev/null
@@ -1,135 +0,0 @@
----
-title: "Storage Plugin Configuration"
-parent: "Connect a Data Source"
----
-When you add or update storage plugin instances on one Drill node in a Drill
-cluster, Drill broadcasts the information to all of the other Drill nodes
-so that all nodes have identical storage plugin configurations. You do not need to
-restart any of the Drillbits when you add or update a storage plugin instance.
-
-Use the Drill Web UI to update or add a new storage plugin. Launch a web browser, go to: `http://<IP address of the sandbox>:8047`, and then go to the Storage tab. 
-
-To create and configure a new storage plugin:
-
-1. Enter a storage name in New Storage Plugin.
-   Each storage plugin registered with Drill must have a distinct
-name. Names are case-sensitive.
-2. Click Create.  
-3. In Configuration, configure attributes of the storage plugin, if applicable, using JSON formatting. The Storage Plugin Attributes table in the next section describes attributes typically reconfigured by users. 
-4. Click Create.
-
-Click Update to reconfigure an existing, enabled storage plugin.
-
-## Storage Plugin Attributes
-The following diagram of the dfs storage plugin briefly describes options you configure in a typical storage plugin configuration:
-
-![dfs plugin]({{ site.baseurl }}/docs/img/connect-plugin.png)
-
-The following table describes the attributes you configure for storage plugins in more detail than the diagram. 
-
-<table>
-  <tr>
-    <th>Attribute</th>
-    <th>Example Values</th>
-    <th>Required</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>"type"</td>
-    <td>"file"<br>"hbase"<br>"hive"<br>"mongo"</td>
-    <td>yes</td>
-    <td>The storage plugin type name supported by Drill.</td>
-  </tr>
-  <tr>
-    <td>"enabled"</td>
-    <td>true<br>false</td>
-    <td>yes</td>
-    <td>The state of the storage plugin.</td>
-  </tr>
-  <tr>
-    <td>"connection"</td>
-    <td>"classpath:///"<br>"file:///"<br>"mongodb://localhost:27017/"<br>"maprfs:///"</td>
-    <td>implementation-dependent</td>
-    <td>The connection URI to the data source. Drill can work with any distributed file system, such as HDFS and S3, or with files in your local file system.</td>
-  </tr>
-  <tr>
-    <td>"workspaces"</td>
-    <td>null<br>"logs"</td>
-    <td>no</td>
-    <td>One or more unique workspace names, enclosed in double quotation marks. If a workspace is defined more than once, the latest one overrides the previous ones. Not used with the hbase, hive, or mongo storage plugins.</td>
-  </tr>
-  <tr>
-    <td>"workspaces". . . "location"</td>
-    <td>"location": "/"<br>"location": "/tmp"</td>
-    <td>no</td>
-    <td>The path to a directory on the file system.</td>
-  </tr>
-  <tr>
-    <td>"workspaces". . . "writable"</td>
-    <td>true<br>false</td>
-    <td>no</td>
-    <td>Whether or not Drill can write query output, such as CREATE TABLE AS (CTAS) results, to the workspace location.</td>
-  </tr>
-  <tr>
-    <td>"workspaces". . . "defaultInputFormat"</td>
-    <td>null<br>"parquet"<br>"csv"<br>"json"</td>
-    <td>no</td>
-    <td>The format of data Drill reads by default, regardless of extension. Parquet is the default.</td>
-  </tr>
-  <tr>
-    <td>"formats"</td>
-    <td>"psv"<br>"csv"<br>"tsv"<br>"parquet"<br>"json"<br>"maprdb"</td>
-    <td>yes</td>
-    <td>One or more file formats of data Drill can read. Drill can implicitly detect some file formats based on the file extension or the first few bits of data within the file, but you need to configure an option for others.</td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "type"</td>
-    <td>"text"<br>"parquet"<br>"json"<br>"maprdb"</td>
-    <td>yes</td>
-    <td>The type of the format specified. For example, you can define two formats, csv and psv, as type "Text", but having different delimiters. Drill enables the maprdb plugin if you define the maprdb type.</td>
-  </tr>
-  <tr>
-    <td>formats . . . "extensions"</td>
-    <td>["csv"]</td>
-    <td>format-dependent</td>
-    <td>The extensions of the files that Drill can read.</td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "delimiter"</td>
-    <td>"\t"<br>","</td>
-    <td>format-dependent</td>
-    <td>The delimiter used to separate columns in text files such as CSV. Specify a non-printable delimiter in the storage plugin config by using the form \uXXXX, where XXXX is the four-digit hexadecimal ASCII code for the character.</td>
-  </tr>
-</table>
-
-The configuration of other attributes, such as `size.calculator.enabled` in the hbase plugin and `configProps` in the hive plugin, is implementation-dependent and beyond the scope of this document.
-
-Although Drill can work with different file types in the same directory, restricting a Drill workspace to one file type prevents confusion.
-
-## Case-sensitive Names
-As previously mentioned, workspace and storage plugin names are case-sensitive. For example, the following query uses a storage plugin name `dfs` and a workspace name `clicks`. When you refer to `dfs.clicks` in an SQL statement, use the defined case:
-
-    0: jdbc:drill:> USE dfs.clicks;
-
-Using uppercase letters in the query after defining the storage plugin and workspace names in lowercase does not work.
-
-## REST API
-
-Drill provides a REST API that you can use to create a storage plugin. Use an HTTP POST and pass two properties:
-
-* name
-  The plugin name. 
-
-* config
-  The storage plugin definition as you would enter it in the Web UI.
-
-For example, this command creates a plugin named myplugin for reading files of an unknown type located on the root of the file system:
-
-    curl -X POST -H "Content-Type: application/json" -d '{"name":"myplugin", "config": {"type": "file", "enabled": false, "connection": "file:///", "workspaces": { "root": { "location": "/", "writable": false, "defaultInputFormat": null}}, "formats": null}}' http://localhost:8047/storage/myplugin.json
-
-## Bootstrapping a Storage Plugin
-If you need to add a storage plugin to Drill and do not want to use a web browser, you can create a [bootstrap-storage-plugins.json](https://github.com/apache/drill/blob/master/contrib/storage-hbase/src/main/resources/bootstrap-storage-plugins.json) file and include it on the classpath when starting Drill. The storage plugin loads when Drill starts up.
-
-If you configure an HBase storage plugin using the bootstrap-storage-plugins.json file and HBase is not installed, you might experience a delay when executing queries. Configure the [HBase client timeout](http://hbase.apache.org/book.html#config.files) and retry settings in the config block of the HBase plugin instance configuration.
-
-

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/030-storage-plugin-configuration.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/030-storage-plugin-configuration.md b/_docs/connect-a-data-source/030-storage-plugin-configuration.md
new file mode 100644
index 0000000..b75292c
--- /dev/null
+++ b/_docs/connect-a-data-source/030-storage-plugin-configuration.md
@@ -0,0 +1,5 @@
+---
+title: "Storage Plugin Configuration"
+parent: "Connect a Data Source"
+---
+

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/035-storage-plugin-configuration-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/035-storage-plugin-configuration-introduction.md b/_docs/connect-a-data-source/035-storage-plugin-configuration-introduction.md
new file mode 100644
index 0000000..7a13e3e
--- /dev/null
+++ b/_docs/connect-a-data-source/035-storage-plugin-configuration-introduction.md
@@ -0,0 +1,133 @@
+---
+title: "Storage Plugin Configuration Introduction"
+parent: "Storage Plugin Configuration"
+---
+When you add or update storage plugin instances on one Drill node in a Drill
+cluster, Drill broadcasts the information to all of the other Drill nodes
+so that all nodes have identical storage plugin configurations. You do not need to
+restart any of the Drillbits when you add or update a storage plugin instance.
+
+Use the Drill Web UI to update or add a new storage plugin. Launch a web browser, go to: `http://<IP address of the sandbox>:8047`, and then go to the Storage tab. 
+
+To create and configure a new storage plugin:
+
+1. Enter a storage name in New Storage Plugin.
+   Each storage plugin registered with Drill must have a distinct
+name. Names are case-sensitive.
+2. Click Create.  
+3. In Configuration, configure attributes of the storage plugin, if applicable, using JSON formatting. The Storage Plugin Attributes table in the next section describes attributes typically reconfigured by users. 
+4. Click Create.
+
+Click Update to reconfigure an existing, enabled storage plugin.
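+
+For example, a minimal file-type configuration that you might paste into the Configuration field looks like this (a sketch; adapt the connection and workspace location to your environment):
+
+    {
+      "type": "file",
+      "enabled": true,
+      "connection": "file:///",
+      "workspaces": {
+        "root": {
+          "location": "/",
+          "writable": false,
+          "defaultInputFormat": null
+        }
+      },
+      "formats": null
+    }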
+
+## Storage Plugin Attributes
+The following diagram of the dfs storage plugin briefly describes options you configure in a typical storage plugin configuration:
+
+![dfs plugin]({{ site.baseurl }}/docs/img/connect-plugin.png)
+
+The following table describes the attributes you configure for storage plugins in more detail than the diagram. 
+
+<table>
+  <tr>
+    <th>Attribute</th>
+    <th>Example Values</th>
+    <th>Required</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>"type"</td>
+    <td>"file"<br>"hbase"<br>"hive"<br>"mongo"</td>
+    <td>yes</td>
+    <td>The storage plugin type name supported by Drill.</td>
+  </tr>
+  <tr>
+    <td>"enabled"</td>
+    <td>true<br>false</td>
+    <td>yes</td>
+    <td>The state of the storage plugin.</td>
+  </tr>
+  <tr>
+    <td>"connection"</td>
+    <td>"classpath:///"<br>"file:///"<br>"mongodb://localhost:27017/"<br>"maprfs:///"</td>
+    <td>implementation-dependent</td>
+    <td>The connection URI to the data source. Drill can work with any distributed file system, such as HDFS and S3, or with files in your local file system.</td>
+  </tr>
+  <tr>
+    <td>"workspaces"</td>
+    <td>null<br>"logs"</td>
+    <td>no</td>
+    <td>One or more unique workspace names, enclosed in double quotation marks. If a workspace is defined more than once, the latest one overrides the previous ones. Not used with the hbase, hive, or mongo storage plugins.</td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "location"</td>
+    <td>"location": "/"<br>"location": "/tmp"</td>
+    <td>no</td>
+    <td>The path to a directory on the file system.</td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "writable"</td>
+    <td>true<br>false</td>
+    <td>no</td>
+    <td>Whether or not Drill can write query output, such as CREATE TABLE AS (CTAS) results, to the workspace location.</td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "defaultInputFormat"</td>
+    <td>null<br>"parquet"<br>"csv"<br>"json"</td>
+    <td>no</td>
+    <td>The format of data Drill reads by default, regardless of extension. Parquet is the default.</td>
+  </tr>
+  <tr>
+    <td>"formats"</td>
+    <td>"psv"<br>"csv"<br>"tsv"<br>"parquet"<br>"json"<br>"maprdb"</td>
+    <td>yes</td>
+    <td>One or more file formats of data Drill can read. Drill can implicitly detect some file formats based on the file extension or the first few bits of data within the file, but you need to configure an option for others.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "type"</td>
+    <td>"text"<br>"parquet"<br>"json"<br>"maprdb"</td>
+    <td>yes</td>
+    <td>The type of the format specified. For example, you can define two formats, csv and psv, as type "Text", but having different delimiters. Drill enables the maprdb plugin if you define the maprdb type.</td>
+  </tr>
+  <tr>
+    <td>formats . . . "extensions"</td>
+    <td>["csv"]</td>
+    <td>format-dependent</td>
+    <td>The extensions of the files that Drill can read.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "delimiter"</td>
+    <td>"\t"<br>","</td>
+    <td>format-dependent</td>
+    <td>The delimiter used to separate columns in text files such as CSV. Specify a non-printable delimiter in the storage plugin config by using the form \uXXXX, where XXXX is the four-digit hexadecimal ASCII code for the character.</td>
+  </tr>
+</table>
+
+The configuration of other attributes, such as `size.calculator.enabled` in the hbase plugin and `configProps` in the hive plugin, is implementation-dependent and beyond the scope of this document.
+
+Although Drill can work with different file types in the same directory, restricting a Drill workspace to one file type prevents confusion.
+
+## Case-sensitive Names
+As previously mentioned, workspace and storage plugin names are case-sensitive. For example, the following query uses a storage plugin name `dfs` and a workspace name `clicks`. When you refer to `dfs.clicks` in an SQL statement, use the defined case:
+
+    0: jdbc:drill:> USE dfs.clicks;
+
+Using uppercase letters in the query after defining the storage plugin and workspace names in lowercase does not work.
+
+## REST API
+
+Drill provides a REST API that you can use to create a storage plugin. Use an HTTP POST and pass two properties:
+
+* name
+  The plugin name. 
+
+* config
+  The storage plugin definition as you would enter it in the Web UI.
+
+For example, this command creates a plugin named myplugin for reading files of an unknown type located on the root of the file system:
+
+    curl -X POST -H "Content-Type: application/json" -d '{"name":"myplugin", "config": {"type": "file", "enabled": false, "connection": "file:///", "workspaces": { "root": { "location": "/", "writable": false, "defaultInputFormat": null}}, "formats": null}}' http://localhost:8047/storage/myplugin.json
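+
+To verify the result, you can fetch the stored definition back with an HTTP GET against the same endpoint (shown here with the hypothetical `myplugin` name from the example above):
+
+    curl http://localhost:8047/storage/myplugin.json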
+
+## Bootstrapping a Storage Plugin
+If you need to add a storage plugin to Drill and do not want to use a web browser, you can create a [bootstrap-storage-plugins.json](https://github.com/apache/drill/blob/master/contrib/storage-hbase/src/main/resources/bootstrap-storage-plugins.json) file and include it on the classpath when starting Drill. The storage plugin loads when Drill starts up.
+
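+A minimal sketch of a bootstrap-storage-plugins.json file, assuming a single file-type plugin named `dfs`, wraps the plugin definition in a top-level `storage` object:
+
+    {
+      "storage": {
+        "dfs": {
+          "type": "file",
+          "enabled": true,
+          "connection": "file:///",
+          "formats": null
+        }
+      }
+    }
+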
+If you configure an HBase storage plugin using the bootstrap-storage-plugins.json file and HBase is not installed, you might experience a delay when executing queries. Configure the [HBase client timeout](http://hbase.apache.org/book.html#config.files) and retry settings in the config block of the HBase plugin instance configuration.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/050-file-system-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/050-file-system-storage-plugin.md b/_docs/connect-a-data-source/050-file-system-storage-plugin.md
new file mode 100644
index 0000000..2b3e287
--- /dev/null
+++ b/_docs/connect-a-data-source/050-file-system-storage-plugin.md
@@ -0,0 +1,64 @@
+---
+title: "File System Storage Plugin"
+parent: "Storage Plugin Configuration"
+---
+You can register a storage plugin instance that connects Drill to a local file
+system or a distributed file system registered in `core-site.xml`, such as S3
+or HDFS. When you register a storage plugin instance for a file system,
+provide a unique name for the instance, and identify the type as “`file`”. By
+default, Drill includes an instance named `dfs` that points to the local file
+system on your machine. You can update this configuration to point to a
+distributed file system or you can create a new instance to point to a
+distributed file system.
+
+To register a local or a distributed file system with Apache Drill, complete
+the following steps:
+
+  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  2. In the New Storage Plugin window, enter a unique name and then click **Create**.
+  3. In the Configuration window, provide the following configuration information for the type of file system that you are configuring as a data source.
+     1. Local file system example:
+
+            {
+              "type": "file",
+              "enabled": true,
+              "connection": "file:///",
+              "workspaces": {
+                "root": {
+                  "location": "/user/max/donuts",
+                  "writable": false,
+                  "defaultinputformat": null
+                 }
+              },
+                 "formats" : {
+                   "json" : {
+                     "type" : "json"
+                   }
+                 }
+              }
+     2. Distributed file system example:
+    
+            {
+              "type" : "file",
+              "enabled" : true,
+              "connection" : "hdfs://10.10.30.156:8020/",
+              "workspaces" : {
+                "root : {
+                  "location" : "/user/root/drill",
+                  "writable" : true,
+                  "defaultinputformat" : "null"
+                }
+              },
+              "formats" : {
+                "json" : {
+                  "type" : "json"
+                }
+              }
+            }
+
+      To connect to a Hadoop file system, you must include the IP address of the
+name node and the port number.
+  4. Click **Enable**.
+
+Once you have configured a storage plugin instance for the file system, you
+can issue Drill queries against it.
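+
+For example, assuming you saved the distributed file system instance above under the name `myhdfs` and the `root` workspace contains a file named `sample.json`, a query against it looks like this:
+
+    0: jdbc:drill:> SELECT * FROM myhdfs.root.`sample.json` LIMIT 10;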
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/050-reg-fs.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/050-reg-fs.md b/_docs/connect-a-data-source/050-reg-fs.md
deleted file mode 100644
index 2b3e287..0000000
--- a/_docs/connect-a-data-source/050-reg-fs.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: "File System Storage Plugin"
-parent: "Storage Plugin Configuration"
----
-You can register a storage plugin instance that connects Drill to a local file
-system or a distributed file system registered in `core-site.xml`, such as S3
-or HDFS. When you register a storage plugin instance for a file system,
-provide a unique name for the instance, and identify the type as “`file`”. By
-default, Drill includes an instance named `dfs` that points to the local file
-system on your machine. You can update this configuration to point to a
-distributed file system or you can create a new instance to point to a
-distributed file system.
-
-To register a local or a distributed file system with Apache Drill, complete
-the following steps:
-
-  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
-  2. In the New Storage Plugin window, enter a unique name and then click **Create**.
-  3. In the Configuration window, provide the following configuration information for the type of file system that you are configuring as a data source.
-     1. Local file system example:
-
-            {
-              "type": "file",
-              "enabled": true,
-              "connection": "file:///",
-              "workspaces": {
-                "root": {
-                  "location": "/user/max/donuts",
-                  "writable": false,
-                  "defaultinputformat": null
-                 }
-              },
-                 "formats" : {
-                   "json" : {
-                     "type" : "json"
-                   }
-                 }
-              }
-     2. Distributed file system example:
-    
-            {
-              "type" : "file",
-              "enabled" : true,
-              "connection" : "hdfs://10.10.30.156:8020/",
-              "workspaces" : {
-                "root : {
-                  "location" : "/user/root/drill",
-                  "writable" : true,
-                  "defaultinputformat" : "null"
-                }
-              },
-              "formats" : {
-                "json" : {
-                  "type" : "json"
-                }
-              }
-            }
-
-      To connect to a Hadoop file system, you must include the IP address of the
-name node and the port number.
-  4. Click **Enable**.
-
-Once you have configured a storage plugin instance for the file system, you
-can issue Drill queries against it.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/060-hbase-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/060-hbase-storage-plugin.md b/_docs/connect-a-data-source/060-hbase-storage-plugin.md
new file mode 100644
index 0000000..a8029df
--- /dev/null
+++ b/_docs/connect-a-data-source/060-hbase-storage-plugin.md
@@ -0,0 +1,37 @@
+---
+title: "HBase Storage Plugin"
+parent: "Storage Plugin Configuration"
+---
+Register a storage plugin instance and specify a zookeeper quorum to connect
+Drill to an HBase data source. When you register a storage plugin instance for
+an HBase data source, provide a unique name for the instance, and identify the
+type as “hbase” in the Drill Web UI.
+
+Drill supports HBase version 0.98.
+
+To register HBase with Drill, complete the following steps:
+
+  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  2. In the disabled storage plugins section, click **Update** next to the `hbase` instance.
+  3. In the Configuration window, specify the Zookeeper quorum and port. 
+  
+
+     **Example**
+
+        {
+          "type": "hbase",
+          "config": {
+            "hbase.zookeeper.quorum": "10.10.100.62,10.10.10.52,10.10.10.53",
+            "hbase.zookeeper.property.clientPort": "2181"
+          },
+          "size.calculator.enabled": false,
+          "enabled": true
+        }
+
+  4. Click **Enable**.
+
+The `hbase.zookeeper.property.clientPort` shown here and in the default hbase storage plugin is 2181. In a MapR cluster, the port is 5181; however, in a MapR cluster, use the maprdb format instead of the hbase storage plugin.
+
+After you configure a storage plugin instance for HBase, you can
+issue Drill queries against it.
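+
+For example, assuming an HBase table named `students`, the following query reads the table through the plugin and decodes the binary row key to a readable string:
+
+    0: jdbc:drill:> SELECT CONVERT_FROM(row_key, 'UTF8') AS student_id FROM hbase.`students` LIMIT 10;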
+
+In the Drill sandbox, use the `dfs` storage plugin and the [MapR-DB format]({{ site.baseurl }}/docs/mapr-db-format/) to query HBase files because the sandbox does not include HBase services.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/060-reg-hbase.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/060-reg-hbase.md b/_docs/connect-a-data-source/060-reg-hbase.md
deleted file mode 100644
index a8029df..0000000
--- a/_docs/connect-a-data-source/060-reg-hbase.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: "HBase Storage Plugin"
-parent: "Storage Plugin Configuration"
----
-Register a storage plugin instance and specify a zookeeper quorum to connect
-Drill to an HBase data source. When you register a storage plugin instance for
-an HBase data source, provide a unique name for the instance, and identify the
-type as “hbase” in the Drill Web UI.
-
-Drill supports HBase version 0.98.
-
-To register HBase with Drill, complete the following steps:
-
-  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
-  2. In the disabled storage plugins section, click **Update** next to the `hbase` instance.
-  3. In the Configuration window, specify the Zookeeper quorum and port. 
-  
-
-     **Example**
-
-        {
-          "type": "hbase",
-          "config": {
-            "hbase.zookeeper.quorum": "10.10.100.62,10.10.10.52,10.10.10.53",
-            "hbase.zookeeper.property.clientPort": "2181"
-          },
-          "size.calculator.enabled": false,
-          "enabled": true
-        }
-
-  4. Click **Enable**.
-
-The `hbase.zookeeper.property.clientPort` shown here and in the default hbase storage plugin is 2181. In a MapR cluster, the port is 5181; however, in a MapR cluster, use the maprdb format instead of the hbase storage plugin.
-
-After you configure a storage plugin instance for HBase, you can
-issue Drill queries against it.
-
-In the Drill sandbox, use the `dfs` storage plugin and the [MapR-DB format]({{ site.baseurl }}/docs/mapr-db-format/) to query HBase files because the sandbox does not include HBase services.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/070-hive-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/070-hive-storage-plugin.md b/_docs/connect-a-data-source/070-hive-storage-plugin.md
new file mode 100644
index 0000000..146dde8
--- /dev/null
+++ b/_docs/connect-a-data-source/070-hive-storage-plugin.md
@@ -0,0 +1,77 @@
+---
+title: "Hive Storage Plugin"
+parent: "Storage Plugin Configuration"
+---
+You can register a storage plugin instance that connects Drill to a Hive data
+source that has a remote or embedded metastore service. When you register a
+storage plugin instance for a Hive data source, provide a unique name for the
+instance, and identify the type as “`hive`”. You must also provide the
+metastore connection information.
+
+Drill supports Hive 0.13. To access Hive tables
+using custom SerDes or InputFormat/OutputFormat, all nodes running Drillbits
+must have the SerDes or InputFormat/OutputFormat `JAR` files in the 
+`<drill_installation_directory>/jars/3rdparty` folder.
+
+## Hive Remote Metastore
+
+In this configuration, the Hive metastore runs as a separate service outside
+of Hive. Drill communicates with the Hive metastore through Thrift. The
+metastore service communicates with the Hive database over JDBC. Point Drill
+to the Hive metastore service address, and provide the connection parameters
+in the Drill Web UI to configure the connection.
+
+{% include startnote.html %}Verify that the Hive metastore service is running before you register the Hive metastore.{% include endnote.html %}
+
+To register a remote Hive metastore with Drill, complete the following steps:
+
+  1. Issue the following command to start the Hive metastore service on the system specified in `hive.metastore.uris`:
+
+        hive --service metastore
+  2. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  3. In the disabled storage plugins section, click **Update** next to the `hive` instance.
+  4. In the configuration window, add the `Thrift URI` and port to `hive.metastore.uris`.
+
+     **Example**
+     
+        {
+          "type": "hive",
+          "enabled": true,
+          "configProps": {
+            "hive.metastore.uris": "thrift://<localhost>:<port>",  
+            "hive.metastore.sasl.enabled": "false"
+          }
+        }       
+  5. Click **Enable**.
+  6. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to set the classpath, add the following line to `drill-env.sh`.
+
+        export HADOOP_CLASSPATH=/<directory path>/hadoop/hadoop-<version-number>
+
+Once you have configured a storage plugin instance for a Hive data source, you
+can [query Hive tables]({{ site.baseurl }}/docs/querying-hive/).
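+
+For example, assuming the metastore contains a table named `orders`, you can run:
+
+    0: jdbc:drill:> USE hive;
+    0: jdbc:drill:> SELECT * FROM orders LIMIT 10;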
+
+## Hive Embedded Metastore
+
+In this configuration, the Hive metastore is embedded within the Drill process. Provide the metastore database configuration settings in the Drill Web UI. Before you register Hive, verify that the driver you use to connect to the Hive metastore is in the Drill classpath, located in `/<drill installation directory>/lib`. If the driver is not there, copy the driver to `/<drill
+installation directory>/lib` on the Drill node. For more information about storage types and configurations, refer to ["Hive Metastore Administration"](https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin).
+
+To register an embedded Hive metastore with Drill, complete the following
+steps:
+
+  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
+  2. In the disabled storage plugins section, click **Update** next to the `hive` instance.
+  3. In the configuration window, add the database configuration settings.
+
+     **Example**
+     
+        {
+          "type": "hive",
+          "enabled": true,
+          "configProps": {
+            "javax.jdo.option.ConnectionURL": "jdbc:<database>://<host:port>/<metastore database>;create=true",
+            "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
+            "fs.default.name": "file:///",   
+          }
+        }
+  4. Click **Enable**.
+  5. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to set the classpath, add the following line to `drill-env.sh`.
+  
+        export HADOOP_CLASSPATH=/<directory path>/hadoop/hadoop-<version-number>
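+
+For example, a Derby-backed embedded metastore (an assumption; substitute the JDBC URL of your own metastore database) might use a connection property such as:
+
+        "javax.jdo.option.ConnectionURL": "jdbc:derby:;databaseName=../sample-data/drill_hive_db;create=true"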

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/070-reg-hive.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/070-reg-hive.md b/_docs/connect-a-data-source/070-reg-hive.md
deleted file mode 100644
index 146dde8..0000000
--- a/_docs/connect-a-data-source/070-reg-hive.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-title: "Hive Storage Plugin"
-parent: "Storage Plugin Configuration"
----
-You can register a storage plugin instance that connects Drill to a Hive data
-source that has a remote or embedded metastore service. When you register a
-storage plugin instance for a Hive data source, provide a unique name for the
-instance, and identify the type as “`hive`”. You must also provide the
-metastore connection information.
-
-Drill supports Hive 0.13. To access Hive tables
-using custom SerDes or InputFormat/OutputFormat, all nodes running Drillbits
-must have the SerDes or InputFormat/OutputFormat `JAR` files in the 
-`<drill_installation_directory>/jars/3rdparty` folder.
-
-## Hive Remote Metastore
-
-In this configuration, the Hive metastore runs as a separate service outside
-of Hive. Drill communicates with the Hive metastore through Thrift. The
-metastore service communicates with the Hive database over JDBC. Point Drill
-to the Hive metastore service address, and provide the connection parameters
-in the Drill Web UI to configure the connection.
-
-{% include startnote.html %}Verify that the Hive metastore service is running before you register the Hive metastore.{% include endnote.html %}
-
-To register a remote Hive metastore with Drill, complete the following steps:
-
-  1. Issue the following command to start the Hive metastore service on the system specified in `hive.metastore.uris`:
-
-        hive --service metastore
-  2. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
-  3. In the disabled storage plugins section, click **Update** next to the `hive` instance.
-  4. In the configuration window, add the `Thrift URI` and port to `hive.metastore.uris`.
-
-     **Example**
-     
-        {
-          "type": "hive",
-          "enabled": true,
-          "configProps": {
-            "hive.metastore.uris": "thrift://<localhost>:<port>",  
-            "hive.metastore.sasl.enabled": "false"
-          }
-        }       
-  5. Click **Enable**.
-  6. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to set the classpath, add the following line to `drill-env.sh`.
-
-        export HADOOP_CLASSPATH=/<directory path>/hadoop/hadoop-<version-number>
-
-Once you have configured a storage plugin instance for a Hive data source, you
-can [query Hive tables]({{ site.baseurl }}/docs/querying-hive/).
-
-## Hive Embedded Metastore
-
-In this configuration, the Hive metastore is embedded within the Drill process. Provide the metastore database configuration settings in the Drill Web UI. Before you register Hive, verify that the driver you use to connect to the Hive metastore is in the Drill classpath, located in `/<drill installation directory>/lib`. If the driver is not there, copy the driver to `/<drill
-installation directory>/lib` on the Drill node. For more information about storage types and configurations, refer to ["Hive Metastore Administration"](https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin).
-
-To register an embedded Hive metastore with Drill, complete the following
-steps:
-
-  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
-  2. In the disabled storage plugins section, click **Update** next to the `hive` instance.
-  3. In the configuration window, add the database configuration settings.
-
-     **Example**
-     
-        {
-          "type": "hive",
-          "enabled": true,
-          "configProps": {
-            "javax.jdo.option.ConnectionURL": "jdbc:<database>://<host:port>/<metastore database>;create=true",
-            "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
-            "fs.default.name": "file:///",   
-          }
-        }
-  4. Click **Enable**.
-  5. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to set the classpath, add the following line to `drill-env.sh`.
-  
-        export HADOOP_CLASSPATH=/<directory path>/hadoop/hadoop-<version-number>

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/080-default-frmt.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/080-default-frmt.md b/_docs/connect-a-data-source/080-default-frmt.md
deleted file mode 100644
index e817343..0000000
--- a/_docs/connect-a-data-source/080-default-frmt.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-title: "Drill Default Input Format"
-parent: "Storage Plugin Configuration"
----
-You can define a default input format to tell Drill what file type exists in a
-workspace within a file system. Drill determines the file type based on file
-extensions and magic numbers when searching a workspace.
-
-Magic numbers are file signatures that Drill uses to identify Parquet files.
-If Drill cannot identify the file type based on file extensions or magic
-numbers, the query fails. Defining a default input format can prevent queries
-from failing in situations where Drill cannot determine the file type.
-
-If you incorrectly define the file type in a workspace and Drill cannot
-determine the file type, the query fails. For example, if the directory for
-which you have defined a workspace contains JSON files and you defined the
-default input format as CSV, the query fails against the workspace.
-
-You can define one default input format per workspace. If you do not define a
-default input format, and Drill cannot detect the file format, the query
-fails. You can define a default input format for any of the file types that
-Drill supports. Currently, Drill supports the following types:
-
-  * CSV
-  * TSV
-  * PSV
-  * Parquet
-  * JSON
-
-## Defining a Default Input Format
-
-You define the default input format for a file system workspace through the
-Drill Web UI. You must have a [defined workspace]({{ site.baseurl }}/docs/workspaces) before you can define a
-default input format.
-
-To define a default input format for a workspace, complete the following
-steps:
-
-  1. Navigate to the Drill Web UI at `<drill_node_ip_address>:8047`. The Drillbit process must be running on the node before you connect to the Drill Web UI.
-  2. Select **Storage** in the toolbar.
-  3. Click **Update** next to the file system for which you want to define a default input format for a workspace.
-  4. In the Configuration area, locate the workspace for which you would like to define the default input format, and change the `defaultInputFormat` attribute to any of the supported file types.
-
-     **Example**
-     
-        {
-          "type": "file",
-          "enabled": true,
-          "connection": "hdfs:///",
-          "workspaces": {
-            "root": {
-              "location": "/drill/testdata",
-              "writable": false,
-              "defaultInputFormat": "csv"
-            },
-            "local" : {
-              "location" : "/max/proddata",
-              "writable" : true,
-              "defaultInputFormat" : "json"
-            }
-          }
-        }
-
-## Querying Compressed JSON
-
-You can use Drill 0.8 and later to query compressed JSON in .gz files as well as uncompressed files having the .json extension. First, add the gz extension to a storage plugin, and then use that plugin to query the compressed file.
-
-      "extensions": [
-        "json",
-        "gz"
-      ]

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/080-drill-default-input-format.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/080-drill-default-input-format.md b/_docs/connect-a-data-source/080-drill-default-input-format.md
new file mode 100644
index 0000000..e817343
--- /dev/null
+++ b/_docs/connect-a-data-source/080-drill-default-input-format.md
@@ -0,0 +1,69 @@
+---
+title: "Drill Default Input Format"
+parent: "Storage Plugin Configuration"
+---
+You can define a default input format to tell Drill what file type exists in a
+workspace within a file system. Drill determines the file type based on file
+extensions and magic numbers when searching a workspace.
+
+Magic numbers are file signatures that Drill uses to identify Parquet files.
+If Drill cannot identify the file type based on file extensions or magic
+numbers, the query fails. Defining a default input format can prevent queries
+from failing in situations where Drill cannot determine the file type.
+
+If you incorrectly define the file type in a workspace and Drill cannot
+determine the file type, the query fails. For example, if the directory for
+which you have defined a workspace contains JSON files and you defined the
+default input format as CSV, the query fails against the workspace.
+
+You can define one default input format per workspace. If you do not define a
+default input format, and Drill cannot detect the file format, the query
+fails. You can define a default input format for any of the file types that
+Drill supports. Currently, Drill supports the following types:
+
+  * CSV
+  * TSV
+  * PSV
+  * Parquet
+  * JSON
+
+## Defining a Default Input Format
+
+You define the default input format for a file system workspace through the
+Drill Web UI. You must have a [defined workspace]({{ site.baseurl }}/docs/workspaces) before you can define a
+default input format.
+
+To define a default input format for a workspace, complete the following
+steps:
+
+  1. Navigate to the Drill Web UI at `<drill_node_ip_address>:8047`. The Drillbit process must be running on the node before you connect to the Drill Web UI.
+  2. Select **Storage** in the toolbar.
+  3. Click **Update** next to the file system for which you want to define a default input format for a workspace.
+  4. In the Configuration area, locate the workspace for which you would like to define the default input format, and change the `defaultInputFormat` attribute to any of the supported file types.
+
+     **Example**
+     
+        {
+          "type": "file",
+          "enabled": true,
+          "connection": "hdfs:///",
+          "workspaces": {
+            "root": {
+              "location": "/drill/testdata",
+              "writable": false,
+              "defaultInputFormat": "csv"
+            },
+            "local" : {
+              "location" : "/max/proddata",
+              "writable" : true,
+              "defaultInputFormat" : "json"
+            }
+          }
+        }
+
+## Querying Compressed JSON
+
+You can use Drill 0.8 and later to query compressed JSON in .gz files as well as uncompressed files having the .json extension. First, add the gz extension to a storage plugin, and then use that plugin to query the compressed file.
+
+      "extensions": [
+        "json",
+        "gz"
+      ]
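+
+In context, the `gz` extension belongs in the `json` format definition of the storage plugin configuration (a sketch, assuming the default `json` format block):
+
+      "formats": {
+        "json": {
+          "type": "json",
+          "extensions": [
+            "json",
+            "gz"
+          ]
+        }
+      }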

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/090-mongo-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/090-mongo-plugin.md b/_docs/connect-a-data-source/090-mongo-plugin.md
deleted file mode 100644
index 9e2d41e..0000000
--- a/_docs/connect-a-data-source/090-mongo-plugin.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-title: "MongoDB Plugin for Apache Drill"
-parent: "Connect a Data Source"
----
-## Overview
-
-You can leverage the power of Apache Drill to query data without any upfront
-schema definitions. Drill enables you to create an architecture that works
-with nested and dynamic schemas, making it the perfect SQL query tool to use
-on NoSQL databases, such as MongoDB.
-
-As of Apache Drill 0.6, you can configure MongoDB as a Drill data source.
-Drill provides a mongodb format plugin to connect to MongoDB, and run queries
-on the data using ANSI SQL.
-
-This tutorial assumes that you have Drill installed locally (embedded mode),
-as well as MongoDB. Examples in this tutorial use zip code aggregation data
-provided by MongoDB. Before You Begin provides links to download tools and data
-used throughout the tutorial.
-
-{% include startnote.html %}A local instance of Drill is used in this tutorial for simplicity. {% include endnote.html %}
-
-You can also run Drill and MongoDB together in distributed mode.
-
-### Before You Begin
-
-Before you can query MongoDB with Drill, you must have Drill and MongoDB
-installed on your machine. You may also want to import the MongoDB zip code
-data to run the example queries on your machine.
-
-  1. [Install Drill]({{ site.baseurl }}/docs/installing-drill-in-embedded-mode), if you do not already have it installed on your machine.
-  2. [Install MongoDB](http://docs.mongodb.org/manual/installation), if you do not already have it installed on your machine.
-  3. [Import the MongoDB zip code sample data set](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set). You can use Mongo Import to get the data. 
-
-## Configuring MongoDB
-
-Start Drill and configure the MongoDB storage plugin instance in the Drill Web
-UI to connect to Drill. Drill must be running in order to access the Web UI.
-
-Complete the following steps to configure MongoDB as a data source for Drill:
-
-  1. Navigate to `<drill_installation_directory>/drill-<version>`, and enter the following command to invoke SQLLine and start Drill:
-
-        bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
-     When Drill starts, the following prompt appears: `0: jdbc:drill:zk=local>`
-
-     Do not enter any commands. You will return to the command prompt after
-completing the configuration in the Drill Web UI.
-  2. Open a browser window, and navigate to the Drill Web UI at `http://localhost:8047`.
-  3. In the navigation bar, click **Storage**.
-  4. Under Disabled Storage Plugins, select **Update** next to the `mongo` instance if the instance exists. If the instance does not exist, create an instance for MongoDB.
-  5. In the Configuration window, verify that `"enabled"` is set to `true`.
-
-     **Example**
-     
-        {
-          "type": "mongo",
-          "connection": "mongodb://localhost:27017/",
-          "enabled": true
-        }
-
-     {% include startnote.html %}27017 is the default port for `mongodb` instances.{% include endnote.html %} 
-  6. Click **Enable** to enable the instance, and save the configuration.
-  7. Navigate back to the Drill command line so you can query MongoDB.
-
-## Querying MongoDB
-
-You can issue the `SHOW DATABASES` command to see a list of databases from all
-Drill data sources, including MongoDB. If you downloaded the zip codes file,
-you should see `mongo.zipdb` in the results.
-
-    0: jdbc:drill:zk=local> SHOW DATABASES;
-    +-------------+
-    | SCHEMA_NAME |
-    +-------------+
-    | dfs.default |
-    | dfs.root    |
-    | dfs.tmp     |
-    | sys         |
-    | mongo.zipdb |
-    | cp.default  |
-    | INFORMATION_SCHEMA |
-    +-------------+
-
-If you want all queries that you submit to run on `mongo.zipdb`, you can issue
-the `USE` command to change schema.
-
-### Example Queries
-
-The following example queries are included for reference. However, you can use
-the SQL power of Apache Drill directly on MongoDB. For more information,
-refer to the [SQL Reference]({{ site.baseurl }}/docs/sql-reference).
-
-**Example 1: View mongo.zipdb Dataset**
-
-    0: jdbc:drill:zk=local> SELECT * FROM zipcodes LIMIT 10;
-    +------------+
-    |     *      |
-    +------------+
-    | { "city" : "AGAWAM" , "loc" : [ -72.622739 , 42.070206] , "pop" : 15338 , "state" : "MA"} |
-    | { "city" : "CUSHMAN" , "loc" : [ -72.51565 , 42.377017] , "pop" : 36963 , "state" : "MA"} |
-    | { "city" : "BARRE" , "loc" : [ -72.108354 , 42.409698] , "pop" : 4546 , "state" : "MA"} |
-    | { "city" : "BELCHERTOWN" , "loc" : [ -72.410953 , 42.275103] , "pop" : 10579 , "state" : "MA"} |
-    | { "city" : "BLANDFORD" , "loc" : [ -72.936114 , 42.182949] , "pop" : 1240 , "state" : "MA"} |
-    | { "city" : "BRIMFIELD" , "loc" : [ -72.188455 , 42.116543] , "pop" : 3706 , "state" : "MA"} |
-    | { "city" : "CHESTER" , "loc" : [ -72.988761 , 42.279421] , "pop" : 1688 , "state" : "MA"} |
-    | { "city" : "CHESTERFIELD" , "loc" : [ -72.833309 , 42.38167] , "pop" : 177 , "state" : "MA"} |
-    | { "city" : "CHICOPEE" , "loc" : [ -72.607962 , 42.162046] , "pop" : 23396 , "state" : "MA"} |
-    | { "city" : "CHICOPEE" , "loc" : [ -72.576142 , 42.176443] , "pop" : 31495 , "state" : "MA"} |
-
-**Example 2: Aggregation**
-
-    0: jdbc:drill:zk=local> SELECT state, city, AVG(pop) FROM zipcodes GROUP BY state, city LIMIT 10;
-    +------------+------------+------------+
-    |   state    |    city    |   EXPR$2   |
-    +------------+------------+------------+
-    | MA         | AGAWAM     | 15338.0    |
-    | MA         | CUSHMAN    | 36963.0    |
-    | MA         | BARRE      | 4546.0     |
-    | MA         | BELCHERTOWN | 10579.0   |
-    | MA         | BLANDFORD  | 1240.0     |
-    | MA         | BRIMFIELD  | 3706.0     |
-    | MA         | CHESTER    | 1688.0     |
-    | MA         | CHESTERFIELD | 177.0    |
-    | MA         | CHICOPEE   | 27445.5    |
-    | MA         | WESTOVER AFB | 1764.0   |
-    +------------+------------+------------+
-
-**Example 3: Nested Data Column Array**
-
-    0: jdbc:drill:zk=local> SELECT loc FROM zipcodes LIMIT 10;
-    +------------------------+
-    |    loc                 |
-    +------------------------+
-    | [-72.622739,42.070206] |
-    | [-72.51565,42.377017]  |
-    | [-72.108354,42.409698] |
-    | [-72.410953,42.275103] |
-    | [-72.936114,42.182949] |
-    | [-72.188455,42.116543] |
-    | [-72.988761,42.279421] |
-    | [-72.833309,42.38167]  |
-    | [-72.607962,42.162046] |
-    | [-72.576142,42.176443] |
-    +------------------------+
-        
-    0: jdbc:drill:zk=local> SELECT loc[0] FROM zipcodes LIMIT 10;
-    +------------+
-    |   EXPR$0   |
-    +------------+
-    | -72.622739 |
-    | -72.51565  |
-    | -72.108354 |
-    | -72.410953 |
-    | -72.936114 |
-    | -72.188455 |
-    | -72.988761 |
-    | -72.833309 |
-    | -72.607962 |
-    | -72.576142 |
-    +------------+
-
-## Using ODBC/JDBC Drivers
-
-You can leverage the power of Apache Drill to query MongoDB through standard
-BI tools, such as Tableau and SQuirreL.
-
-For information about Drill ODBC and JDBC drivers, refer to [Drill Interfaces]({{ site.baseurl }}/docs/odbc-jdbc-interfaces).

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md b/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
new file mode 100644
index 0000000..9e2d41e
--- /dev/null
+++ b/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
@@ -0,0 +1,169 @@
+---
+title: "MongoDB Plugin for Apache Drill"
+parent: "Connect a Data Source"
+---
+## Overview
+
+You can leverage the power of Apache Drill to query data without any upfront
+schema definitions. Drill enables you to create an architecture that works
+with nested and dynamic schemas, making it the perfect SQL query tool to use
+on NoSQL databases, such as MongoDB.
+
+As of Apache Drill 0.6, you can configure MongoDB as a Drill data source.
+Drill provides a mongodb format plugin to connect to MongoDB, and run queries
+on the data using ANSI SQL.
+
+This tutorial assumes that you have Drill installed locally (embedded mode),
+as well as MongoDB. Examples in this tutorial use zip code aggregation data
+provided by MongoDB. Before You Begin provides links to download tools and data
+used throughout the tutorial.
+
+{% include startnote.html %}A local instance of Drill is used in this tutorial for simplicity. {% include endnote.html %}
+
+You can also run Drill and MongoDB together in distributed mode.
+
+### Before You Begin
+
+Before you can query MongoDB with Drill, you must have Drill and MongoDB
+installed on your machine. You may also want to import the MongoDB zip code
+data to run the example queries on your machine.
+
+  1. [Install Drill]({{ site.baseurl }}/docs/installing-drill-in-embedded-mode), if you do not already have it installed on your machine.
+  2. [Install MongoDB](http://docs.mongodb.org/manual/installation), if you do not already have it installed on your machine.
+  3. [Import the MongoDB zip code sample data set](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set). You can use Mongo Import to get the data. 
+
+## Configuring MongoDB
+
+Start Drill and configure the MongoDB storage plugin instance in the Drill Web
+UI to connect to Drill. Drill must be running in order to access the Web UI.
+
+Complete the following steps to configure MongoDB as a data source for Drill:
+
+  1. Navigate to `<drill_installation_directory>/drill-<version>`, and enter the following command to invoke SQLLine and start Drill:
+
+        bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
+     When Drill starts, the following prompt appears: `0: jdbc:drill:zk=local>`
+
+     Do not enter any commands. You will return to the command prompt after
+completing the configuration in the Drill Web UI.
+  2. Open a browser window, and navigate to the Drill Web UI at `http://localhost:8047`.
+  3. In the navigation bar, click **Storage**.
+  4. Under Disabled Storage Plugins, select **Update** next to the `mongo` instance if the instance exists. If the instance does not exist, create an instance for MongoDB.
+  5. In the Configuration window, verify that `"enabled"` is set to `true`.
+
+     **Example**
+     
+        {
+          "type": "mongo",
+          "connection": "mongodb://localhost:27017/",
+          "enabled": true
+        }
+
+     {% include startnote.html %}27017 is the default port for `mongodb` instances.{% include endnote.html %} 
+  6. Click **Enable** to enable the instance, and save the configuration.
+  7. Navigate back to the Drill command line so you can query MongoDB.
+
+## Querying MongoDB
+
+You can issue the `SHOW DATABASES` command to see a list of databases from all
+Drill data sources, including MongoDB. If you imported the zip code data set,
+you should see `mongo.zipdb` in the results.
+
+    0: jdbc:drill:zk=local> SHOW DATABASES;
+    +-------------+
+    | SCHEMA_NAME |
+    +-------------+
+    | dfs.default |
+    | dfs.root    |
+    | dfs.tmp     |
+    | sys         |
+    | mongo.zipdb |
+    | cp.default  |
+    | INFORMATION_SCHEMA |
+    +-------------+
+
+If you want all queries that you submit to run on `mongo.zipdb`, you can issue
+the `USE` command to change the default schema.
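+
+For example, the following command makes `mongo.zipdb` the default schema. The example queries in the next section assume it has been run, which is why they refer to the `zipcodes` collection without qualifying it:
+
+    0: jdbc:drill:zk=local> USE mongo.zipdb;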
+
+### Example Queries
+
+The following example queries are included for reference. However, you can use
+the full SQL power of Apache Drill directly on MongoDB. For more information,
+refer to the [SQL
+Reference]({{ site.baseurl }}/docs/sql-reference).
+
+**Example 1: View mongo.zipdb Dataset**
+
+    0: jdbc:drill:zk=local> SELECT * FROM zipcodes LIMIT 10;
+    +------------+
+    |     *      |
+    +------------+
+    | { "city" : "AGAWAM" , "loc" : [ -72.622739 , 42.070206] , "pop" : 15338 , "state" : "MA"} |
+    | { "city" : "CUSHMAN" , "loc" : [ -72.51565 , 42.377017] , "pop" : 36963 , "state" : "MA"} |
+    | { "city" : "BARRE" , "loc" : [ -72.108354 , 42.409698] , "pop" : 4546 , "state" : "MA"} |
+    | { "city" : "BELCHERTOWN" , "loc" : [ -72.410953 , 42.275103] , "pop" : 10579 , "state" : "MA"} |
+    | { "city" : "BLANDFORD" , "loc" : [ -72.936114 , 42.182949] , "pop" : 1240 , "state" : "MA"} |
+    | { "city" : "BRIMFIELD" , "loc" : [ -72.188455 , 42.116543] , "pop" : 3706 , "state" : "MA"} |
+    | { "city" : "CHESTER" , "loc" : [ -72.988761 , 42.279421] , "pop" : 1688 , "state" : "MA"} |
+    | { "city" : "CHESTERFIELD" , "loc" : [ -72.833309 , 42.38167] , "pop" : 177 , "state" : "MA"} |
+    | { "city" : "CHICOPEE" , "loc" : [ -72.607962 , 42.162046] , "pop" : 23396 , "state" : "MA"} |
+    | { "city" : "CHICOPEE" , "loc" : [ -72.576142 , 42.176443] , "pop" : 31495 , "state" : "MA"} |
+
+**Example 2: Aggregation**
+
+    0: jdbc:drill:zk=local> SELECT state, city, AVG(pop) FROM zipcodes GROUP BY state, city LIMIT 10;
+    +------------+------------+------------+
+    |   state    |    city    |   EXPR$2   |
+    +------------+------------+------------+
+    | MA         | AGAWAM     | 15338.0    |
+    | MA         | CUSHMAN    | 36963.0    |
+    | MA         | BARRE      | 4546.0     |
+    | MA         | BELCHERTOWN | 10579.0   |
+    | MA         | BLANDFORD  | 1240.0     |
+    | MA         | BRIMFIELD  | 3706.0     |
+    | MA         | CHESTER    | 1688.0     |
+    | MA         | CHESTERFIELD | 177.0    |
+    | MA         | CHICOPEE   | 27445.5    |
+    | MA         | WESTOVER AFB | 1764.0   |
+    +------------+------------+------------+
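+
+Aggregates compose with the rest of ANSI SQL in the usual way. For instance, the following sketch ranks states in the sample data by total population; the alias `total_pop` is illustrative:
+
+    0: jdbc:drill:zk=local> SELECT state, SUM(pop) AS total_pop FROM zipcodes GROUP BY state ORDER BY total_pop DESC LIMIT 5;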
+
+**Example 3: Nested Data Column Array**
+
+    0: jdbc:drill:zk=local> SELECT loc FROM zipcodes LIMIT 10;
+    +------------------------+
+    |    loc                 |
+    +------------------------+
+    | [-72.622739,42.070206] |
+    | [-72.51565,42.377017]  |
+    | [-72.108354,42.409698] |
+    | [-72.410953,42.275103] |
+    | [-72.936114,42.182949] |
+    | [-72.188455,42.116543] |
+    | [-72.988761,42.279421] |
+    | [-72.833309,42.38167]  |
+    | [-72.607962,42.162046] |
+    | [-72.576142,42.176443] |
+    +------------------------+
+        
+    0: jdbc:drill:zk=local> SELECT loc[0] FROM zipcodes LIMIT 10;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | -72.622739 |
+    | -72.51565  |
+    | -72.108354 |
+    | -72.410953 |
+    | -72.936114 |
+    | -72.188455 |
+    | -72.988761 |
+    | -72.833309 |
+    | -72.607962 |
+    | -72.576142 |
+    +------------+
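+
+Array elements can also appear in expressions and predicates, not just in the select list. For example, the following sketch returns the coordinates of larger cities; the population threshold and column aliases are illustrative:
+
+    0: jdbc:drill:zk=local> SELECT city, loc[0] AS longitude, loc[1] AS latitude FROM zipcodes WHERE pop > 30000 LIMIT 5;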
+
+## Using ODBC/JDBC Drivers
+
+You can leverage the power of Apache Drill to query MongoDB through standard
+BI tools, such as Tableau and SQuirreL.
+
+For information about Drill ODBC and JDBC drivers, refer to [Drill Interfaces]({{ site.baseurl }}/docs/odbc-jdbc-interfaces).

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/100-mapr-db-format.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/100-mapr-db-format.md b/_docs/connect-a-data-source/100-mapr-db-format.md
new file mode 100644
index 0000000..f00d2d2
--- /dev/null
+++ b/_docs/connect-a-data-source/100-mapr-db-format.md
@@ -0,0 +1,34 @@
+---
+title: "MapR-DB Format"
+parent: "Connect a Data Source"
+---
+Drill includes a `maprdb` format plugin for accessing data stored in MapR-DB. The Drill Sandbox also includes the following storage plugin configuration on a MapR node, which maps all table names to the `/tables` directory:
+
+    {
+      "type": "hbase",
+      "config": {
+        "hbase.table.namespace.mappings": "*:/tables"
+      },
+      "size.calculator.enabled": false,
+      "enabled": true
+    }
+
+Using the Sandbox and this configuration, you can query HBase and MapR-DB tables located in the `/tables` directory, as shown in the ["Querying HBase"]({{ site.baseurl }}/docs/querying-hbase) examples.
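+
+For example, assuming the Sandbox registers this configuration under the plugin name `maprdb` and a table named `customers` exists under `/tables` (both names are illustrative here), you could run:
+
+    SELECT * FROM maprdb.`customers` LIMIT 5;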
+
+The `dfs` storage plugin includes the `maprdb` format when you install Drill from the `mapr-drill` package on a MapR node. Click **Update** next to the `dfs` instance
+in the Web UI of the Drill Sandbox to view the configuration for the `dfs` instance:
+
+![drill query flow]({{ site.baseurl }}/docs/img/18.png)
+
+
+The examples of the [CONVERT_TO/FROM functions]({{ site.baseurl }}/docs/data-type-conversion#convert_to-and-convert_from) show how to adapt the `dfs` storage plugin to use the `maprdb` format plugin to query HBase tables on the Sandbox.
+
+You can modify the `dfs` storage plugin to map a workspace to a directory of MapR-DB tables in the MapR-FS file system, and then select a table by name.
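+
+The following is a minimal sketch of such a configuration, assuming it is registered under the hypothetical name `myplugin`; the `default` workspace maps to the `/tables` directory, and `defaultInputFormat` tells Drill to read the tables there with the `maprdb` format:
+
+    {
+      "type": "file",
+      "connection": "maprfs:///",
+      "workspaces": {
+        "default": {
+          "location": "/tables",
+          "writable": false,
+          "defaultInputFormat": "maprdb"
+        }
+      },
+      "formats": {
+        "maprdb": {
+          "type": "maprdb"
+        }
+      }
+    }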
+
+**Example**
+
+    SELECT * FROM myplugin.`mytable`;
+
+The `maprdb` format plugin improves the accuracy of the
+row-count estimates that Drill uses to plan a query. Using the `dfs` storage plugin, you can query HBase and MapR-DB tables as you would query files in a file system, because MapR-DB, MapR-FS, and Hadoop files share the same namespace.
+

http://git-wip-us.apache.org/repos/asf/drill/blob/48269506/_docs/connect-a-data-source/100-mapr-db-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/100-mapr-db-plugin.md b/_docs/connect-a-data-source/100-mapr-db-plugin.md
deleted file mode 100644
index f00d2d2..0000000
--- a/_docs/connect-a-data-source/100-mapr-db-plugin.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: "MapR-DB Format"
-parent: "Connect a Data Source"
----
-Drill includes a `maprdb` format plugin for accessing data stored in MapR-DB. The Drill Sandbox also includes the following `maprdb` format plugin on a MapR node:
-
-    {
-      "type": "hbase",
-      "config": {
-        "hbase.table.namespace.mappings": "*:/tables"
-      },
-      "size.calculator.enabled": false,
-      "enabled": true
-    }
-
-Using the Sandbox and this `maprdb` format plugin, you can query HBase tables located in the `/tables` directory, as shown in the ["Query HBase"]({{ site.baseurl }}/docs/querying-hbase) examples.
-
-The `dfs` storage plugin includes the maprdb format when you install Drill from the `mapr-drill` package on a MapR node. Click **Update** next to the `dfs` instance
-in the Web UI of the Drill Sandbox to view the configuration for the `dfs` instance:
-
-![drill query flow]({{ site.baseurl }}/docs/img/18.png)
-
-
-The examples of the [CONVERT_TO/FROM functions]({{ site.baseurl }}/docs/data-type-conversion#convert_to-and-convert_from) show how to adapt the `dfs` storage plugin to use the `maprdb` format plugin to query HBase tables on the Sandbox.
-
-You modify the `dfs` storage plugin to create a table mapping to a directory in the MapR-FS file system. You then select the table by name.
-
-**Example**
-
-    SELECT * FROM myplugin.`mytable`;
-
-The `maprdb` format plugin improves the
-estimated number of rows that Drill uses to plan a query. Using the `dfs` storage plugin, you can query HBase and MapR-DB tables as you would query files in a file system. MapR-DB, MapR-FS, and Hadoop files share the same namespace.
-