Posted to commits@drill.apache.org by ts...@apache.org on 2015/05/01 20:08:35 UTC

[39/50] [abbrv] drill git commit: commands reorg

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/010-manage-drill-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/010-manage-drill-introduction.md b/_docs/manage-drill/010-manage-drill-introduction.md
index f3d4e04..bc9179a 100644
--- a/_docs/manage-drill/010-manage-drill-introduction.md
+++ b/_docs/manage-drill/010-manage-drill-introduction.md
@@ -2,6 +2,6 @@
 title: "Manage Drill Introduction"
 parent: "Manage Drill"
 ---
-When using Drill, you need to configure memory to make sufficient memory available for your application. The more memory for Drill, the better. You may need to modify options for performance or functionality. For example, the default storage format for CTAS
-statements is Parquet. You can modify the default setting so that output data
-is stored in CSV or JSON format. The many options you can configure are covered in this section. This section also includes stopping and restarting a Drillbit on a node, ports used by Drill, and partition pruning.
+When using Drill, you need to make sufficient memory available to Drill and the other workloads running on the cluster. You might also want to modify options for performance or functionality. For example, the default storage format for CTAS
+statements is Parquet. Using a configuration option, you can modify the default setting so that output data
+is stored in CSV or JSON format. This section covers the many options you can configure and how to configure memory resources for Drill running alongside other workloads. This section also covers stopping and restarting a Drillbit on a node, the ports Drill uses, and partition pruning.
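+
+For example, the following command, shown in detail in the CTAS documentation, changes the output format for the current session:
+
+    alter session set `store.format`='json';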

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/011-configuring-memory.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/011-configuring-memory.md b/_docs/manage-drill/011-configuring-memory.md
index 9058cdb..68c2bf0 100644
--- a/_docs/manage-drill/011-configuring-memory.md
+++ b/_docs/manage-drill/011-configuring-memory.md
@@ -1,5 +1,5 @@
 ---
-title: "Configuring Memory"
+title: "Configuring Memory for Drill"
 parent: "Manage Drill"
 ---
 

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/012-configuring-different-workloads.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/012-configuring-different-workloads.md b/_docs/manage-drill/012-configuring-different-workloads.md
new file mode 100644
index 0000000..874839e
--- /dev/null
+++ b/_docs/manage-drill/012-configuring-different-workloads.md
@@ -0,0 +1,5 @@
+---
+title: "Configuring Different Workloads"
+parent: "Manage Drill"
+---
+

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/012-multitenant-and-multi-instance-architectures.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/012-multitenant-and-multi-instance-architectures.md b/_docs/manage-drill/012-multitenant-and-multi-instance-architectures.md
deleted file mode 100644
index db61b45..0000000
--- a/_docs/manage-drill/012-multitenant-and-multi-instance-architectures.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Multitenant and Multi-instance Architectures"
-parent: "Manage Drill"
----
-

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/013-configuring-dfferent-workloads-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/013-configuring-dfferent-workloads-introduction.md b/_docs/manage-drill/013-configuring-dfferent-workloads-introduction.md
new file mode 100644
index 0000000..29b830c
--- /dev/null
+++ b/_docs/manage-drill/013-configuring-dfferent-workloads-introduction.md
@@ -0,0 +1,22 @@
+---
+title: "Configuring Different Workloads Introduction"
+parent: "Configuring Different Workloads"
+---
+
+Drill supports multiple users sharing a drillbit. Alternatively, you can run separate drillbits on different nodes in the cluster.
+
+Drill typically runs alongside other workloads, including the following:  
+
+* MapReduce  
+* YARN  
+* HBase  
+* Hive and Pig  
+* Spark  
+
+You need to plan and configure these resources for use with Drill and other workloads: 
+
+* [Memory]({{site.baseurl}}/docs/configuring-memory)  
+* [CPU]({{site.baseurl}}/docs/how-to-manage-drill-cpu-resources)  
+* Disk  
+
+["How to Run Drill in a Cluster"]({{site.baseurl}}/docs/how-to-run-drill-in-a-cluster) covers configuration for sharing a drillbit and ["How Multiple Users Share a Drillbit"]({{site.baseurl}}/docs/how-multiple-users-share-a-drillbit) covers configuration for drillbits running on different nodes in the cluster.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/013-multitenant-and-multi-instance-architecture-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/013-multitenant-and-multi-instance-architecture-introduction.md b/_docs/manage-drill/013-multitenant-and-multi-instance-architecture-introduction.md
deleted file mode 100644
index c517586..0000000
--- a/_docs/manage-drill/013-multitenant-and-multi-instance-architecture-introduction.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: "Multitenant and Multi-instance Architectures Introduction"
-parent: "Multitenant and Multi-instance Architectures"
----
-
-Drill supports multitenant and multi-instance architectures. In a multitenant architecture, multiple users share a drillbit. In a multi-instance architectures, tenants use separate drillbits running on different nodes in the cluster.
-
-Drill typically runs along side many application frameworks, including the following:  
-
-* Mapreduce  
-* Yarn  
-* HBase  
-* Hive and Pig  
-* Spark  
-
-You need to plan and configure these resources for use with Drill in a multitenant or multi-instance environment: 
-
-* [Memory]({{site.baseurl}}/docs/configuring-memory)  
-* [CPU]({{site.baseurl}}/docs/how-to-manage-drill-cpu-resources)  
-* Disk  
-
-["How to Run Drill in a Cluster"]({{site.baseurl}}/docs/how-to-run-drill-in-a-cluster) covers configuration for a multitenant environment and ["How Multiple Users Share a Drillbit"]({{site.baseurl}}/docs/how-multiple-users-share-a-drillbit) covers configuration for a multi-instance environment.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/015-configuring-drill-in-a-cluster.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/015-configuring-drill-in-a-cluster.md b/_docs/manage-drill/015-configuring-drill-in-a-cluster.md
new file mode 100644
index 0000000..11e4747
--- /dev/null
+++ b/_docs/manage-drill/015-configuring-drill-in-a-cluster.md
@@ -0,0 +1,78 @@
+---
+title: "Configuring Drill in a Cluster"
+parent: "Configuring Different Workloads"
+---
+Drill operations are memory- and CPU-intensive. You need to statically partition the cluster to designate which partition handles which workload. 
+
+To reserve memory resources for Drill, you modify the `warden.drill-bits.conf` file in `/opt/mapr/conf/conf.d`. This file is created automatically when you install Drill on a node. 
+
+    [root@centos23 conf.d]# pwd
+    /opt/mapr/conf/conf.d
+    [root@centos23 conf.d]# ls
+    warden.drill-bits.conf  warden.nodemanager.conf  warden.resourcemanager.conf
+
+You add the following lines to the file:
+
+    service.heapsize.min=<some value in MB>
+    service.heapsize.max=<some value in MB>
+    service.heapsize.percent=<some whole number value>
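+
+For example, the following hypothetical values reserve between 4 GB and 13 GB of heap for the Drillbit, bounded by a percentage of total memory. Treat these numbers as a sketch to adapt to your cluster, not as recommended defaults:
+
+    service.heapsize.min=4096
+    service.heapsize.max=13312
+    service.heapsize.percent=10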
+
+## Configuring Drill in a YARN-enabled MapR Cluster
+
+For example, suppose you have 120G of available memory that you allocate to the following workloads in a YARN-enabled cluster:
+
+File system = 20G  
+HBase = 20G  
+YARN = 20G  
+OS = 8G  
+
+To add Drill to the cluster, how do you change the memory allocation? It depends on your application. If YARN does most of the work, give Drill 20G, for example, and give YARN 60G. If you expect a heavy query load, give Drill 60G and YARN 20G.
+
+{% include startnote.html %}Drill will soon be able to execute queries within YARN.{% include endnote.html %} For more information about Drill and YARN, see [DRILL-142](https://issues.apache.org/jira/browse/DRILL-142).
+
+<!-- YARN consists of 2 main services ResourceManager(one instance in cluster, more if HA is configured) and NodeManager(one instance per node). See mr1.memory.percent, mr1.cpu.percent and 
+mr1.disk.percent in warden.conf. Rest is given to YARN applications.
+Also see /opt/mapr/conf/conf.d/warden.resourcemanager.conf and
+ /opt/mapr/conf/conf.d/warden.nodemanager.conf for resources given to ResourceManager and NodeManager respectively.
+
+## Configuring Drill in a MapReduce V1-enabled cluster
+
+Similar files exist for other installed services, as described in [MapR documentation](http://doc.mapr.com/display/MapR/warden.%3Cservicename%3E.conf). For example:
+## What are the memory/resource configurations set in warden in a non-YARN cluster? 
+
+Every service will have 3 values defined in warden.conf (/opt/mapr/conf)
+service.command.<servicename>.heapsize.percent
+service.command.<servicename>.heapsize.max
+service.command.<servicename>.heapsize.min
+This is percentage of memory for that service bounded by min and max values.
+
+There will also be additional files in /opt/mapr/conf/conf.d in format 
+warden.<servicename>.conf. They will have entries like
+service.heapsize.min
+service.heapsize.max
+service.heapsize.percent -->
+
+## Managing Memory
+
+To run Drill in a cluster with MapReduce, HBase, Spark, and other workloads, manage memory according to your application needs. 
+
+To run Drill in a MapR cluster, allocate memory by configuring settings in `warden.conf`, as described in the [MapR documentation]().
+
+### Drill Memory
+You can configure the amount of direct memory allocated to a Drillbit for
+query processing, as described in the section ["Configuring Memory"]({{site.baseurl}}/docs/configuring-memory).
+
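+As a sketch, assuming a release of Drill whose `drill-env.sh` defines the `DRILL_MAX_DIRECT_MEMORY` variable (the variable name may differ by release), you could raise the limit as follows:
+
+    export DRILL_MAX_DIRECT_MEMORY="8G"
+
+After editing `drill-env.sh`, restart the Drillbit on the node.
+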
+### Memory in a MapR Cluster
+On a MapR cluster, Drill and other services that are not associated with roles share memory and disk. You manage the pool of memory for these services through the `os` heap settings in `warden.conf` and in the configuration files of the particular services. The warden `os` heap settings are:
+
+    service.command.os.heapsize.percent
+    service.command.os.heapsize.max
+    service.command.os.heapsize.min
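+
+For example, hypothetical values such as the following cap the memory that warden sets aside for the operating system; the numbers are illustrative only:
+
+    service.command.os.heapsize.percent=10
+    service.command.os.heapsize.max=4000
+    service.command.os.heapsize.min=256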
+
+For more information about managing memory in a MapR cluster, see the following sections in the MapR documentation:
+* [Memory Allocation for Nodes](http://doc.mapr.com/display/MapR40x/Memory+Allocation+for+Nodes)
+* [Cluster Resource Allocation](http://doc.mapr.com/display/MapR40x/Cluster+Resource+Allocation)
+* [Customizing Memory Settings for MapReduce v1](http://doc.mapr.com/display/MapR40x/Customize+Memory+Settings+for+MapReduce+v1)
+
+## How to Manage Drill CPU Resources
+Currently, you do not manage CPU resources within Drill. Use Linux [`cgroups`](http://en.wikipedia.org/wiki/Cgroups) to manage CPU resources.
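+
+As a minimal sketch, assuming a Linux system with the cgroups v1 CPU controller mounted at `/sys/fs/cgroup/cpu` and the Drillbit process ID in `$DRILLBIT_PID` (both assumptions, not Drill requirements), you could confine the Drillbit to a reduced CPU share:
+
+    # Create a cgroup for Drill (hypothetical name).
+    sudo mkdir /sys/fs/cgroup/cpu/drill
+    # Give it half the default weight of 1024.
+    echo 512 | sudo tee /sys/fs/cgroup/cpu/drill/cpu.shares
+    # Move the Drillbit process into the cgroup.
+    echo $DRILLBIT_PID | sudo tee /sys/fs/cgroup/cpu/drill/tasks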
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/015-how-to-run-drill-in-a-cluster.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/015-how-to-run-drill-in-a-cluster.md b/_docs/manage-drill/015-how-to-run-drill-in-a-cluster.md
deleted file mode 100644
index d8609c5..0000000
--- a/_docs/manage-drill/015-how-to-run-drill-in-a-cluster.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: "How to Run Drill in a Cluster"
-parent: "Multitenant and Multi-instance Architectures"
----
-Drill operations are memory and CPU-intensive. You need to statically partition the cluster to designation which partition handles which workload. For example, you have 120G of available memory that you allocate to following workloads in a Yarn-enabled cluster:
-
-File system = 20G  
-HBase = 20G  
-Yarn = 20G  
-OS = 8G  
-
-To add Drill to the cluster, how do you change memory allocation? It depends on your application. If Yarn does most of the work, give Drill 20G, for example, and give Yarn 60G. If you expect a heavy query load, give Drill 60G and Drill 20G.
-
-{% include startnote.html %}Drill will execute queries within Yarn soon.{% include endnote.html %}
-
-For information about Drill and Yarn, see [DRILL-142](https://issues.apache.org/jira/browse/DRILL-142).
-
-## Managing Memory
-
-To run Drill in a cluster with MapReduce, HBase, Spark, and other workloads, manage memory according to your application needs. 
-
-To run Drill in a MapR cluster, allocate memory by configuring settings in warden.conf, as described in the [MapR documentation]().
-
-### Drill Memory
-You can configure the amount of direct memory allocated to a Drillbit for
-query processing, as described in the section, ["Configuring Memory"](({{site.baseurl}}/docs/configuring-memory).
-
-### Memory in a MapR Cluster
-Memory and disk for Drill and other services that are not associated with roles on a MapR cluster are shared with other services. You manage the chunk of memory for these services in os heap settings in `warden.conf` and in configuration files of the particular service. The warden os heap settings are:
-
-    service.command.os.heapsize.percent
-    service.command.os.heapsize.max
-    service.command.os.heapsize.min
-
-For more information about managing memory in a MapR cluster, see the following sections in the MapR documentation:
-* [Memory Allocation for Nodes](http://doc.mapr.com/display/MapR40x/Memory+Allocation+for+Nodes)
-* [Cluster Resource Allocation](http://doc.mapr.com/display/MapR40x/Cluster+Resource+Allocation)
-* [Customizing Memory Settings for MapReduce v1](http://doc.mapr.com/display/MapR40x/Customize+Memory+Settings+for+MapReduce+v1)
-
-## How to Manage Drill CPU Resources
-Currently, you do not manage CPU resources within Drill. [Use Linux `cgroups`](http://en.wikipedia.org/wiki/Cgroups) to manage the CPU resources.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/017-configuring-a-shared-drillbit.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/017-configuring-a-shared-drillbit.md b/_docs/manage-drill/017-configuring-a-shared-drillbit.md
new file mode 100644
index 0000000..0466d3b
--- /dev/null
+++ b/_docs/manage-drill/017-configuring-a-shared-drillbit.md
@@ -0,0 +1,65 @@
+---
+title: "Configuring a Shared Drillbit"
+parent: "Configuring Different Workloads"
+---
+To manage a cluster in which multiple users share a Drillbit, you configure Drill queuing and parallelization.
+
+## Configuring Drill Query Queuing
+
+Set [options in sys.options]({{site.baseurl}}/docs/configuration-options-introduction/) to enable and manage query queuing, which is turned off by default. There are two types of queues: large and small. You configure a maximum number of queries that each queue allows by configuring the following options in the `sys.options` table:
+
+* `exec.queue.large`  
+* `exec.queue.small`  
+
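+For example, the following sketch enables queuing and sizes both queues to match the example below (the `exec.queue.enable` option name is an assumption; verify it in `sys.options`):
+
+    alter system set `exec.queue.enable` = true;
+    alter system set `exec.queue.large` = 5;
+    alter system set `exec.queue.small` = 20;
+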
+### Example Configuration
+
+For example, you configure the queue reserved for large queries to hold a 5-query maximum. You configure the queue reserved for small queries to hold 20 queries. Users start to run queries, and Drill receives the following query requests in this order:
+
+* Query A (blue): 1 billion records, Drill estimates 10 million rows will be processed  
+* Query B (red): 2 billion records, Drill estimates 20 million rows will be processed  
+* Query C: 1 billion records  
+* Query D: 100 records
+
+The `exec.queue.threshold` default is 30 million, measured in the estimated number of rows that a query will process. Queries A and B are queued in the large queue, and their combined estimate reaches the 30 million threshold, filling the queue to capacity. The query C request arrives and goes on the wait list, and then query D arrives. Query D is queued immediately in the small queue because of its small size, as shown in the following diagram: 
+
+![drill queuing]({{ site.baseurl }}/docs/img/queuing.png)
+
+The Drill queuing configuration in this example tends to give many users running small queries a rapid response. Users running a large query might experience some delay until an earlier-received large query returns, freeing space in the large queue to process queries that are waiting.
+
+## Controlling Parallelization
+
+By default, Drill parallelizes operations when the number of records manipulated within a fragment reaches 100,000. When parallelization of operations is high, the cluster operates as fast as possible, which is fine for a single user. In a contentious multi-tenant situation, however, you need to reduce parallelization to levels based on user needs.
+
+### Parallelization Configuration Procedure
+
+To configure parallelization, configure the following options in the `sys.options` table:
+
+* `planner.width.max_per_node`  
+  The maximum degree of distribution of a query across cores and cluster nodes.
+* `planner.width.max_per_query`  
+  Same as max per node, but applies to the query as executed by the entire cluster.
+
+Configure `planner.width.max_per_node` to achieve fine-grained, absolute control over parallelization. 
+
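+For example, the following command caps per-node parallelization at 2, the default value shown in the `sys.options` output elsewhere in these docs:
+
+    alter system set `planner.width.max_per_node` = 2;
+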
+<!-- ??For example, setting the `planner.width.max.per.query` to 60 will not accelerate Drill operations because overlapping does not occur when executing 60 queries at the same time.??
+
+### Example of Configuring Parallelization
+
+For example, the default settings parallelize 70 percent of operations up to 1,000 cores. If you have 30 cores per node in a 10-node cluster, or 300 cores, parallelization occurs on approximately 210 cores. Consequently, a single user can get 70 percent usage from a cluster and no more due to the constraints configured by the `planner.width.max.per.query`.
+
+A parallelizer in the Foreman transforms the physical plan into multiple phases. A complicated query can have multiple, major fragments. A default parallelization of 70 percent of operations allows some overlap of query phases. In the example, 210 ??for each core or major fragment to a maximum of 410??.
+
+??Drill uses pipelines, blocking/nonblocking, memory is not fungible. CPU resources are fungible. There is contention for CPUs.?? -->
+
+## Data Isolation
+
+Tenants can share data on a cluster using Drill views and impersonation. ??Link to impersonation doc.??
+
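+As a sketch, a tenant-specific view with hypothetical workspace and file names can expose only selected columns of shared data:
+
+    create view dfs.tenant1.customers_vw as
+    select cust_id, name, state from dfs.shared.`customers.json`;
+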
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/017-how-multiple-users-share-a-drillbit.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/017-how-multiple-users-share-a-drillbit.md b/_docs/manage-drill/017-how-multiple-users-share-a-drillbit.md
deleted file mode 100644
index 82f020a..0000000
--- a/_docs/manage-drill/017-how-multiple-users-share-a-drillbit.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-title: "How Multiple Users Share a Drillbit"
-parent: "Multitenant and Multi-instance Architectures"
----
-To manage a cluster in which multiple users share a Drillbit, you configure Drill queuing and parallelization.
-
-##Configuring Drill Query Queuing
-
-Set [options in sys.options]({{site.baseurl}}/docs/configuration-options-introduction/) to enable and manage query queuing, which is turned off by default. There are two types of queues: large and small. You configure a maximum number of queries that each queue allows by configuring the following options in the `sys.options` table:
-
-* exec.queue.large  
-* exec.queue.small  
-
-### Example Configuration
-
-For example, you configure the queue reserved for large queries to hold a 5-query maximum. You configure the queue reserved for small queries to hold 20 queries. Users start to run queries, and Drill receives the following query requests in this order:
-
-* Query A (blue): 1 billion records, Drill estimates 10 million rows will be processed  
-* Query B (red): 2 billion records, Drill estimates 20 million rows will be processed  
-* Query C: 1 billion records  
-* Query D: 100 records
-
-The exec.queue.threshold default is 30 million, which is the estimated rows to be processed by the query. Queries A and B are queued in the large queue. The estimated rows to be processed reaches the 30 million threshold, filling the queue to capacity. The query C request arrives and goes on the wait list, and then query D arrives. Query D is queued immediately in the small queue because of its small size, as shown in the following diagram: 
-
-![drill queuing]({{ site.baseurl }}/docs/img/queuing.png)
-
-The Drill queuing configuration in this example tends to give many users running small queries a rapid response. Users running a large query might experience some delay until an earlier-received large query returns, freeing space in the large queue to process queries that are waiting.
-
-## Controlling Parallelization
-
-By default, Drill parallelizes operations when number of records manipulated within a fragment reaches 100,000. When parallelization of operations is high, the cluster operates as fast as possible, which is fine for a single user. In a contentious multi-tenant situation, however, you need to reduce parallelization to levels based on user needs.
-
-### Parallelization Configuration Procedure
-
-To configure parallelization, configure the following options in the `sys.options` table:
-
-* `planner.width.max.per.node`  
-  The maximum degree of distribution of a query across cores and cluster nodes.
-* `planner.width.max.per.query`  
-  Same as max per node but applies to the query as executed by the entire cluster.
-
-Configure the `planner.width.max.per.node` to achieve fine grained, absolute control over parallelization. 
-
-<!-- ??For example, setting the `planner.width.max.per.query` to 60 will not accelerate Drill operations because overlapping does not occur when executing 60 queries at the same time.??
-
-### Example of Configuring Parallelization
-
-For example, the default settings parallelize 70 percent of operations up to 1,000 cores. If you have 30 cores per node in a 10-node cluster, or 300 cores, parallelization occurs on approximately 210 cores. Consequently, a single user can get 70 percent usage from a cluster and no more due to the constraints configured by the `planner.width.max.per.query`.
-
-A parallelizer in the Foreman transforms the physical plan into multiple phases. A complicated query can have multiple, major fragments. A default parallelization of 70 percent of operations allows some overlap of query phases. In the example, 210 ??for each core or major fragment to a maximum of 410??.
-
-??Drill uses pipelines, blocking/nonblocking, memory is not fungible. CPU resources are fungible. There is contention for CPUs.?? -->
-
-## Data Isolation
-
-Tenants can share data on a cluster using Drill views and impersonation. ??Link to impersonation doc.??
-
-
-
-
-
-
-
-
-

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/manage-drill/configuration-options/015-configuring-a-cluster-for-different-workloads.md
----------------------------------------------------------------------
diff --git a/_docs/manage-drill/configuration-options/015-configuring-a-cluster-for-different-workloads.md b/_docs/manage-drill/configuration-options/015-configuring-a-cluster-for-different-workloads.md
deleted file mode 100644
index 23157af..0000000
--- a/_docs/manage-drill/configuration-options/015-configuring-a-cluster-for-different-workloads.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-title: "Configuring a Cluster for Different Workloads"
-parent: "Configuration Options"
----
-In this release of Drill, to configure a Drill cluster for different workloads, you re-allocate memory resources only. To re-allocate memory for services, establish baselines for performance testing, make changes in small increments, test and compare the effects of the change. 
-
-Warden allocates resources for MapR Hadoop services, such as Zookeeper or NFS, associated with roles installed on the node. You modify `warden.conf` to manage the memory allocation. For example, you might modify these default settings:
-
-    service.command.nfs.heapsize.percent=3
-    service.command.nfs.heapsize.min=64
-    service.command.nfs.heapsize.max=64
-    . . .
-    service.command.zk.heapsize.percent=1
-    service.command.zk.heapsize.max=1500
-    service.command.zk.heapsize.min=256
-
-MapR shares memory and disk among services, such as Drill and Impala, that are not associated with roles on the cluster. You manage the memory for these services in multiple files:
-
-* The os heap settings in `warden.conf`
-* Configuration files of the particular service, such as [Drill](#drill-memory-configuration), [Impala](#impala-memory-configuration), and [JobTracker](#jobtracker-memory-configuration) configuration files
-
-The names of os heap settings are:
-
-    service.command.os.heapsize.percent
-    service.command.os.heapsize.max
-    service.command.os.heapsize.min
-
-## Drill Memory Configuration
-You can configure the amount of direct memory allocated to a Drillbit for
-query processing. The default limit is 8G, but Drill prefers 16G or more
-depending on the workload. The total amount of direct memory that a Drillbit
-allocates to query operations cannot exceed the limit set.
-
-Drill mainly uses Java direct memory and performs well when executing
-operations in memory instead of storing the operations on disk. Drill does not
-write to disk unless absolutely necessary, unlike MapReduce where everything
-is written to disk during each phase of a job.
-
-The JVM heap memory does not limit the amount of direct memory available in
-a Drillbit. The on-heap memory for Drill is only about 4-8G, which should
-suffice because Drill avoids having data sit in heap memory.
-
-You can modify memory for each Drillbit node in your cluster. To modify the
-memory for a Drillbit, edit the `XX:MaxDirectMemorySize` parameter in the
-Drillbit startup script located in `<drill_installation_directory>/conf/drill-
-env.sh`.
-
-{% include startnote.html %}If this parameter is not set, the limit depends on the amount of available system memory.{% include endnote.html %}
-
-After you edit `<drill_installation_directory>/conf/drill-env.sh`, [restart
-the Drillbit
-]({{ site.baseurl }}/docs/starting-stopping-drill#starting-a-drillbit)on
-the node.
-
-## Impala Memory Configuration
-
-The configuration service for the Impala service is in the Impala env.sh file.
-
-## JobTracker Memory Configuration
-
-Memory allocated for JobTracker in `warden.conf` is used only to calculate total memory required for services to run. The -Xmx JobTracker itself is not set, allowing memory on JobTracker to grow as needed. If you want to set an upper limit on memory, set the HADOOP_HEAPSIZE env. variable in `/opt/mapr/hadoop/hadoop-0.20.2/conf/hadoop-env.sh`.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/070-sql-commands-summary.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/070-sql-commands-summary.md b/_docs/sql-reference/070-sql-commands-summary.md
index 636cedd..df07d9d 100644
--- a/_docs/sql-reference/070-sql-commands-summary.md
+++ b/_docs/sql-reference/070-sql-commands-summary.md
@@ -1,4 +1,4 @@
 ---
-title: "SQL Commands Summary"
+title: "SQL Commands"
 parent: "SQL Reference"
 ---

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/005-supported-sql-commands.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/005-supported-sql-commands.md b/_docs/sql-reference/sql-commands-summary/005-supported-sql-commands.md
deleted file mode 100644
index 40c4c0f..0000000
--- a/_docs/sql-reference/sql-commands-summary/005-supported-sql-commands.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: Supported SQL Commands
-parent: "SQL Commands Summary"
----
-The following table provides a list of the SQL commands that Drill supports,
-with their descriptions and example syntax:
-
-<table style='table-layout:fixed;width:100%'>
-    <tr><th >Command</th><th >Description</th><th >Syntax</th></tr><tr><td valign="top" width="15%"><a href="/docs/alter-session-command">ALTER SESSION</a></td><td valign="top" width="60%">Changes a system setting for the duration of a session. A session ends when you quit the Drill shell. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options">Planning and Execution Options</a>.</td><td valign="top"><pre>ALTER SESSION SET `&lt;option_name&gt;`=&lt;value&gt;;</pre></td></tr><tr><td valign="top" ><a href="/docs/alter-system-command">ALTER SYSTEM</a></td><td valign="top" >Permanently changes a system setting. The new settings persist across all sessions. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options">Planning and Execution Options</a>.</td><td valign="top" ><pre>ALTER SYSTEM SET `&lt;option_name&gt;`=&lt;value&gt;;</pre></td></tr><tr><td valign="top" ><p><a href="/docs/create-table-as--ctas-command">CREATE TABLE AS<br />(CTAS)</a></p></td><td valign="top" >Creates a new table and populates the new table with rows returned from a SELECT query. Use the CREATE TABLE AS (CTAS) statement in place of INSERT INTO. When you issue the CTAS command, you create a directory that contains parquet or CSV files. Each workspace in a file system has a default file type.<br />You can specify which writer you want Drill to use when creating a table: parquet, CSV, or JSON (as specified with the <code>store.format</code> option).</td><td valign="top" ><pre class="programlisting">CREATE TABLE new_table_name AS &lt;query&gt;;</pre></td></tr><tr><td valign="top" ><a href="/docs/create-view-command">CREATE VIEW </a></td><td valign="top" >Creates a virtual structure for the result set of a stored query.</td><td valign="top" ><pre>CREATE [OR REPLACE] VIEW [workspace.]view_name [ (column_name [, ...]) ] AS &lt;query&gt;;</pre></td></tr><tr><td valign="top" ><a href="/docs/describe-command">DESCRIBE</a></td><td valign="top" >Returns information about columns in a table or view.</td><td valign="top" ><pre>DESCRIBE [workspace.]table_name|view_name</pre></td></tr><tr><td valign="top" ><a href="/docs/drop-view-command">DROP VIEW</a></td><td valign="top" >Removes a view.</td><td valign="top" ><pre>DROP VIEW [workspace.]view_name ;</pre></td></tr><tr><td valign="top" ><a href="/docs/explain-commands">EXPLAIN PLAN FOR</a></td><td valign="top" >Returns the physical plan for a particular query.</td><td valign="top" ><pre>EXPLAIN PLAN FOR &lt;query&gt;;</pre></td></tr><tr><td valign="top" ><a href="/docs/explain-commands">EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR</a></td><td valign="top" >Returns the logical plan for a particular query.</td><td valign="top" ><pre>EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR &lt;query&gt;;</pre></td></tr><tr><td colspan="1" valign="top" ><a href="/docs/select-statements" rel="nofollow">SELECT</a></td><td valign="top" >Retrieves data from tables and files.</td><td valign="top" ><pre>[WITH subquery]<br />SELECT column_list FROM table_name <br />[WHERE clause]<br />[GROUP BY clause]<br />[HAVING clause]<br />[ORDER BY clause];</pre></td></tr><tr><td valign="top" ><a href="/docs/show-databases-and-show-schemas-commands">SHOW DATABASES </a></td><td valign="top" >Returns a list of available schemas. Equivalent to SHOW SCHEMAS.</td><td valign="top" ><pre>SHOW DATABASES;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-files-command" >SHOW FILES</a></td><td valign="top" >Returns a list of files in a file system schema.</td><td valign="top" ><pre>SHOW FILES IN filesystem.`schema_name`;<br />SHOW FILES FROM filesystem.`schema_name`;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-databases-and-show-schemas-commands">SHOW SCHEMAS</a></td><td valign="top" >Returns a list of available schemas. Equivalent to SHOW DATABASES.</td><td valign="top" ><pre>SHOW SCHEMAS;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-tables-command">SHOW TABLES</a></td><td valign="top" >Returns a list of tables and views.</td><td valign="top" ><pre>SHOW TABLES;</pre></td></tr><tr><td valign="top" ><a href="/docs/use-command">USE</a></td><td valign="top" >Change to a particular schema. When you opt to use a particular schema, Drill issues queries on that schema only.</td><td valign="top" ><pre>USE schema_name;</pre></td></tr></table>

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/010-alter-session-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/010-alter-session-command.md b/_docs/sql-reference/sql-commands-summary/010-alter-session-command.md
deleted file mode 100644
index 8077930..0000000
--- a/_docs/sql-reference/sql-commands-summary/010-alter-session-command.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: "ALTER SESSION Command"
-parent: "SQL Commands Summary"
----
-The ALTER SESSION command changes a system setting for the duration of a
-session. Session level settings override system level settings.
-
-## Syntax
-
-The ALTER SESSION command supports the following syntax:
-
-    ALTER SESSION SET `<option_name>`=<value>;
-
-## Parameters
-
-*option_name*  
-This is the option name as it appears in the systems table.
-
-*value*  
-A value of the type listed in the sys.options table: number, string, boolean,
-or float. Use the appropriate value type for each option that you set.
-
-## Usage Notes
-
-Use the ALTER SESSION command to set Drill query planning and execution
-options per session in a cluster. The options that you set using the ALTER
-SESSION command only apply to queries that run during the current Drill
-connection. A session ends when you quit the Drill shell. You can set any of
-the system level options at the session level.
-
-You can run the following query to see a complete list of planning and
-execution options that are currently set at the system or session level:
-
-    0: jdbc:drill:zk=local> SELECT name, type FROM sys.options WHERE type in ('SYSTEM','SESSION') order by name;
-    +------------+----------------------------------------------+
-    |   name                                       |    type    |
-    +----------------------------------------------+------------+
-    | drill.exec.functions.cast_empty_string_to_null | SYSTEM   |
-    | drill.exec.storage.file.partition.column.label | SYSTEM   |
-    | exec.errors.verbose                          | SYSTEM     |
-    | exec.java_compiler                           | SYSTEM     |
-    | exec.java_compiler_debug                     | SYSTEM     |
-    …
-    +------------+----------------------------------------------+
-
-{% include startnote.html %}This is a truncated version of the list.{% include endnote.html %}
-
-## Example
-
-This example demonstrates how to use the ALTER SESSION command to set the
-`store.json.all_text_mode` option to “true” for the current Drill session.
-Setting this option to “true” enables text mode so that Drill reads everything
-in JSON as a text object instead of trying to interpret data types. This
-allows complicated JSON to be read using CASE and CAST.
-
-    0: jdbc:drill:zk=local> alter session set `store.json.all_text_mode`= true;
-    +------------+------------+
-    |   ok  |  summary   |
-    +------------+------------+
-    | true      | store.json.all_text_mode updated. |
-    +------------+------------+
-    1 row selected (0.046 seconds)
-
-You can issue a query to see all of the session level settings. Note that the
-option type is case-sensitive.
-
-    0: jdbc:drill:zk=local> SELECT name, type, bool_val FROM sys.options WHERE type = 'SESSION' order by name;
-    +------------+------------+------------+
-    |   name    |   type    |  bool_val  |
-    +------------+------------+------------+
-    | store.json.all_text_mode | SESSION    | true      |
-    +------------+------------+------------+
-    1 row selected (0.176 seconds)
-

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/020-alter-system.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/020-alter-system.md b/_docs/sql-reference/sql-commands-summary/020-alter-system.md
deleted file mode 100644
index 7e8200f..0000000
--- a/_docs/sql-reference/sql-commands-summary/020-alter-system.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-title: "ALTER SYSTEM Command"
-parent: "SQL Commands Summary"
----
-The ALTER SYSTEM command permanently changes a system setting. The new setting
-persists across all sessions. Session level settings override system level
-settings.
-
-## Syntax
-
-The ALTER SYSTEM command supports the following syntax:
-
-    ALTER SYSTEM SET `<option_name>`=<value>;
-
-## Parameters
-
-*option_name*
-
-This is the option name as it appears in the systems table.
-
-_value_
-
-A value of the type listed in the sys.options table: number, string, boolean,
-or float. Use the appropriate value type for each option that you set.
-
-## Usage Notes
-
-Use the ALTER SYSTEM command to permanently set Drill query planning and
-execution options per cluster. Options set at the system level affect the
-entire system and persist between restarts.
-
-You can run the following query to see a complete list of planning and
-execution options that you can set at the system level:
-
-    0: jdbc:drill:zk=local> select name, type, num_val, string_val, bool_val, float_val from sys.options where type like 'SYSTEM' order by name;
-    +------------+------------+------------+------------+------------+------------+
-    |    name    |    type    |  num_val   | string_val |  bool_val  | float_val  |
-    +------------+------------+------------+------------+------------+------------+
-    | drill.exec.functions.cast_empty_string_to_null | SYSTEM     | null       | null       | false      | null       |
-    | drill.exec.storage.file.partition.column.label | SYSTEM     | null       | dir        | null       | null       |
-    | exec.errors.verbose | SYSTEM     | null       | null       | false      | null       |
-    | exec.java_compiler | SYSTEM     | null       | DEFAULT    | null       | null       |
-    | exec.java_compiler_debug | SYSTEM     | null       | null       | true       | null       |
-    | exec.java_compiler_janino_maxsize | SYSTEM     | 262144     | null       | null       | null       |
-    | exec.queue.timeout_millis | SYSTEM     | 400000     | null       | null       | null       |
-    | planner.add_producer_consumer | SYSTEM     | null       | null       | true       | null       |
-    | planner.affinity_factor | SYSTEM     | null       | null       | null       | 1.2        |
-    | planner.broadcast_threshold | SYSTEM     | 1000000    | null       | null       | null       |
-    | planner.disable_exchanges | SYSTEM     | null       | null       | false      | null       |
-    | planner.enable_broadcast_join | SYSTEM     | null       | null       | true       | null       |
-    | planner.enable_hash_single_key | SYSTEM     | null       | null       | true       | null       |
-    | planner.enable_hashagg | SYSTEM     | null       | null       | true       | null       |
-    | planner.enable_hashjoin | SYSTEM     | null       | null       | true       | null       |
-    | planner.slice_target | SYSTEM     | 100000     | null       | null       | null       |
-    | planner.width.max_per_node | SYSTEM     | 2          | null       | null       | null       |
-    | planner.width.max_per_query | SYSTEM     | 1000       | null       | null       | null       |
-    | store.format | SYSTEM     | null       | parquet    | null       | null       |
-    | store.json.all_text_mode | SYSTEM     | null       | null       | false      | null       |
-    | store.mongo.all_text_mode | SYSTEM     | null       | null       | false      | null       |
-    | store.parquet.block-size | SYSTEM     | 536870912  | null       | null       | null       |
-    | store.parquet.use_new_reader | SYSTEM     | null       | null       | false      | null       |
-    | store.parquet.vector_fill_check_threshold | SYSTEM     | 10         | null       | null       | null       |
-    | store.parquet.vector_fill_threshold | SYSTEM     | 85         | null       | null       | null       |
-    +------------+------------+------------+------------+------------+------------+
-
-{% include startnote.html %}This is a truncated version of the list.{% include endnote.html %}
-
-## Example
-
-This example demonstrates how to use the ALTER SYSTEM command to set the
-`planner.add_producer_consumer` option to “true.” This option enables a
-secondary reading thread to prefetch data from disk.
-
-    0: jdbc:drill:zk=local> alter system set `planner.add_producer_consumer` = true;
-    +------------+------------+
-    |   ok  |  summary   |
-    +------------+------------+
-    | true      | planner.add_producer_consumer updated. |
-    +------------+------------+
-    1 row selected (0.046 seconds)
-
-You can issue a query to see all of the system level settings set to “true.”
-Note that the option type is case-sensitive.
-
-    0: jdbc:drill:zk=local> SELECT name, type, bool_val FROM sys.options WHERE type = 'SYSTEM' and bool_val=true;
-    +------------+------------+------------+
-    |   name    |   type    |  bool_val  |
-    +------------+------------+------------+
-    | exec.java_compiler_debug | SYSTEM     | true      |
-    | planner.enable_mergejoin | SYSTEM     | true      |
-    | planner.enable_broadcast_join | SYSTEM    | true      |
-    | planner.enable_hashagg | SYSTEM   | true      |
-    | planner.add_producer_consumer | SYSTEM    | true      |
-    | planner.enable_hash_single_key | SYSTEM   | true      |
-    | planner.enable_multiphase_agg | SYSTEM    | true      |
-    | planner.enable_streamagg | SYSTEM     | true      |
-    | planner.enable_hashjoin | SYSTEM  | true      |
-    +------------+------------+------------+
-    9 rows selected (0.159 seconds)
-
-  
-

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/030-create-table-as-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/030-create-table-as-command.md b/_docs/sql-reference/sql-commands-summary/030-create-table-as-command.md
deleted file mode 100644
index 573c295..0000000
--- a/_docs/sql-reference/sql-commands-summary/030-create-table-as-command.md
+++ /dev/null
@@ -1,134 +0,0 @@
----
-title: "CREATE TABLE AS (CTAS) command"
-parent: "SQL Commands Summary"
----
-You can create tables in Drill by using the CTAS command:
-
-    CREATE TABLE new_table_name AS <query>;
-
-where query is any valid Drill query. Each table you create must have a unique
-name. You can include an optional column list for the new table. For example:
-
-    create table logtable(transid, prodid) as select transaction_id, product_id from ...
-
-You can store table data in one of three formats:
-
-  * csv
-  * parquet
-  * json
-
-The parquet and json formats can be used to store complex data.
-
-To set the output format for a Drill table, set the `store.format` option with
-the ALTER SYSTEM or ALTER SESSION command. For example:
-
-    alter session set `store.format`='json';
-
-Table data is stored in the location specified by the workspace that is in use
-when you run the CTAS statement. By default, a directory is created, using the
-exact table name specified in the CTAS statement. A .json, .csv, or .parquet
-file inside that directory contains the data.
-
-You can only create new tables in workspaces. You cannot create tables in
-other storage plugins such as Hive and HBase.
-
-You must use a writable (mutable) workspace when creating Drill tables. For
-example:
-
-	"tmp": {
-	      "location": "/tmp",
-	      "writable": true,
-	       }
-
-## Example
-
-The following query returns one row from a JSON file:
-
-	0: jdbc:drill:zk=local> select id, type, name, ppu
-	from dfs.`/Users/brumsby/drill/donuts.json`;
-	+------------+------------+------------+------------+
-	|     id     |    type    |    name    |    ppu     |
-	+------------+------------+------------+------------+
-	| 0001       | donut      | Cake       | 0.55       |
-	+------------+------------+------------+------------+
-	1 row selected (0.248 seconds)
-
-To create and verify the contents of a table that contains this row:
-
-  1. Set the workspace to a writable workspace.
-  2. Set the `store.format` option appropriately.
-  3. Run a CTAS statement that contains the query.
-  4. Go to the directory where the table is stored and check the contents of the file.
-  5. Run a query against the new table.
-
-The following sqlline output captures this sequence of steps.
-
-### Workspace Definition
-
-	"tmp": {
-	      "location": "/tmp",
-	      "writable": true,
-	       }
-
-### ALTER SESSION Command
-
-    alter session set `store.format`='json';
-
-### USE Command
-
-	0: jdbc:drill:zk=local> use dfs.tmp;
-	+------------+------------+
-	|     ok     |  summary   |
-	+------------+------------+
-	| true       | Default schema changed to 'dfs.tmp' |
-	+------------+------------+
-	1 row selected (0.03 seconds)
-
-### CTAS Command
-
-	0: jdbc:drill:zk=local> create table donuts_json as
-	select id, type, name, ppu from dfs.`/Users/brumsby/drill/donuts.json`;
-	+------------+---------------------------+
-	|  Fragment  | Number of records written |
-	+------------+---------------------------+
-	| 0_0        | 1                         |
-	+------------+---------------------------+
-	1 row selected (0.107 seconds)
-
-### File Contents
-
-	administorsmbp7:tmp brumsby$ pwd
-	/tmp
-	administorsmbp7:tmp brumsby$ cd donuts_json
-	administorsmbp7:donuts_json brumsby$ more 0_0_0.json
-	{
-	 "id" : "0001",
-	  "type" : "donut",
-	  "name" : "Cake",
-	  "ppu" : 0.55
-	}
-
-### Query Against New Table
-
-	0: jdbc:drill:zk=local> select * from donuts_json;
-	+------------+------------+------------+------------+
-	|     id     |    type    |    name    |    ppu     |
-	+------------+------------+------------+------------+
-	| 0001       | donut      | Cake       | 0.55       |
-	+------------+------------+------------+------------+
-	1 row selected (0.053 seconds)
-
-### Use a Different Output Format
-
-You can run the same sequence again with a different storage format set for
-the system or session (csv or parquet). For example, if the format is set to
-csv, and you name the table donuts_csv, the resulting file would look like
-this:
-
-	administorsmbp7:tmp brumsby$ cd donuts_csv
-	administorsmbp7:donuts_csv brumsby$ ls
-	0_0_0.csv
-	administorsmbp7:donuts_csv brumsby$ more 0_0_0.csv
-	id,type,name,ppu
-	0001,donut,Cake,0.55
-

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/050-create-view-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/050-create-view-command.md b/_docs/sql-reference/sql-commands-summary/050-create-view-command.md
deleted file mode 100644
index 0d9361c..0000000
--- a/_docs/sql-reference/sql-commands-summary/050-create-view-command.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: "CREATE VIEW command"
-parent: "SQL Commands Summary"
----
-The CREATE VIEW command creates a virtual structure for the result set of a
-stored query. A view can combine data from multiple underlying data sources
-and provide the illusion that all of the data is from one source. You can use
-views to protect sensitive data, for data aggregation, and to hide data
-complexity from users. You can create Drill views from files in your local and
-distributed file systems, Hive, HBase, and MapR-DB tables, as well as from
-existing views or any other available storage plugin data sources.
-
-## Syntax
-
-The CREATE VIEW command supports the following syntax:
-
-    CREATE [OR REPLACE] VIEW [workspace.]view_name [ (column_name [, ...]) ] AS <query>;
-
-Use CREATE VIEW to create a new view. Use CREATE OR REPLACE VIEW to replace an
-existing view with the same name. When you replace a view, the query must
-generate the same set of columns with the same column names and data types.
-
-**Note:** Follow Drill’s rules for identifiers when you name the view. See coming soon...
-
-## Parameters
-
-_workspace_  
-The location where you want the view to exist. By default, the view is created
-in the current workspace. See
-[Workspaces]({{ site.baseurl }}/docs/Workspaces).
-
-_view_name_  
-The name that you give the view. The view must have a unique name. It cannot
-have the same name as any other view or table in the workspace.
-
-_column_name_  
-Optional list of column names in the view. If you do not supply column names,
-they are derived from the query.
-
-_query_  
-A SELECT statement that defines the columns and rows in the view.
-
-## Usage Notes
-
-### Storage
-
-Drill stores views in the location specified by the workspace that you use
-when you run the CREATE VIEW command. If the workspace is not defined, Drill
-creates the view in the current workspace. You must use a writable workspace
-when you create a view. Currently, Drill only supports views created in the
-file system or distributed file system.
-
-The following example shows a writable workspace as defined within the storage
-plugin in the `/tmp` directory of the file system:
-
-    "tmp": {
-          "location": "/tmp",
-          "writable": true,
-           }
-
-Drill stores the view definition in JSON format with the name that you specify
-when you run the CREATE VIEW command, suffixed `by .view.drill`. For example,
-if you create a view named `myview`, Drill stores the view in the designated
-workspace as `myview.view.drill`.
-
-Data Sources
-
-Drill considers data sources to have either a strong schema or a weak schema.  
-
-##### Strong Schema
-
-With the exception of text file data sources, Drill verifies that data sources
-associated with a strong schema contain data types compatible with those used
-in the query. Drill also verifies that the columns referenced in the query
-exist in the underlying data sources. If the columns do not exist, CREATE VIEW
-fails.
-
-#### Weak Schema
-
-Drill does not verify that data sources associated with a weak schema contain
-data types compatible with those used in the query. Drill does not verify if
-columns referenced in a query on a Parquet data source exist, therefore CREATE
-VIEW always succeeds. In the case of JSON files, Drill does not verify if the
-files contain the maps specified in the view.
-
-The following table lists the current categories of schema and the data
-sources associated with each:
-
-<table>
-  <tr>
-    <th></th>
-    <th>Strong Schema</th>
-    <th>Weak Schema</th>
-  </tr>
-  <tr>
-    <td valign="top">Data Sources</td>
-    <td>views<br>hive tables<br>hbase column families<br>text</td>
-    <td>json<br>mongodb<br>hbase column qualifiers<br>parquet</td>
-  </tr>
-</table>
-  
-## Related Commands
-
-After you create a view using the CREATE VIEW command, you can issue the
-following commands against the view:
-
-  * SELECT 
-  * DESCRIBE 
-  * DROP 
-
-{% include startnote.html %}You cannot update, insert into, or delete from a view.{% include endnote.html %}
-
-## Example
-
-This example shows you some steps that you can follow when you want to create
-a view in Drill using the CREATE VIEW command. A workspace named “donuts” was
-created for the steps in this example.
-
-Complete the following steps to create a view in Drill:
-
-  1. Decide which workspace you will use to create the view, and verify that the writable option is set to “true.” You can use an existing workspace, or you can create a new workspace. See [Workspaces](https://cwiki.apache.org/confluence/display/DRILL/Workspaces) for more information.  
-  
-        "workspaces": {
-           "donuts": {
-             "location": "/home/donuts",
-             "writable": true,
-             "defaultInputFormat": null
-           }
-         },
-
-  2. Run SHOW DATABASES to verify that Drill recognizes the workspace.  
-
-        0: jdbc:drill:zk=local> show databases;
-        +-------------+
-        | SCHEMA_NAME |
-        +-------------+
-        | dfs.default |
-        | dfs.root  |
-        | dfs.donuts  |
-        | dfs.tmp   |
-        | cp.default  |
-        | sys       |
-        | INFORMATION_SCHEMA |
-        +-------------+
-
-  3. Use the writable workspace.  
-
-        0: jdbc:drill:zk=local> use dfs.donuts;
-        +------------+------------+
-        |     ok    |  summary   |
-        +------------+------------+
-        | true      | Default schema changed to 'dfs.donuts' |
-        +------------+------------+
-
-  4. Test run the query that you plan to use with the CREATE VIEW command.  
-
-        0: jdbc:drill:zk=local> select id, type, name, ppu from `donuts.json`;
-        +------------+------------+------------+------------+
-        |     id    |   type    |   name    |    ppu    |
-        +------------+------------+------------+------------+
-        | 0001      | donut      | Cake     | 0.55      |
-        +------------+------------+------------+------------+
-
-  5. Run the CREATE VIEW command with the query.  
-
-        0: jdbc:drill:zk=local> create view mydonuts as select id, type, name, ppu from `donuts.json`;
-        +------------+------------+
-        |     ok    |  summary   |
-        +------------+------------+
-        | true      | View 'mydonuts' created successfully in 'dfs.donuts' schema |
-        +------------+------------+
-
-  6. Create a new view in another workspace from the current workspace.  
-
-        0: jdbc:drill:zk=local> create view dfs.tmp.yourdonuts as select id, type, name from `donuts.json`;
-        +------------+------------+
-        |   ok  |  summary   |
-        +------------+------------+
-        | true      | View 'yourdonuts' created successfully in 'dfs.tmp' schema |
-        +------------+------------+
-
-  7. Query the view created in both workspaces.
-
-        0: jdbc:drill:zk=local> select * from mydonuts;
-        +------------+------------+------------+------------+
-        |     id    |   type    |   name    |    ppu    |
-        +------------+------------+------------+------------+
-        | 0001      | donut      | Cake     | 0.55      |
-        +------------+------------+------------+------------+
-         
-         
-        0: jdbc:drill:zk=local> select * from dfs.tmp.yourdonuts;
-        +------------+------------+------------+
-        |   id  |   type    |   name    |
-        +------------+------------+------------+
-        | 0001      | donut     | Cake      |
-        +------------+------------+------------+

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/060-describe-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/060-describe-command.md b/_docs/sql-reference/sql-commands-summary/060-describe-command.md
deleted file mode 100644
index b9eb4f6..0000000
--- a/_docs/sql-reference/sql-commands-summary/060-describe-command.md
+++ /dev/null
@@ -1,99 +0,0 @@
----
-title: "DESCRIBE Command"
-parent: "SQL Commands Summary"
----
-The DESCRIBE command returns information about columns in a table or view.
-
-## Syntax
-
-The DESCRIBE command supports the following syntax:
-
-    DESCRIBE [workspace.]table_name|view_name
-
-## Usage Notes
-
-You can issue the DESCRIBE command against views created in a workspace and
-tables created in Hive, HBase, and MapR-DB. You can issue the DESCRIBE command
-on a table or view from any schema. For example, if you are working in the
-`dfs.myworkspace` schema, you can issue the DESCRIBE command on a view or
-table in another schema. Currently, DESCRIBE does not support tables created
-in a file system.
-
-Drill only supports SQL data types. Verify that all data types in an external
-data source, such as Hive or HBase, map to supported data types in Drill. See
-Drill Data Type Mapping for more information.
-
-## Example
-
-The following example demonstrates how to use the DESCRIBE command to see
-column information for a view and for Hive and HBase tables.
-
-Complete the following steps to use the DESCRIBE command:
-
-  1. Issue the USE command to switch to a particular schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> use hive;
-        +------+----------------------------------+
-        | ok   | summary                          |
-        +------+----------------------------------+
-        | true | Default schema changed to 'hive' |
-        +------+----------------------------------+
-        1 row selected (0.025 seconds)
-
-  2. Issue the SHOW TABLES command to see the existing tables in the schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> show tables;
-        +--------------+------------+
-        | TABLE_SCHEMA | TABLE_NAME |
-        +--------------+------------+
-        | hive.default | orders     |
-        | hive.default | products   |
-        +--------------+------------+
-        2 rows selected (0.438 seconds)
-
-  3. Issue the DESCRIBE command on a table.
-
-        0: jdbc:drill:zk=drilldemo:5181> describe orders;
-        +-------------+------------+-------------+
-        | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
-        +-------------+------------+-------------+
-        | order_id    | BIGINT     | YES         |
-        | month       | VARCHAR    | YES         |
-        | purchdate   | TIMESTAMP  | YES         |
-        | cust_id     | BIGINT     | YES         |
-        | state       | VARCHAR    | YES         |
-        | prod_id     | BIGINT     | YES         |
-        | order_total | INTEGER    | YES         |
-        +-------------+------------+-------------+
-        7 rows selected (0.64 seconds)
-
-  4. From the current schema, issue the DESCRIBE command on a table in another schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> describe hbase.customers;
-        +-------------+-----------------------+-------------+
-        | COLUMN_NAME | DATA_TYPE             | IS_NULLABLE |
-        +-------------+-----------------------+-------------+
-        | row_key     | ANY                   | NO          |
-        | address     | (VARCHAR(1), ANY) MAP | NO          |
-        | loyalty     | (VARCHAR(1), ANY) MAP | NO          |
-        | personal    | (VARCHAR(1), ANY) MAP | NO          |
-        +-------------+-----------------------+-------------+
-        4 rows selected (0.671 seconds)
-
-  5. From the current schema, issue the DESCRIBE command on a view in another schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> describe dfs.views.customers_vw;
-        +-------------+------------+-------------+
-        | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
-        +-------------+------------+-------------+
-        | cust_id     | BIGINT     | NO          |
-        | name        | VARCHAR    | NO          |
-        | address     | VARCHAR    | NO          |
-        | gender      | VARCHAR    | NO          |
-        | age         | VARCHAR    | NO          |
-        | agg_rev     | VARCHAR    | NO          |
-        | membership  | VARCHAR    | NO          |
-        +-------------+------------+-------------+
-        7 rows selected (0.403 seconds)
-

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/070-explain-commands.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/070-explain-commands.md b/_docs/sql-reference/sql-commands-summary/070-explain-commands.md
deleted file mode 100644
index 8ce6432..0000000
--- a/_docs/sql-reference/sql-commands-summary/070-explain-commands.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-title: "EXPLAIN commands"
-parent: "SQL Commands Summary"
----
-EXPLAIN is a useful tool for examining the steps that a query goes through
-when it is executed. You can use the EXPLAIN output to gain a deeper
-understanding of the parallel processing that Drill queries exploit. You can
-also look at costing information, troubleshoot performance issues, and
-diagnose routine errors that may occur when you run queries.
-
-Drill provides two variations on the EXPLAIN command, one that returns the
-physical plan and one that returns the logical plan. A logical plan takes the
-SQL query (as written by the user and accepted by the parser) and translates
-it into a logical series of operations that correspond to SQL language
-constructs (without defining the specific algorithms that will be implemented
-to run the query). A physical plan translates the logical plan into a specific
-series of steps that will be used when the query runs. For example, a logical
-plan may indicate a join step in general and classify it as inner or outer,
-but the corresponding physical plan will indicate the specific type of join
-operator that will run, such as a merge join or a hash join. The physical plan
-is operational and reveals the specific _access methods_ that will be used for
-the query.
-
-An EXPLAIN command for a query that is run repeatedly under the exact same
-conditions against the same data returns the same plan. However, if you change
-a configuration option or update the tables or files that you are selecting
-from, for example, the plan is likely to change.
-
-## EXPLAIN Syntax
-
-The EXPLAIN command supports the following syntax:
-
-    explain plan [ including all attributes ] [ with implementation | without implementation ] for <query> ;
-
-where `query` is any valid SELECT statement supported by Drill.
-
-#### INCLUDING ALL ATTRIBUTES
-
-This option returns costing information. You can use this option for both
-physical and logical plans.
-
-#### WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION
-
-These options return the physical and logical plan information, respectively.
-The default is physical (WITH IMPLEMENTATION).
-
-## EXPLAIN for Physical Plans
-
-The `EXPLAIN PLAN FOR <query>` command returns the chosen physical execution
-plan for a query statement without running the query. You can use this command
-to see what kind of execution operators Drill implements. For example, you can
-find out what kind of join algorithm is chosen when tables or files are
-joined. You can also use this command to analyze errors and troubleshoot
-queries that do not run. For example, if you run into a casting error, the
-query plan text may help you isolate the problem.
-
-Use the following syntax:
-
-    explain plan for <query> ;
-
-The following `!set` command increases the maximum width of the displayed
-text (number of characters). At the default width, most of the plan output
-is truncated.
-
-    0: jdbc:drill:zk=local> !set maxwidth 10000
-
-Do not use a semicolon to terminate `!set` commands.
-
-For example, here is the top portion of the explain output for a
-COUNT(DISTINCT) query on a JSON file:
-
-    0: jdbc:drill:zk=local> !set maxwidth 10000
-	0: jdbc:drill:zk=local> explain plan for select type t, count(distinct id) from dfs.`/home/donuts/donuts.json` where type='donut' group by type;
-	+------------+------------+
-	| text       | json       |
-	+------------+------------+
-	| 00-00 Screen
-	00-01   Project(t=[$0], EXPR$1=[$1])
-	00-02       Project(t=[$0], EXPR$1=[$1])
-	00-03       HashAgg(group=[{0}], EXPR$1=[COUNT($1)])
-	00-04           HashAgg(group=[{0, 1}])
-	00-05           SelectionVectorRemover
-	00-06               Filter(condition=[=($0, 'donut')])
-	00-07               Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`type`, `id`], files=[file:/home/donuts/donuts.json]]])...
-	...
-
-Read the text output from bottom to top to understand the sequence of
-operators that will execute the query. Note that the physical plan starts with
-a scan of the JSON file that is being queried. The selected columns are
-projected and filtered, then the aggregate function is applied.
-
-The EXPLAIN text output is followed by detailed JSON output, which is reusable
-for submitting the query via Drill APIs.
-
-	| {
-	  "head" : {
-	    "version" : 1,
-	    "generator" : {
-	      "type" : "ExplainHandler",
-	      "info" : ""
-	    },
-	    "type" : "APACHE_DRILL_PHYSICAL",
-	    "options" : [ ],
-	    "queue" : 0,
-	    "resultMode" : "EXEC"
-	  },
-	....
-
-## Costing Information
-
-Add the INCLUDING ALL ATTRIBUTES option to the EXPLAIN command to see cost
-estimates for the query plan. For example:
-
-	0: jdbc:drill:zk=local> !set maxwidth 10000
-	0: jdbc:drill:zk=local> explain plan including all attributes for select * from dfs.`/home/donuts/donuts.json` where type='donut';
-	+------------+------------+
-	| text       | json       |
-	+------------+------------+
-	| 00-00 Screen: rowcount = 1.0, cumulative cost = {5.1 rows, 21.1 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 889
-	00-01   Project(*=[$0]): rowcount = 1.0, cumulative cost = {5.0 rows, 21.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 888
-	00-02       Project(T1¦¦*=[$0]): rowcount = 1.0, cumulative cost = {4.0 rows, 17.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 887
-	00-03       SelectionVectorRemover: rowcount = 1.0, cumulative cost = {3.0 rows, 13.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 886
-	00-04           Filter(condition=[=($1, 'donut')]): rowcount = 1.0, cumulative cost = {2.0 rows, 12.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 885
-	00-05           Project(T1¦¦*=[$0], type=[$1]): rowcount = 1.0, cumulative cost = {1.0 rows, 8.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 884
-	00-06               Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`*`], files=[file:/home/donuts/donuts.json]]]): rowcount = 1.0, cumulative cost = {0.0 rows, 0.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 883
-
-## EXPLAIN for Logical Plans
-
-To return the logical plan for a query (again, without actually running the
-query), use the EXPLAIN PLAN WITHOUT IMPLEMENTATION syntax:
-
-    explain plan without implementation for <query> ;
-
-For example:
-
-	0: jdbc:drill:zk=local> explain plan without implementation for select type t, count(distinct id) from dfs.`/home/donuts/donuts.json` where type='donut' group by type;
-	+------------+------------+
-	| text       | json       |
-	+------------+------------+
-	| DrillScreenRel
-	  DrillProjectRel(t=[$0], EXPR$1=[$1])
-	    DrillAggregateRel(group=[{0}], EXPR$1=[COUNT($1)])
-	    DrillAggregateRel(group=[{0, 1}])
-	        DrillFilterRel(condition=[=($0, 'donut')])
-	        DrillScanRel(table=[[dfs, /home/donuts/donuts.json]], groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`type`, `id`], files=[file:/home/donuts/donuts.json]]])
-	  | {
-	  "head" : {
-	    "version" : 1,
-	    "generator" : {
-	    "type" : "org.apache.drill.exec.planner.logical.DrillImplementor",
-	    "info" : ""
-	    },
-	    "type" : "APACHE_DRILL_LOGICAL",
-	    "options" : null,
-	    "queue" : 0,
-	    "resultMode" : "LOGICAL"
-	  },...

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/080-select.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/080-select.md b/_docs/sql-reference/sql-commands-summary/080-select.md
deleted file mode 100644
index 4ffd4b3..0000000
--- a/_docs/sql-reference/sql-commands-summary/080-select.md
+++ /dev/null
@@ -1,178 +0,0 @@
----
-title: "SELECT Statements"
-parent: "SQL Commands Summary"
----
-Drill supports the following ANSI standard clauses in the SELECT statement:
-
-  * WITH clause
-  * SELECT list
-  * FROM clause
-  * WHERE clause
-  * GROUP BY clause
-  * HAVING clause
-  * ORDER BY clause (with an optional LIMIT clause)
-
-You can use the same SELECT syntax in the following commands:
-
-  * CREATE TABLE AS (CTAS)
-  * CREATE VIEW
-
-INSERT INTO SELECT is not yet supported.
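-
-For example, the same SELECT can back either a CTAS statement or a view. The
-following sketch assumes a writable `dfs.tmp` workspace and the bundled
-`employee.json` file; the table and view names are illustrative:
-
-    create table dfs.tmp.emp_names as
-      select full_name, position_title from cp.`employee.json`;
-
-    create view dfs.tmp.emp_names_vw as
-      select full_name, position_title from cp.`employee.json`;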
-
-## Column Aliases
-
-You can use named column aliases in the SELECT list to provide meaningful
-names for regular columns and computed columns, such as the results of
-aggregate functions. See the section on running queries for examples.
-
-You cannot reference column aliases in the following clauses:
-
-  * WHERE
-  * GROUP BY
-  * HAVING
-
-Because Drill works with schema-less data sources, you cannot use positional
-aliases (1, 2, etc.) to refer to SELECT list columns, except in the ORDER BY
-clause.
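-
-For example, the following sketch, which assumes the bundled `employee.json`
-file, defines a named alias that can be referenced in the ORDER BY clause but
-not in the WHERE clause:
-
-    select full_name, salary * 12 as annual_salary
-    from cp.`employee.json`
-    order by annual_salary desc;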
-
-## UNION ALL Set Operator
-
-Drill supports the UNION ALL set operator to combine two result sets. The
-distinct UNION operator is not yet supported.
-
-The EXCEPT, EXCEPT ALL, INTERSECT, and INTERSECT ALL operators are not yet
-supported.
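-
-For example, the following sketch combines the results of two SELECT
-statements on the bundled `employee.json` file:
-
-    select full_name from cp.`employee.json` where position_title = 'President'
-    union all
-    select full_name from cp.`employee.json` where position_title = 'Store Manager';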
-
-## Joins
-
-Drill supports ANSI standard joins in the FROM and WHERE clauses:
-
-  * Inner joins
-  * Left, full, and right outer joins
-
-The following types of join syntax are supported:
-
-Join type| Syntax  
----|---  
-Join condition in WHERE clause|FROM table1, table2 WHERE table1.col1=table2.col1  
-USING join in FROM clause|FROM table1 JOIN table2 USING(col1, ...)  
-ON join in FROM clause|FROM table1 JOIN table2 ON table1.col1=table2.col1  
-NATURAL JOIN in FROM clause|FROM table1 NATURAL JOIN table2  
-
-Cross-joins are not yet supported. You must specify a join condition when more
-than one table is listed in the FROM clause.
-
-Non-equijoins are supported if the join also contains an equality condition on
-the same two tables as part of a conjunction:
-
-    table1.col1 = table2.col1 AND table1.c2 < table2.c2
-
-This restriction applies to both inner and outer joins.
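-
-For example, the following sketch, using the placeholder names from the
-snippet above, pairs the required equality condition with a range condition
-in the same ON clause:
-
-    select *
-    from table1 join table2
-      on table1.col1 = table2.col1   -- required equality condition
-     and table1.c2 < table2.c2;      -- non-equijoin condition in the conjunction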
-
-## Subqueries
-
-You can use the following subquery operators in Drill queries. These operators
-all return Boolean results.
-
-  * ALL
-  * ANY
-  * EXISTS
-  * IN
-  * SOME
-
-In general, correlated subqueries are supported. EXISTS and NOT EXISTS
-subqueries that do not contain a correlation join are not yet supported.
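-
-For example, the following sketch uses an IN subquery against the bundled
-`employee.json` file:
-
-    select full_name, position_title
-    from cp.`employee.json`
-    where position_title in
-      (select position_title from cp.`employee.json` where salary > 50000);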
-
-## WITH Clause
-
-The WITH clause is an optional clause used to contain one or more common table
-expressions (CTE) where each CTE defines a temporary table that exists for the
-duration of the query. Each subquery in the WITH clause specifies a table
-name, an optional list of column names, and a SELECT statement.
-
-## Syntax
-
-The WITH clause supports the following syntax:
-
-    [ WITH with_subquery [, ...] ]
-    where with_subquery is:
-    with_subquery_table_name [ ( column_name [, ...] ) ] AS ( query ) 
-
-## Parameters
-
-_with_subquery_table_name_
-
-A unique name for a temporary table that defines the results of a WITH clause
-subquery. You cannot use duplicate names within a single WITH clause. You must
-give each subquery a table name that can be referenced in the FROM clause.
-
-_column_name_
-
-An optional list of output column names for the WITH clause subquery,
-separated by commas. The number of column names specified must be equal to or
-less than the number of columns defined by the subquery.
-
-_query_
-
-Any SELECT query that Drill supports. See
-[SELECT Statements]({{ site.baseurl }}/docs/select-statements).
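-
-For example, the following sketch names the two output columns of a WITH
-subquery on the bundled `employee.json` file:
-
-    with emp(name, title) as
-      (select full_name, position_title from cp.`employee.json`)
-    select name, title from emp limit 5;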
-
-## Usage Notes
-
-Use the WITH clause to efficiently define temporary tables that Drill can
-access throughout the execution of a single query. The WITH clause is
-typically a simpler alternative to using subqueries in the main body of the
-SELECT statement. In some cases, Drill can evaluate a WITH subquery once and
-reuse the results for query optimization.
-
-You can use a WITH clause in the following SQL statements:
-
-  * SELECT (including subqueries within SELECT statements)
-
-  * CREATE TABLE AS
-
-  * CREATE VIEW
-
-  * EXPLAIN
-
-You can reference the temporary tables in the FROM clause of the query. If the
-FROM clause does not reference any tables defined by the WITH clause, Drill
-ignores the WITH clause and executes the query as normal.
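-
-For example, in the following sketch the temporary table `unused` is defined
-but never referenced, so Drill simply executes the main SELECT:
-
-    with unused as (select * from cp.`employee.json`)
-    select full_name from cp.`employee.json` limit 1;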
-
-Drill can only reference a table defined by a WITH clause subquery in the
-scope of the SELECT query that the WITH clause begins. For example, you can
-reference such a table in the FROM clause of a subquery in the SELECT list,
-WHERE clause, or HAVING clause. You cannot use a WITH clause in a subquery and
-reference its table in the FROM clause of the main query or another subquery.
-
-You cannot specify another WITH clause inside a WITH clause subquery.
-
-For example, the following hypothetical query includes a forward reference
-to table t2 in the definition of table t1:
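-
-    with t1 as (select * from t2),
-         t2 as (select * from cp.`employee.json`)
-    select * from t1;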
-
-## Example
-
-The following example shows the WITH clause used to create a WITH query named
-`emp_data` that selects all of the rows from the `employee.json` file. The
-main query selects the `full_name`, `position_title`, `salary`, and
-`hire_date` columns from the `emp_data` temporary table (created from the
-WITH subquery) and orders the results by the hire date. The `emp_data` table
-exists only for the duration of the query.
-
-**Note:** The `employee.json` file is included with the Drill installation. It is located in the `cp.default` workspace, which is configured by default.
-
-    0: jdbc:drill:zk=local> with emp_data as (select * from cp.`employee.json`) select full_name, position_title, salary, hire_date from emp_data order by hire_date limit 10;
-    +------------------+-------------------------+------------+-----------------------+
-    | full_name        | position_title          |   salary   | hire_date             |
-    +------------------+-------------------------+------------+-----------------------+
-    | Bunny McCown     | Store Assistant Manager | 8000.0     | 1993-05-01 00:00:00.0 |
-    | Danielle Johnson | Store Assistant Manager | 8000.0     | 1993-05-01 00:00:00.0 |
-    | Dick Brummer     | Store Assistant Manager | 7900.0     | 1993-05-01 00:00:00.0 |
-    | Gregory Whiting  | Store Assistant Manager | 10000.0    | 1993-05-01 00:00:00.0 |
-    | Juanita Sharp    | HQ Human Resources      | 6700.0     | 1994-01-01 00:00:00.0 |
-    | Sheri Nowmer     | President               | 80000.0    | 1994-12-01 00:00:00.0 |
-    | Rebecca Kanagaki | VP Human Resources      | 15000.0    | 1994-12-01 00:00:00.0 |
-    | Shauna Wyro      | Store Manager           | 15000.0    | 1994-12-01 00:00:00.0 |
-    | Roberta Damstra  | VP Information Systems  | 25000.0    | 1994-12-01 00:00:00.0 |
-    | Pedro Castillo   | VP Country Manager      | 35000.0    | 1994-12-01 00:00:00.0 |
-    +------------------+-------------------------+------------+-----------------------+
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/090-show-databases-and-show-schemas.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/090-show-databases-and-show-schemas.md b/_docs/sql-reference/sql-commands-summary/090-show-databases-and-show-schemas.md
deleted file mode 100644
index 924631f..0000000
--- a/_docs/sql-reference/sql-commands-summary/090-show-databases-and-show-schemas.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: "SHOW DATABASES AND SHOW SCHEMAS Command"
-parent: "SQL Commands Summary"
----
-The SHOW DATABASES and SHOW SCHEMAS commands generate a list of available Drill schemas that you can query.
-
-## Syntax
-
-The SHOW DATABASES and SHOW SCHEMAS commands support the following syntax:
-
-    SHOW DATABASES;
-    SHOW SCHEMAS;
-
-{% include startnote.html %}These commands generate the same results.{% include endnote.html %}
-
-## Usage Notes
-
-You may want to run the SHOW DATABASES or SHOW SCHEMAS command to see a list of the configured storage plugins and workspaces in Drill before you issue the USE command to switch to a particular schema for your queries.
-
-In Drill, a database or schema is a configured storage plugin instance, optionally combined with a configured workspace. For example, in `dfs.donuts`, `dfs` is the file system configured as a storage plugin instance, and `donuts` is a configured workspace.
-
-You can configure and use multiple storage plugins and workspaces in Drill.  See [Storage Plugin Registration]({{ site.baseurl }}/docs/storage-plugin-registration) and [Workspaces]({{ site.baseurl }}/docs/workspaces).
-
-## Example
-
-The following example uses the SHOW DATABASES and SHOW SCHEMAS commands to generate a list of the available schemas in Drill. Some of the results that display are common to all Drill installations, such as `cp.default` and `dfs.default`, while others vary based on your specific storage plugin and workspace configurations.
-
-	0: jdbc:drill:zk=local> show databases;
-	+--------------------+
-	| SCHEMA_NAME        |
-	+--------------------+
-	| dfs.default        |
-	| dfs.root           |
-	| dfs.donuts         |
-	| dfs.tmp            |
-	| dfs.customers      |
-	| dfs.yelp           |
-	| cp.default         |
-	| sys                |
-	| INFORMATION_SCHEMA |
-	+--------------------+
-	9 rows selected (0.07 seconds)
-	 
-	 
-	0: jdbc:drill:zk=local> show schemas;
-	+--------------------+
-	| SCHEMA_NAME        |
-	+--------------------+
-	| dfs.default        |
-	| dfs.root           |
-	| dfs.donuts         |
-	| dfs.tmp            |
-	| dfs.customers      |
-	| dfs.yelp           |
-	| cp.default         |
-	| sys                |
-	| INFORMATION_SCHEMA |
-	+--------------------+
-	9 rows selected (0.058 seconds)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/100-show-files.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/100-show-files.md b/_docs/sql-reference/sql-commands-summary/100-show-files.md
deleted file mode 100644
index 1fcf395..0000000
--- a/_docs/sql-reference/sql-commands-summary/100-show-files.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-title: "SHOW FILES Command"
-parent: "SQL Commands Summary"
----
-The SHOW FILES command provides a quick report of the file systems that are
-visible to Drill for query purposes. This command is unique to Apache Drill.
-
-## Syntax
-
-The SHOW FILES command supports the following syntax:
-
-    SHOW FILES [ FROM filesystem.directory_name | IN filesystem.directory_name ];
-
-FROM and IN are synonyms. The FROM or IN clause is required unless you first
-set a default file system with the USE command.
-
-The directory name is optional. (If the directory name is a Drill reserved
-word, you must enclose the name in backticks.)
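-
-For example, the following sketches show the FROM synonym and backticks
-around a directory whose name is a reserved word (the directory names are
-illustrative):
-
-    show files from dfs.tmp;
-    show files in dfs.`default`;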
-
-The command returns standard Linux `stat` information for each file or
-directory, such as permissions, owner, and group values. This information is
-not specific to Drill.
-
-## Examples
-
-The following example returns information about directories and files in the
-local (`dfs`) file system.
-
-	0: jdbc:drill:> use dfs;
-	 
-	+------+---------------------------------+
-	| ok   | summary                         |
-	+------+---------------------------------+
-	| true | Default schema changed to 'dfs' |
-	+------+---------------------------------+
-	1 row selected (0.318 seconds)
-	 
-	0: jdbc:drill:> show files;
-	+------------+-------------+------------+------------+------------+------------+-------------+-----------------------+-------------------------+
-	|    name    | isDirectory |   isFile   |   length   |   owner    |   group    | permissions |      accessTime       |    modificationTime     |
-	+------------+-------------+------------+------------+------------+------------+-------------+-----------------------+-------------------------+
-	| user       | true        | false      | 1          | mapr       | mapr       | rwxr-xr-x   | 2014-07-30 21:37:06.0 | 2014-07-31 22:15:53.193 |
-	| backup.tgz | false       | true       | 36272      | root       | root       | rw-r--r--   | 2014-07-31 22:09:13.0 | 2014-07-31 22:09:13.211 |
-	| JSON       | true        | false      | 1          | root       | root       | rwxr-xr-x   | 2014-07-31 15:22:42.0 | 2014-08-04 15:43:07.083 |
-	| scripts    | true        | false      | 3          | root       | root       | rwxr-xr-x   | 2014-07-31 22:10:51.0 | 2014-08-04 18:23:09.236 |
-	| temp       | true        | false      | 2          | root       | root       | rwxr-xr-x   | 2014-08-01 20:07:37.0 | 2014-08-01 20:09:42.595 |
-	| hbase      | true        | false      | 10         | mapr       | mapr       | rwxr-xr-x   | 2014-07-30 21:36:08.0 | 2014-08-04 18:31:13.778 |
-	| tables     | true        | false      | 0          | root       | root       | rwxrwxrwx   | 2014-07-31 22:14:35.0 | 2014-08-04 15:42:43.415 |
-	| CSV        | true        | false      | 4          | root       | root       | rwxrwxrwx   | 2014-07-31 17:34:53.0 | 2014-08-04
-	...
-
-The following example shows the files in a specific directory in the `dfs`
-file system:
-
-	0: jdbc:drill:> show files in dfs.CSV;
-	 
-	+--------------------+-------------+------------+------------+------------+------------+-------------+-----------------------+-------------------------+
-	|        name        | isDirectory |   isFile   |   length   |   owner    |   group    | permissions |      accessTime       |    modificationTime     |
-	+--------------------+-------------+------------+------------+------------+------------+-------------+-----------------------+-------------------------+
-	| customers.csv      | false       | true       | 62011      | root       | root       | rw-r--r--   | 2014-08-04 18:30:39.0 | 2014-08-04 18:30:39.314 |
-	| products.csv.small | false       | true       | 34972      | root       | root       | rw-r--r--   | 2014-07-31 23:58:42.0 | 2014-07-31 23:59:16.849 |
-	| products.csv       | false       | true       | 34972      | root       | root       | rw-r--r--   | 2014-08-01 06:39:34.0 | 2014-08-04 15:58:09.325 |
-	| products.csv.bad   | false       | true       | 62307      | root       | root       | rw-r--r--   | 2014-08-04 15:58:02.0 | 2014-08-04 15:58:02.612 |
-	+--------------------+-------------+------------+------------+------------+------------+-------------+-----------------------+-------------------------+
-	4 rows selected (0.165 seconds)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/110-show-tables-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/110-show-tables-command.md b/_docs/sql-reference/sql-commands-summary/110-show-tables-command.md
deleted file mode 100644
index c55975f..0000000
--- a/_docs/sql-reference/sql-commands-summary/110-show-tables-command.md
+++ /dev/null
@@ -1,136 +0,0 @@
----
-title: "SHOW TABLES Command"
-parent: "SQL Commands Summary"
----
-The SHOW TABLES command returns a list of views created within a schema. It
-also returns the tables that exist in Hive, HBase, and MapR-DB when you have
-these data sources configured as storage plugin instances. See [Storage Plugin
-Registration]({{ site.baseurl }}/docs/storage-plugin-registration).
-
-## Syntax
-
-The SHOW TABLES command supports the following syntax:
-
-    SHOW TABLES;
-
-## Usage Notes
-
-First issue the USE command to identify the schema for which you want to view
-tables or views. For example, the following USE statement tells Drill that you
-only want information from the `dfs.myviews` schema:
-
-    USE dfs.myviews;
-
-In this example, `myviews` is a workspace created within an instance of the
-`dfs` storage plugin.
-
-When you use a particular schema and then issue the SHOW TABLES command, Drill
-returns the tables and views within that schema.
-
-#### Limitations
-
-  * You can create and query tables within the file system; however, Drill does not return these tables when you issue the SHOW TABLES command. You can issue the [SHOW FILES]({{ site.baseurl }}/docs/show-files-command) command to see a list of all files, tables, and views, including those created in Drill.
-
-  * You cannot create Hive, HBase, or MapR-DB tables in Drill. 
-
-## Examples
-
-The following examples demonstrate how to issue the SHOW TABLES command
-against the file system, Hive, and HBase.
-
-Complete the following steps to see views that exist in a file system and
-tables that exist in Hive and HBase data sources:
-
-  1. Issue the SHOW SCHEMAS command to see a list of available schemas.
-
-        0: jdbc:drill:zk=drilldemo:5181> show schemas;
-        +--------------------+
-        | SCHEMA_NAME        |
-        +--------------------+
-        | hive.default       |
-        | dfs.reviews        |
-        | dfs.flatten        |
-        | dfs.default        |
-        | dfs.root           |
-        | dfs.logs           |
-        | dfs.myviews        |
-        | dfs.clicks         |
-        | dfs.tmp            |
-        | sys                |
-        | hbase              |
-        | INFORMATION_SCHEMA |
-        | s3.twitter         |
-        | s3.reviews         |
-        | s3.default         |
-        +--------------------+
-        15 rows selected (0.072 seconds)
-
-  2. Issue the USE command to switch to a particular schema. When you use a particular schema, Drill searches or queries within that schema only. 
-
-        0: jdbc:drill:zk=drilldemo:5181> use dfs.myviews;
-        +------+------------------------------------------+
-        | ok   | summary                                  |
-        +------+------------------------------------------+
-        | true | Default schema changed to 'dfs.myviews'  |
-        +------+------------------------------------------+
-        1 row selected (0.025 seconds)
-
-  3. Issue the SHOW TABLES command to see the views or tables that exist within the workspace.
-
-        0: jdbc:drill:zk=drilldemo:5181> show tables;
-        +--------------+-----------------+
-        | TABLE_SCHEMA | TABLE_NAME      |
-        +--------------+-----------------+
-        | dfs.myviews  | logs_vw         |
-        | dfs.myviews  | customers_vw    |
-        | dfs.myviews  | s3_review_vw    |
-        | dfs.myviews  | clicks_vw       |
-        | dfs.myviews  | nestedclickview |
-        | dfs.myviews  | s3_user_vw      |
-        | dfs.myviews  | s3_bus_vw       |
-        +--------------+-----------------+
-        7 rows selected (0.499 seconds)
-        0: jdbc:drill:zk=drilldemo:5181>
-
-  4. Switch to the Hive schema and issue the SHOW TABLES command to see the Hive tables that exist.
-
-        0: jdbc:drill:zk=drilldemo:5181> use hive;
-        +------+----------------------------------+
-        | ok   | summary                          |
-        +------+----------------------------------+
-        | true | Default schema changed to 'hive' |
-        +------+----------------------------------+
-        1 row selected (0.043 seconds)
-         
-        0: jdbc:drill:zk=drilldemo:5181> show tables;
-        +--------------+------------+
-        | TABLE_SCHEMA | TABLE_NAME |
-        +--------------+------------+
-        | hive.default | orders     |
-        | hive.default | products   |
-        +--------------+------------+
-        2 rows selected (0.552 seconds)
-
-  5. Switch to the HBase schema and issue the SHOW TABLES command to see the HBase tables that exist within the schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> use hbase;
-        +------+-----------------------------------+
-        | ok   | summary                           |
-        +------+-----------------------------------+
-        | true | Default schema changed to 'hbase' |
-        +------+-----------------------------------+
-        1 row selected (0.043 seconds)
-         
-         
-        0: jdbc:drill:zk=drilldemo:5181> show tables;
-        +--------------+------------+
-        | TABLE_SCHEMA | TABLE_NAME |
-        +--------------+------------+
-        | hbase        | customers  |
-        +--------------+------------+
-        1 row selected (0.412 seconds)
-