Posted to commits@drill.apache.org by br...@apache.org on 2016/08/30 22:29:16 UTC

[01/17] drill git commit: Updates to docs for 1.8 - DESCRIBE SCHEMA, DROP TABLE|VIEW IF EXISTS,

Repository: drill
Updated Branches:
  refs/heads/gh-pages 0602842d4 -> a0d5528eb


Updates to docs for 1.8 - DESCRIBE SCHEMA, DROP TABLE|VIEW IF EXISTS,


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/40029390
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/40029390
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/40029390

Branch: refs/heads/gh-pages
Commit: 400293902c8439ee9377258ad7e1e71b7a040092
Parents: 0602842
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Wed Aug 3 17:23:07 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Wed Aug 3 17:23:07 2016 -0700

----------------------------------------------------------------------
 .../020-hive-to-drill-data-type-mapping.md      |  6 +-
 _docs/query-data/050-querying-hive.md           |  6 +-
 _docs/sql-reference/080-reserved-keywords.md    |  4 +-
 .../sql-reference/sql-commands/053-describe.md  | 81 ++++++++++++++++----
 .../sql-commands/055-drop-table.md              | 65 +++++++++++++---
 .../sql-reference/sql-commands/056-drop-view.md | 47 ++++++++++--
 6 files changed, 173 insertions(+), 36 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/40029390/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md b/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
index fe196f6..7d0e433 100644
--- a/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
+++ b/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
@@ -1,6 +1,6 @@
 ---
 title: "Hive-to-Drill Data Type Mapping"
-date: 2016-06-29 01:29:05 UTC
+date: 2016-08-04 00:23:08 UTC
 parent: "Data Sources and File Formats"
 ---
 Using Drill you can read tables created in Hive that use data types compatible with Drill. Drill currently does not support writing Hive tables. The map of SQL types and Hive types shows that several Hive types need to be cast to the supported SQL type in a Drill query:
@@ -86,7 +86,9 @@ You check that Hive mapped the data from the CSV to the typed values as expec
     8223372036854775807	true	3.5	-1231.4	3.14	42	"SomeText"	2015-03-25   2015-03-25 01:23:15
     Time taken: 0.524 seconds, Fetched: 1 row(s)
 
-### Connect Drill to Hive and Query the Data
+### Connect Drill to Hive and Query the Data  
+
+{% include startnote.html %}Drill 1.8 implements the IF EXISTS parameter for the DROP TABLE and DROP VIEW commands, making IF a reserved word in Drill. As a result, you must include backticks around the Hive \``IF`` conditional function when you use it in a query on Hive tables. Alternatively, you can use the CASE statement instead of the IF function.{% include endnote.html %}
 
 In Drill, you use the [Hive storage plugin]({{site.baseurl}}/docs/hive-storage-plugin). Using the Hive storage plugin connects Drill to the Hive metastore containing the data.
 	

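The note added above can be sketched with a pair of hypothetical queries (the `orders` table and `amount` column are illustrative, not from the docs): because IF becomes a reserved word in Drill 1.8, the Hive conditional function must be escaped with backticks, or the query rewritten with CASE:

    0: jdbc:drill:zk=local> SELECT `IF`(amount > 100, 'high', 'low') FROM hive.orders;

    0: jdbc:drill:zk=local> SELECT CASE WHEN amount > 100 THEN 'high' ELSE 'low' END FROM hive.orders;

The CASE form needs no escaping because CASE is standard SQL in both Hive and Drill.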
http://git-wip-us.apache.org/repos/asf/drill/blob/40029390/_docs/query-data/050-querying-hive.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/050-querying-hive.md b/_docs/query-data/050-querying-hive.md
index 422c9ac..46481e0 100644
--- a/_docs/query-data/050-querying-hive.md
+++ b/_docs/query-data/050-querying-hive.md
@@ -1,11 +1,13 @@
 ---
 title: "Querying Hive"
-date:  
+date: 2016-08-04 00:23:09 UTC
 parent: "Query Data"
 ---
 This is a simple exercise that provides steps for creating a Hive table and
 inserting data that you can query using Drill. Before you perform the steps,
-download the [customers.csv](http://doc.mapr.com/download/attachments/28868943/customers.csv?version=1&modificationDate=1426874930765&api=v2) file.
+download the [customers.csv](http://doc.mapr.com/download/attachments/28868943/customers.csv?version=1&modificationDate=1426874930765&api=v2) file.  
+
+{% include startnote.html %}Drill 1.8 implements the IF EXISTS parameter for the DROP TABLE and DROP VIEW commands, making IF a reserved word in Drill. As a result, you must include backticks around the Hive \``IF`` conditional function when you use it in a query on Hive tables. Alternatively, you can use the CASE statement instead of the IF function.{% include endnote.html %}
 
 To create a Hive table and query it with Drill, complete the following steps:
 

http://git-wip-us.apache.org/repos/asf/drill/blob/40029390/_docs/sql-reference/080-reserved-keywords.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/080-reserved-keywords.md b/_docs/sql-reference/080-reserved-keywords.md
index 7e14f3a..e631adf 100644
--- a/_docs/sql-reference/080-reserved-keywords.md
+++ b/_docs/sql-reference/080-reserved-keywords.md
@@ -1,6 +1,6 @@
 ---
 title: "Reserved Keywords"
-date:  
+date: 2016-08-04 00:23:09 UTC
 parent: "SQL Reference"
 ---
 When you use a reserved keyword in a Drill query, enclose the word in
@@ -13,5 +13,5 @@ keyword:
 The following table provides the Drill reserved keywords that require back
 ticks:
 
-<table ><tbody><tr><td valign="top" ><h1 id="ReservedKeywords-A">A</h1><p>ABS<br />ALL<br />ALLOCATE<br />ALLOW<br />ALTER<br />AND<br />ANY<br />ARE<br />ARRAY<br />AS<br />ASENSITIVE<br />ASYMMETRIC<br />AT<br />ATOMIC<br />AUTHORIZATION<br />AVG</p><h1 id="ReservedKeywords-B">B</h1><p>BEGIN<br />BETWEEN<br />BIGINT<br />BINARY<br />BIT<br />BLOB<br />BOOLEAN<br />BOTH<br />BY</p><h1 id="ReservedKeywords-C">C</h1><p>CALL<br />CALLED<br />CARDINALITY<br />CASCADED<br />CASE<br />CAST<br />CEIL<br />CEILING<br />CHAR<br />CHARACTER<br />CHARACTER_LENGTH<br />CHAR_LENGTH<br />CHECK<br />CLOB<br />CLOSE<br />COALESCE<br />COLLATE<br />COLLECT<br />COLUMN<br />COMMIT<br />CONDITION<br />CONNECT<br />CONSTRAINT<br />CONVERT<br />CORR<br />CORRESPONDING<br />COUNT<br />COVAR_POP<br />COVAR_SAMP<br />CREATE<br />CROSS<br />CUBE<br />CUME_DIST<br />CURRENT<br />CURRENT_CATALOG<br />CURRENT_DATE<br />CURRENT_DEFAULT_TRANSFORM_GROUP<br />CURRENT_PATH<br />CURRENT_ROLE<br />CURRENT_SCHEMA<br 
 />CURRENT_TIME<br />CURRENT_TIMESTAMP<br />CURRENT_TRANSFORM_GROUP_FOR_TYPE<br />CURRENT_USER<br />CURSOR<br />CYCLE</p></td><td valign="top" ><h1 id="ReservedKeywords-D">D</h1><p>DATABASES<br />DATE<br />DAY<br />DEALLOCATE<br />DEC<br />DECIMAL<br />DECLARE<br />DEFAULT<br />DEFAULT_KW<br />DELETE<br />DENSE_RANK<br />DEREF<br />DESCRIBE<br />DETERMINISTIC<br />DISALLOW<br />DISCONNECT<br />DISTINCT<br />DOUBLE<br />DROP<br />DYNAMIC</p><h1 id="ReservedKeywords-E">E</h1><p>EACH<br />ELEMENT<br />ELSE<br />END<br />END_EXEC<br />ESCAPE<br />EVERY<br />EXCEPT<br />EXEC<br />EXECUTE<br />EXISTS<br />EXP<br />EXPLAIN<br />EXTERNAL<br />EXTRACT</p><h1 id="ReservedKeywords-F">F</h1><p>FALSE<br />FETCH<br />FILES<br />FILTER<br />FIRST_VALUE<br />FLOAT<br />FLOOR<br />FOR<br />FOREIGN<br />FREE<br />FROM<br />FULL<br />FUNCTION<br />FUSION</p><h1 id="ReservedKeywords-G">G</h1><p>GET<br />GLOBAL<br />GRANT<br />GROUP<br />GROUPING</p><h1 id="ReservedKeywords-H">H</h1><p>HAVING<br />HOLD<b
 r />HOUR</p></td><td valign="top" ><h1 id="ReservedKeywords-I">I</h1><p>IDENTITY<br />IMPORT<br />IN<br />INDICATOR<br />INNER<br />INOUT<br />INSENSITIVE<br />INSERT<br />INT<br />INTEGER<br />INTERSECT<br />INTERSECTION<br />INTERVAL<br />INTO<br />IS</p><h1 id="ReservedKeywords-J">J</h1><p>JOIN</p><h1 id="ReservedKeywords-L">L</h1><p>LANGUAGE<br />LARGE<br />LAST_VALUE<br />LATERAL<br />LEADING<br />LEFT<br />LIKE<br />LIMIT<br />LN<br />LOCAL<br />LOCALTIME<br />LOCALTIMESTAMP<br />LOWER</p><h1 id="ReservedKeywords-M">M</h1><p>MATCH<br />MAX<br />MEMBER<br />MERGE<br />METHOD<br />MIN<br />MINUTE<br />MOD<br />MODIFIES<br />MODULE<br />MONTH<br />MULTISET</p><h1 id="ReservedKeywords-N">N</h1><p>NATIONAL<br />NATURAL<br />NCHAR<br />NCLOB<br />NEW<br />NO<br />NONE<br />NORMALIZE<br />NOT<br />NULL<br />NULLIF<br />NUMERIC</p><h1 id="ReservedKeywords-O">O</h1><p>OCTET_LENGTH<br />OF<br />OFFSET<br />OLD<br />ON<br />ONLY<br />OPEN<br />OR<br />ORDER<br />OUT<br />OUTER<br />OVER<
 br />OVERLAPS<br />OVERLAY</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-P">P</h1><p>PARAMETER<br />PARTITION<br />PERCENTILE_CONT<br />PERCENTILE_DISC<br />PERCENT_RANK<br />POSITION<br />POWER<br />PRECISION<br />PREPARE<br />PRIMARY<br />PROCEDURE</p><h1 id="ReservedKeywords-R">R</h1><p>RANGE<br />RANK<br />READS<br />REAL<br />RECURSIVE<br />REF<br />REFERENCES<br />REFERENCING<br />REGR_AVGX<br />REGR_AVGY<br />REGR_COUNT<br />REGR_INTERCEPT<br />REGR_R2<br />REGR_SLOPE<br />REGR_SXX<br />REGR_SXY<br />RELEASE<br />REPLACE<br />RESULT<br />RETURN<br />RETURNS<br />REVOKE<br />RIGHT<br />ROLLBACK<br />ROLLUP<br />ROW<br />ROWS<br />ROW_NUMBER</p><h1 id="ReservedKeywords-S">S</h1><p>SAVEPOINT<br />SCHEMAS<br />SCOPE<br />SCROLL<br />SEARCH<br />SECOND<br />SELECT<br />SENSITIVE<br />SESSION_USER<br />SET<br />SHOW<br />SIMILAR<br />SMALLINT<br />SOME<br />SPECIFIC<br />SPECIFICTYPE<br />SQL<br />SQLEXCEPTION<br />SQLSTATE<br />SQLWARNING<br />SQRT<br />START<br /
 >STATIC<br />STDDEV_POP<br />STDDEV_SAMP<br />SUBMULTISET<br />SUBSTRING<br />SUM<br />SYMMETRIC<br />SYSTEM<br />SYSTEM_USER</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-T">T</h1><p>TABLE<br />TABLES<br />TABLESAMPLE<br />THEN<br />TIME<br />TIMESTAMP<br />TIMEZONE_HOUR<br />TIMEZONE_MINUTE<br />TINYINT<br />TO<br />TRAILING<br />TRANSLATE<br />TRANSLATION<br />TREAT<br />TRIGGER<br />TRIM<br />TRUE</p><h1 id="ReservedKeywords-U">U</h1><p>UESCAPE<br />UNION<br />UNIQUE<br />UNKNOWN<br />UNNEST<br />UPDATE<br />UPPER<br />USE<br />USER<br />USING</p><h1 id="ReservedKeywords-V">V</h1><p>VALUE<br />VALUES<br />VARBINARY<br />VARCHAR<br />VARYING<br />VAR_POP<br />VAR_SAMP</p><h1 id="ReservedKeywords-W">W</h1><p>WHEN<br />WHENEVER<br />WHERE<br />WIDTH_BUCKET<br />WINDOW<br />WITH<br />WITHIN<br />WITHOUT</p><h1 id="ReservedKeywords-Y">Y</h1><p>YEAR</p></td></tr></tbody></table></div>
+<table ><tbody><tr><td valign="top" ><h1 id="ReservedKeywords-A">A</h1><p>ABS<br />ALL<br />ALLOCATE<br />ALLOW<br />ALTER<br />AND<br />ANY<br />ARE<br />ARRAY<br />AS<br />ASENSITIVE<br />ASYMMETRIC<br />AT<br />ATOMIC<br />AUTHORIZATION<br />AVG</p><h1 id="ReservedKeywords-B">B</h1><p>BEGIN<br />BETWEEN<br />BIGINT<br />BINARY<br />BIT<br />BLOB<br />BOOLEAN<br />BOTH<br />BY</p><h1 id="ReservedKeywords-C">C</h1><p>CALL<br />CALLED<br />CARDINALITY<br />CASCADED<br />CASE<br />CAST<br />CEIL<br />CEILING<br />CHAR<br />CHARACTER<br />CHARACTER_LENGTH<br />CHAR_LENGTH<br />CHECK<br />CLOB<br />CLOSE<br />COALESCE<br />COLLATE<br />COLLECT<br />COLUMN<br />COMMIT<br />CONDITION<br />CONNECT<br />CONSTRAINT<br />CONVERT<br />CORR<br />CORRESPONDING<br />COUNT<br />COVAR_POP<br />COVAR_SAMP<br />CREATE<br />CROSS<br />CUBE<br />CUME_DIST<br />CURRENT<br />CURRENT_CATALOG<br />CURRENT_DATE<br />CURRENT_DEFAULT_TRANSFORM_GROUP<br />CURRENT_PATH<br />CURRENT_ROLE<br />CURRENT_SCHEMA<br 
 />CURRENT_TIME<br />CURRENT_TIMESTAMP<br />CURRENT_TRANSFORM_GROUP_FOR_TYPE<br />CURRENT_USER<br />CURSOR<br />CYCLE</p></td><td valign="top" ><h1 id="ReservedKeywords-D">D</h1><p>DATABASES<br />DATE<br />DAY<br />DEALLOCATE<br />DEC<br />DECIMAL<br />DECLARE<br />DEFAULT<br />DEFAULT_KW<br />DELETE<br />DENSE_RANK<br />DEREF<br />DESCRIBE<br />DETERMINISTIC<br />DISALLOW<br />DISCONNECT<br />DISTINCT<br />DOUBLE<br />DROP<br />DYNAMIC</p><h1 id="ReservedKeywords-E">E</h1><p>EACH<br />ELEMENT<br />ELSE<br />END<br />END_EXEC<br />ESCAPE<br />EVERY<br />EXCEPT<br />EXEC<br />EXECUTE<br />EXISTS<br />EXP<br />EXPLAIN<br />EXTERNAL<br />EXTRACT</p><h1 id="ReservedKeywords-F">F</h1><p>FALSE<br />FETCH<br />FILES<br />FILTER<br />FIRST_VALUE<br />FLOAT<br />FLOOR<br />FOR<br />FOREIGN<br />FREE<br />FROM<br />FULL<br />FUNCTION<br />FUSION</p><h1 id="ReservedKeywords-G">G</h1><p>GET<br />GLOBAL<br />GRANT<br />GROUP<br />GROUPING</p><h1 id="ReservedKeywords-H">H</h1><p>HAVING<br />HOLD<b
 r />HOUR</p></td><td valign="top" ><h1 id="ReservedKeywords-I">I</h1><p>IDENTITY<br />IF<br />IMPORT<br />IN<br />INDICATOR<br />INNER<br />INOUT<br />INSENSITIVE<br />INSERT<br />INT<br />INTEGER<br />INTERSECT<br />INTERSECTION<br />INTERVAL<br />INTO<br />IS</p><h1 id="ReservedKeywords-J">J</h1><p>JOIN</p><h1 id="ReservedKeywords-L">L</h1><p>LANGUAGE<br />LARGE<br />LAST_VALUE<br />LATERAL<br />LEADING<br />LEFT<br />LIKE<br />LIMIT<br />LN<br />LOCAL<br />LOCALTIME<br />LOCALTIMESTAMP<br />LOWER</p><h1 id="ReservedKeywords-M">M</h1><p>MATCH<br />MAX<br />MEMBER<br />MERGE<br />METHOD<br />MIN<br />MINUTE<br />MOD<br />MODIFIES<br />MODULE<br />MONTH<br />MULTISET</p><h1 id="ReservedKeywords-N">N</h1><p>NATIONAL<br />NATURAL<br />NCHAR<br />NCLOB<br />NEW<br />NO<br />NONE<br />NORMALIZE<br />NOT<br />NULL<br />NULLIF<br />NUMERIC</p><h1 id="ReservedKeywords-O">O</h1><p>OCTET_LENGTH<br />OF<br />OFFSET<br />OLD<br />ON<br />ONLY<br />OPEN<br />OR<br />ORDER<br />OUT<br />OUTER<br
  />OVER<br />OVERLAPS<br />OVERLAY</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-P">P</h1><p>PARAMETER<br />PARTITION<br />PERCENTILE_CONT<br />PERCENTILE_DISC<br />PERCENT_RANK<br />POSITION<br />POWER<br />PRECISION<br />PREPARE<br />PRIMARY<br />PROCEDURE</p><h1 id="ReservedKeywords-R">R</h1><p>RANGE<br />RANK<br />READS<br />REAL<br />RECURSIVE<br />REF<br />REFERENCES<br />REFERENCING<br />REGR_AVGX<br />REGR_AVGY<br />REGR_COUNT<br />REGR_INTERCEPT<br />REGR_R2<br />REGR_SLOPE<br />REGR_SXX<br />REGR_SXY<br />RELEASE<br />REPLACE<br />RESULT<br />RETURN<br />RETURNS<br />REVOKE<br />RIGHT<br />ROLLBACK<br />ROLLUP<br />ROW<br />ROWS<br />ROW_NUMBER</p><h1 id="ReservedKeywords-S">S</h1><p>SAVEPOINT<br />SCHEMAS<br />SCOPE<br />SCROLL<br />SEARCH<br />SECOND<br />SELECT<br />SENSITIVE<br />SESSION_USER<br />SET<br />SHOW<br />SIMILAR<br />SMALLINT<br />SOME<br />SPECIFIC<br />SPECIFICTYPE<br />SQL<br />SQLEXCEPTION<br />SQLSTATE<br />SQLWARNING<br />SQRT<br />ST
 ART<br />STATIC<br />STDDEV_POP<br />STDDEV_SAMP<br />SUBMULTISET<br />SUBSTRING<br />SUM<br />SYMMETRIC<br />SYSTEM<br />SYSTEM_USER</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-T">T</h1><p>TABLE<br />TABLES<br />TABLESAMPLE<br />THEN<br />TIME<br />TIMESTAMP<br />TIMEZONE_HOUR<br />TIMEZONE_MINUTE<br />TINYINT<br />TO<br />TRAILING<br />TRANSLATE<br />TRANSLATION<br />TREAT<br />TRIGGER<br />TRIM<br />TRUE</p><h1 id="ReservedKeywords-U">U</h1><p>UESCAPE<br />UNION<br />UNIQUE<br />UNKNOWN<br />UNNEST<br />UPDATE<br />UPPER<br />USE<br />USER<br />USING</p><h1 id="ReservedKeywords-V">V</h1><p>VALUE<br />VALUES<br />VARBINARY<br />VARCHAR<br />VARYING<br />VAR_POP<br />VAR_SAMP</p><h1 id="ReservedKeywords-W">W</h1><p>WHEN<br />WHENEVER<br />WHERE<br />WIDTH_BUCKET<br />WINDOW<br />WITH<br />WITHIN<br />WITHOUT</p><h1 id="ReservedKeywords-Y">Y</h1><p>YEAR</p></td></tr></tbody></table></div>
 

http://git-wip-us.apache.org/repos/asf/drill/blob/40029390/_docs/sql-reference/sql-commands/053-describe.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/053-describe.md b/_docs/sql-reference/sql-commands/053-describe.md
index 1a71f44..1bd347e 100644
--- a/_docs/sql-reference/sql-commands/053-describe.md
+++ b/_docs/sql-reference/sql-commands/053-describe.md
@@ -1,30 +1,51 @@
 ---
 title: "DESCRIBE"
-date:  
+date: 2016-08-04 00:23:09 UTC
 parent: "SQL Commands"
 ---
-The DESCRIBE command returns information about columns in a table or view.
+The DESCRIBE command returns information about columns in a table, view, or schema.
 
 ## Syntax
 
 The DESCRIBE command supports the following syntax:
 
     DESCRIBE [workspace.]table_name|view_name
+    DESCRIBE SCHEMA|DATABASE <name>[.workspace]
+
+
+##Parameters  
+*workspace*  
+The location, within a schema, where a table or view exists.  
+ 
+*view_name*  
+The unique name of a view.  
+
+*table_name*  
+The unique name of a table.  
+
+*schema/database*  
+A configured storage plugin instance with or without a configured workspace. 
+
 
 ## Usage Notes
 
-You can issue the DESCRIBE command against views created in a workspace and
-tables created in Hive and HBase. You can issue the DESCRIBE command
-on a table or view from any schema. For example, if you are working in the
-`dfs.myworkspace` schema, you can issue the DESCRIBE command on a view or
-table in another schema. Currently, DESCRIBE does not support tables created
-in a file system.
+Drill only supports SQL data types. Verify that all data types in an external data source, such as Hive or HBase, map to supported data types in Drill. See [Data Types]({{site.baseurl}}/docs/data-types/) for more information.  
 
-Drill only supports SQL data types. Verify that all data types in an external
-data source, such as Hive or HBase, map to supported data types in Drill. See
-Drill Data Type Mapping for more information.
+###DESCRIBE
+- You can issue the DESCRIBE command against views created in a workspace, tables created in Hive and HBase, or schemas.  
+- You can issue the DESCRIBE command on a table or view from any schema. For example, if you are working in the dfs.myworkspace schema, you can issue the DESCRIBE command on a view or table in another schema, such as hive or dfs.devworkspace.  
+- Currently, DESCRIBE does not support tables created in a file system.
 
-## Example
+###DESCRIBE SCHEMA  
+- You can issue the DESCRIBE SCHEMA command on any schema. However, you can only include workspaces for file schemas, such as `dfs.myworkspace`.  
+- When you issue the DESCRIBE SCHEMA command on a particular schema, Drill returns all of the schema properties. The schema properties correlate with the configuration information in the Storage tab of the Drill Web Console for that schema.  
+- When you issue DESCRIBE SCHEMA against a schema and workspace, such as `dfs.myworkspace`, Drill returns the workspace properties in addition to all of the schema properties.  
+- When you issue DESCRIBE SCHEMA against the `dfs` schema, Drill also returns the properties of the “default” workspace. Issuing DESCRIBE SCHEMA against `dfs` or `` dfs.`default` `` returns the same results. 
+
+
+## Examples
+
+###DESCRIBE  
 
 The following example demonstrates the steps that you can follow when you want
 to use the DESCRIBE command to see column information for a view and for Hive
@@ -96,5 +117,37 @@ Complete the following steps to use the DESCRIBE command:
         | agg_rev   | VARCHAR   | NO        |
         | membership  | VARCHAR | NO        |
         +-------------+------------+-------------+
-        7 rows selected (0.403 seconds)
-
+        7 rows selected (0.403 seconds)  
+ 
+6. Issue the DESCRIBE SCHEMA command on `dfs.tmp` (the `dfs` schema and `tmp` workspace configured within the `dfs` schema) from the current schema.  
+  
+        0: jdbc:drill:zk=drilldemo:5181> describe schema dfs.tmp;  
+       
+              {
+                "type" : "file",
+                "enabled" : true,
+                "connection" : "file:///",
+                "config" : null,
+                "formats" : {
+                  "psv" : {
+                    "type" : "text",
+                    "extensions" : [ "tbl" ],
+                    "delimiter" : "|"
+                  },
+                  "csv" : {
+                    "type" : "text",
+                    "extensions" : [ "csv", "bcp" ],
+                    "delimiter" : ","
+                  },
+                 ... 
+                },
+                "location" : "/tmp",
+                "writable" : true,
+                "defaultInputFormat" : null
+              }  
+
+
+
+       
+              
+       

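As the usage notes in this commit state, issuing DESCRIBE SCHEMA against `dfs` and against `` dfs.`default` `` returns the same results; a minimal sketch (output omitted; it has the same shape as the `dfs.tmp` example above):

    0: jdbc:drill:zk=local> describe schema dfs;
    0: jdbc:drill:zk=local> describe schema dfs.`default`;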
http://git-wip-us.apache.org/repos/asf/drill/blob/40029390/_docs/sql-reference/sql-commands/055-drop-table.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/055-drop-table.md b/_docs/sql-reference/sql-commands/055-drop-table.md
index 8a1ea49..6857d25 100644
--- a/_docs/sql-reference/sql-commands/055-drop-table.md
+++ b/_docs/sql-reference/sql-commands/055-drop-table.md
@@ -1,17 +1,26 @@
 ---
 title: "DROP TABLE"
-date:  
+date: 2016-08-04 00:23:10 UTC
 parent: "SQL Commands"
 ---
 
-As of Drill 1.2, you can use the DROP TABLE command to remove tables (files or directories) from a file system when the file system is configured as a DFS storage plugin. See [Storage Plugin Registration]({{ site.baseurl }}/docs/storage-plugin-registration/). Currently, you can only issue the DROP TABLE command against file system data sources.
+As of Drill 1.2, you can use the DROP TABLE command to remove tables (files or directories) from a file system when the file system is configured as a DFS storage plugin. See [Storage Plugin Registration]({{ site.baseurl }}/docs/storage-plugin-registration/). As of Drill 1.8, you can include the IF EXISTS parameter with the DROP TABLE command. Currently, you can only issue the DROP TABLE command against file system data sources.
 
 ## Syntax
 The DROP TABLE command supports the following syntax: 
 
-       DROP TABLE [workspace.]name;
+    DROP TABLE [IF EXISTS] [workspace.]name;  
 
-*name* is a unique directory or file name, optionally prefaced by a storage plugin name, such as dfs, and a workspace, such as tmp using dot notation.
+## Parameters  
+
+IF EXISTS  
+Drill does not throw an error if the table does not exist. Instead, Drill returns "`Table [name] not found.`"  
+
+*workspace*  
+The location of the table in subdirectories of a local or distributed file system.
+
+*name*  
+A unique directory or file name, optionally prefaced by a storage plugin name, such as `dfs`, and a workspace, such as `tmp`, using dot notation.  
 
 
 ## Usage Notes
@@ -200,7 +209,7 @@ Issuing the DROP TABLE command against the directory removes the directory and d
 ###Example 4: Dropping a table that does not exist
 The following example shows the result of dropping a table that does not exist because it was either already dropped or never existed. 
 
-       0: jdbc:drill:zk=local> use use dfs.tmp;
+       0: jdbc:drill:zk=local> use dfs.tmp;
        +-------+--------------------------------------+
        |  ok   |               summary                |
        +-------+--------------------------------------+
@@ -211,9 +220,47 @@ The following example shows the result of dropping a table that does not exist b
        0: jdbc:drill:zk=local> drop table name_key;
 
        Error: VALIDATION ERROR: Table [name_key] not found
-       [Error Id: fc6bfe17-d009-421c-8063-d759d7ea2f4e on 10.250.56.218:31010] (state=,code=0)
+       [Error Id: fc6bfe17-d009-421c-8063-d759d7ea2f4e on 10.250.56.218:31010] (state=,code=0)  
+
+### Example 5: Dropping a table that does not exist using the IF EXISTS parameter  
+The following example shows the result of dropping a table that does not exist (because it was already dropped or never existed) using the IF EXISTS parameter with the DROP TABLE command:  
+
+       0: jdbc:drill:zk=local> use dfs.tmp;
+       +-------+--------------------------------------+
+       |  ok   |               summary                |
+       +-------+--------------------------------------+
+       | true  | Default schema changed to 'dfs.tmp'  |
+       +-------+--------------------------------------+
+       1 row selected (0.289 seconds)  
+
+       0: jdbc:drill:zk=local> drop table if exists name_key;
+       +-------+-----------------------------+
+       |  ok   |         summary             |
+       +-------+-----------------------------+
+       | true  | Table 'name_key' not found  |
+       +-------+-----------------------------+
+       1 row selected (0.083 seconds)  
 
-###Example 5: Dropping a table without permissions 
+### Example 6: Dropping a table that exists using the IF EXISTS parameter  
+
+The following example shows the result of dropping a table that exists using the IF EXISTS parameter with the DROP TABLE command.  
+
+       0: jdbc:drill:zk=local> use dfs.tmp;
+       +-------+--------------------------------------+
+       |  ok   |               summary                |
+       +-------+--------------------------------------+
+       | true  | Default schema changed to 'dfs.tmp'  |
+       +-------+--------------------------------------+
+       1 row selected (0.289 seconds)
+       
+       0: jdbc:drill:zk=local> drop table if exists name_key;
+       +-------+---------------------------+
+       |  ok   |        summary            |
+       +-------+---------------------------+
+       | true  | Table 'name_key' dropped  |
+       +-------+---------------------------+  
+       
+###Example 7: Dropping a table without permissions 
 The following example shows the result of dropping a table without the appropriate permissions in the file system.
 
        0: jdbc:drill:zk=local> drop table name_key;
@@ -221,7 +268,7 @@ The following example shows the result of dropping a table without the appropria
        Error: PERMISSION ERROR: Unauthorized to drop table
        [Error Id: 36f6b51a-786d-4950-a4a7-44250f153c55 on 10.10.30.167:31010] (state=,code=0)  
 
-###Example 6: Dropping and querying a table concurrently  
+###Example 8: Dropping and querying a table concurrently  
 
 The result of this scenario depends on the delta in time between one user dropping a table and another user issuing a query against the table. Results can also vary. In some instances the drop may succeed and the query fails completely or the query completes partially and then the table is dropped returning an exception in the middle of the query results.
 
@@ -245,7 +292,7 @@ The following example shows the result of dropping a table and issuing a query a
        Fragment 1:0
        [Error Id: 6e3c6a8d-8cfd-4033-90c4-61230af80573 on 10.10.30.167:31010] (state=,code=0)
 
-###Example 7: Dropping a table with different file formats
+###Example 9: Dropping a table with different file formats
 The following example shows the result of dropping a table when multiple file formats exists in the directory. In this scenario, the `sales_dir` table resides in the `dfs.sales` workspace and contains Parquet, CSV, and JSON files.
 
 Running `ls` on `sales_dir` shows the different file types that exist in the directory.

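The *workspace* and *name* parameters described above combine using dot notation, so the IF EXISTS form can also be issued without first switching schemas; a sketch reusing the `name_key` table from the examples (the summary returned depends on whether the table exists):

    0: jdbc:drill:zk=local> drop table if exists dfs.tmp.name_key;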
http://git-wip-us.apache.org/repos/asf/drill/blob/40029390/_docs/sql-reference/sql-commands/056-drop-view.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/056-drop-view.md b/_docs/sql-reference/sql-commands/056-drop-view.md
index a0924aa..acf3139 100644
--- a/_docs/sql-reference/sql-commands/056-drop-view.md
+++ b/_docs/sql-reference/sql-commands/056-drop-view.md
@@ -1,16 +1,27 @@
 ---
 title: "DROP VIEW"
-date:  
+date: 2016-08-04 00:23:10 UTC
 parent: "SQL Commands"
 ---
 
-The DROP VIEW command removes a view that was created in a workspace using the CREATE VIEW command.
+The DROP VIEW command removes a view that was created in a workspace using the CREATE VIEW command. As of Drill 1.8, you can include the IF EXISTS parameter with the DROP VIEW command.
 
 ## Syntax
 
 The DROP VIEW command supports the following syntax:
 
-     DROP VIEW [workspace.]view_name;
+    DROP VIEW [IF EXISTS] [workspace.]view_name;  
+
+## Parameters  
+
+IF EXISTS  
+Drill does not throw an error if the view does not exist. Instead, Drill returns "`View [view_name] not found in schema [workspace].`"  
+
+*workspace*  
+The location of the view in subdirectories of a local or distributed file system.
+
+*view_name*  
+The unique name of a view, optionally prefaced by a storage plugin name, such as `dfs`, and a workspace, such as `tmp`, using dot notation. 
 
 ## Usage Notes
 
@@ -18,8 +29,10 @@ When you drop a view, all information about the view is deleted from the workspa
 
 ## Example
 
-This example shows you some steps to follow when you want to drop a view in Drill using the DROP VIEW command. A workspace named “donuts” was created for the steps in this example.
-Complete the following steps to drop a view in Drill:
+This example shows you some steps to follow when you want to drop a view in Drill using the DROP VIEW command. A workspace named “donuts” was created for the steps in this example.  
+
+Complete the following steps to drop a view in Drill:  
+
 Use the writable workspace from which the view was created.
 
     0: jdbc:drill:zk=local> use dfs.donuts;
@@ -44,5 +57,25 @@ Use the DROP VIEW command to remove a view created in another workspace.
     +------------+------------+
     |   ok  |  summary   |
     +------------+------------+
-    | true      | View 'yourdonuts' deleted successfully from 'dfs.tmp' schema |
-    +------------+------------+
+    | true      | View 'yourdonuts' deleted successfully from 'dfs.tmp' schema |  
+    +------------+------------+  
+
+Use the DROP VIEW command with or without the IF EXISTS parameter to remove a view created in the current workspace.  
+
+    0: jdbc:drill:zk=local> drop view if exists mydonuts;
+    +------------+--------------------------------------------------------------+
+    |     ok     | summary                                                      |
+    +------------+--------------------------------------------------------------+
+    | true       | View 'mydonuts' deleted successfully from 'dfs.donuts' schema|
+    +------------+--------------------------------------------------------------+
+
+Use the DROP VIEW command with the IF EXISTS parameter to remove a view that does not exist in the current workspace, either because it was never created or it was already removed.
+
+    0: jdbc:drill:zk=local> drop view if exists mydonuts;
+    +-------+---------------------------------------------------+
+    |  ok   |                   summary                         |
+    +-------+---------------------------------------------------+
+    | true  | View 'mydonuts' not found in schema 'dfs.donuts'  |
+    +-------+---------------------------------------------------+
+    1 row selected (0.085 seconds)
+


[13/17] drill git commit: Updates to docs for Drill 1.8

Posted by br...@apache.org.
Updates to docs for Drill 1.8


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/5465a443
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/5465a443
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/5465a443

Branch: refs/heads/gh-pages
Commit: 5465a443bc50047ac1fc49965dd31a2ab4b2ae1f
Parents: ddb7fcf
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Sat Aug 13 12:26:05 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Sat Aug 13 12:26:05 2016 -0700

----------------------------------------------------------------------
 .../010-partition-pruning-introduction.md       |  4 +-
 _docs/rn/003-1.8.0-rn.md                        | 47 +++++++++++++++-----
 2 files changed, 38 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/5465a443/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md b/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
index e5f4e5f..f64a633 100644
--- a/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
+++ b/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
@@ -1,12 +1,12 @@
 ---
 title: "Partition Pruning Introduction"
-date: 2016-08-11 19:02:20 UTC
+date: 2016-08-08 18:42:19 UTC
 parent: "Partition Pruning"
 --- 
 
 Partition pruning is a performance optimization that limits the number of files and partitions that Drill reads when querying file systems and Hive tables. When you partition data, Drill only reads a subset of the files that reside in a file system or a subset of the partitions in a Hive table when a query matches certain filter criteria.
 
-As of Drill 1.8, partition pruning also applies to the Parquet metadata cache. When data is partitioned in a directory hierarchy, Drill attempts to read the metadata cache file from a sub-partition, based on matching filter criteria instead of reading from the top level partition, to reduce the amount of metadata read during the query planning time. If you created a metadata cache file in a previous version of Drill, you must issue the REFRESH TABLE METADATA command to regenerate the metadata cache file before running queries for partition pruning to occur. See [Optimizing Parquet Metadata Reading]({{site.baseurl}}/docs/optimizing-parquet-metadata-reading/) for more information.  
+As of Drill 1.8, partition pruning also applies to the Parquet metadata cache. When data is partitioned in a directory hierarchy, Drill attempts to read the metadata cache file from a sub-partition, based on matching filter criteria instead of reading from the top level partition, to reduce the amount of metadata read during the query planning time. If you created a metadata cache file in a previous version of Drill, you must issue the REFRESH TABLE METADATA command to regenerate the metadata cache file before running queries for metadata cache pruning to occur. See [Optimizing Parquet Metadata Reading]({{site.baseurl}}/docs/optimizing-parquet-metadata-reading/) for more information.  
 
 The query planner in Drill performs partition pruning by evaluating the filters. If no partition filters are present, the underlying Scan operator reads all files in all directories and then sends the data to operators, such as Filter, downstream. When partition filters are present, the query planner pushes the filters down to the Scan if possible. The Scan reads only the directories that match the partition filters, thus reducing disk I/O.
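
For readers upgrading from an earlier release, the regeneration step described in the hunk above is a single command issued from sqlline; the table path here is illustrative, not from the source:

```sql
-- Regenerate the Parquet metadata cache file in the 1.8 format so that
-- metadata cache pruning can occur. The path below is a placeholder.
REFRESH TABLE METADATA dfs.`/data/partitioned_table`;
```

Run this once per table whose cache file was created by a pre-1.8 Drill; subsequent queries against that table can then prune using the regenerated cache.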
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5465a443/_docs/rn/003-1.8.0-rn.md
----------------------------------------------------------------------
diff --git a/_docs/rn/003-1.8.0-rn.md b/_docs/rn/003-1.8.0-rn.md
index 65de9a2..78db742 100644
--- a/_docs/rn/003-1.8.0-rn.md
+++ b/_docs/rn/003-1.8.0-rn.md
@@ -3,24 +3,33 @@ title: "Apache Drill 1.8.0 Release Notes"
 parent: "Release Notes"
 ---
 
-**Release date:**  August, 2016
+**Release date:**  August 15, 2016
 
 Today, we're happy to announce the availability of Drill 1.8.0. You can download it [here](https://drill.apache.org/download/).
 
-This release provides metadata cache pruning, support for the IF EXISTS parameter with the DROP TABLE and DROP VIEW commands, support for the DESCRIBE SCHEMA command, multi-byte delimiter support, new parameters for filter selectivity estimates, and the following bug fixes and improvements:  
+This release provides metadata cache pruning, support for the IF EXISTS parameter with the DROP TABLE and DROP VIEW commands, support for the DESCRIBE SCHEMA command, multi-byte delimiter support, and new parameters for filter selectivity estimates.  
+
+## Configuration and Launch Script Changes 
+This release of Drill also includes the following changes to the configuration and launch scripts: 
+
+- Default Drill settings now reside in `$DRILL_HOME/bin/drill-config.sh`. You can override many settings by creating an entry in `$DRILL_HOME/conf/drill-env.sh`. The file includes descriptions of the options that you can set.  ([DRILL-4581](https://issues.apache.org/jira/browse/DRILL-4581))  
+- Due to issues at high concurrency, the native Linux epoll transport is now disabled by default. ([DRILL-4623](https://issues.apache.org/jira/browse/DRILL-4623))  
+ 
+If you upgrade to Drill 1.8, you must merge your custom settings with the latest settings in the `drill-override.conf` and `drill-env.sh` file that ships with Drill. As of Drill 1.8, all Drill defaults reside in the Drill scripts. The `drill-env.sh` script contains only your customizations. When you merge your existing `drill-env.sh` file with the 1.8 version of the file, you can remove all of the settings in your file except for those you created yourself. Consult the original `drill-env.sh` file from the prior Drill release to determine which settings you can remove.
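+
+A minimal merged `drill-env.sh` under these rules contains only your own overrides, since all defaults now live in the launch scripts. This sketch assumes the variable names shown in the commented examples that ship in the file; the values are illustrative:
+
+```shell
+# $DRILL_HOME/conf/drill-env.sh -- customizations only; defaults now
+# reside in $DRILL_HOME/bin/drill-config.sh (DRILL-4581).
+export DRILL_HEAP="8G"                 # Drillbit JVM heap size
+export DRILL_MAX_DIRECT_MEMORY="10G"   # Drillbit direct (off-heap) memory
+```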
+
+
+
+Drill 1.8 provides the following bug fixes and improvements:  
 
-    
 <h2>        Sub-task
 </h2>
 <ul>
-<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4560'>DRILL-4560</a>] -         ZKClusterCoordinator does not call DrillbitStatusListener.drillbitRegistered for new bits
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4581'>DRILL-4581</a>] -         Various problems in the Drill startup scripts
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4728'>DRILL-4728</a>] -         Add support for new metadata fetch APIs
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4729'>DRILL-4729</a>] -         Add support for prepared statement implementation on server side
 </li>
-<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4730'>DRILL-4730</a>] -         Update JDBC DatabaseMetaData implementation to use new Metadata APIs
-</li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4732'>DRILL-4732</a>] -         Update JDBC driver to use the new prepared statement APIs on DrillClient
 </li>
 </ul>
@@ -38,12 +47,16 @@ This release provides metadata cache pruning, support for the IF EXISTS paramete
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4574'>DRILL-4574</a>] -         Avro Plugin: Flatten does not work correctly on record items
 </li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4623'>DRILL-4623</a>] -         Disable Epoll by Default
+</li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4658'>DRILL-4658</a>] -         cannot specify tab as a fieldDelimiter in table function
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4664'>DRILL-4664</a>] -         ScanBatch.isNewSchema() returns wrong result for map datatype
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4665'>DRILL-4665</a>] -         Partition pruning not working for hive partitioned table with &#39;LIKE&#39; and &#39;=&#39; filter
 </li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4704'>DRILL-4704</a>] -         select statement behavior is inconsistent for decimal values in parquet
+</li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4707'>DRILL-4707</a>] -         Conflicting columns names under case-insensitive policy lead to either memory leak or incorrect result
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4715'>DRILL-4715</a>] -         Java compilation error for a query with large number of expressions
@@ -68,6 +81,10 @@ This release provides metadata cache pruning, support for the IF EXISTS paramete
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4825'>DRILL-4825</a>] -         Wrong data with UNION ALL when querying different sub-directories under the same table
 </li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4836'>DRILL-4836</a>] -         ZK Issue during Drillbit startup, possibly due to race condition
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4846'>DRILL-4846</a>] -         Eliminate extra operations during metadata cache pruning
+</li>
 </ul>
                         
 <h2>        Improvement
@@ -83,11 +100,11 @@ This release provides metadata cache pruning, support for the IF EXISTS paramete
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4751'>DRILL-4751</a>] -         Remove dumpcat script from Drill distribution
 </li>
-<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4752'>DRILL-4752</a>] -         Remove submit_plan script from Drill distribution
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4766'>DRILL-4766</a>] -         FragmentExecutor should use EventProcessor and avoid blocking rpc threads
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4786'>DRILL-4786</a>] -         Improve metadata cache performance for queries with multiple partitions
 </li>
-<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4792'>DRILL-4792</a>] -         Include session options used for a query as part of the profile
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4822'>DRILL-4822</a>] -         Extend distrib-env.sh search to consider site directory
 </li>
 </ul>
             
@@ -98,9 +115,17 @@ This release provides metadata cache pruning, support for the IF EXISTS paramete
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4673'>DRILL-4673</a>] -         Implement &quot;DROP TABLE IF EXISTS&quot; for drill to prevent FAILED status on command return
 </li>
-<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4714'>DRILL-4714</a>] -         Add metadata and prepared statement APIs to DrillClient&lt;-&gt;Drillbit interface
-</li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4819'>DRILL-4819</a>] -         Update MapR version to 5.2.0
 </li>
 </ul>
-                                                                   
\ No newline at end of file
+                                                        
+<h2>        Task
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4499'>DRILL-4499</a>] -         Remove unused classes
+</li>
+</ul>
+                  
+
+    
+                                           
\ No newline at end of file


[07/17] drill git commit: edit

Posted by br...@apache.org.
edit


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/365747ae
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/365747ae
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/365747ae

Branch: refs/heads/gh-pages
Commit: 365747ae285c90cfa5c73c52d09350cbfafabe2c
Parents: baa4edd
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu Aug 4 15:55:54 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu Aug 4 15:55:54 2016 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md   | 154 +++++++++----------
 1 file changed, 77 insertions(+), 77 deletions(-)
----------------------------------------------------------------------



[02/17] drill git commit: Doc updates for 1.8 - add support for multi-byte delimiter support to plugin configuration basics

Posted by br...@apache.org.
Doc updates for 1.8 - add support for multi-byte delimiter support to plugin configuration basics


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/640b2c4e
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/640b2c4e
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/640b2c4e

Branch: refs/heads/gh-pages
Commit: 640b2c4e85caf91590fd8fabb5335ce52e3e3d8a
Parents: 4002939
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu Aug 4 09:47:21 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu Aug 4 09:47:21 2016 -0700

----------------------------------------------------------------------
 _docs/connect-a-data-source/035-plugin-configuration-basics.md | 6 +++---
 _docs/sql-reference/sql-commands/055-drop-table.md             | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/640b2c4e/_docs/connect-a-data-source/035-plugin-configuration-basics.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/035-plugin-configuration-basics.md b/_docs/connect-a-data-source/035-plugin-configuration-basics.md
index 3cb542e..0a9de00 100644
--- a/_docs/connect-a-data-source/035-plugin-configuration-basics.md
+++ b/_docs/connect-a-data-source/035-plugin-configuration-basics.md
@@ -1,6 +1,6 @@
 ---
 title: "Plugin Configuration Basics"
-date:  
+date: 2016-08-04 16:47:22 UTC
 parent: "Storage Plugin Configuration"
 ---
 When you add or update storage plugin configurations on one Drill node in a 
@@ -103,9 +103,9 @@ The following table describes the attributes you configure for storage plugins i
   </tr>
   <tr>
     <td>"formats" . . . "delimiter"</td>
-    <td>"\t"<br>","</td>
+    <td>"\n"<br>"\r"<br>"\t"<br>"\r\n"<br>","</td>
     <td>format-dependent</td>
-    <td>Sequence of one or more characters that serve as a record separator in a delimited text file, such as CSV. Use a 4-digit hex code syntax \uXXXX for a non-printable delimiter. </td>
+    <td>Sequence of one or more characters that signifies the end of a line of text and the start of a new line in a delimited text file, such as CSV. Drill treats \n as the standard line delimiter. As of Drill 1.8, Drill supports multi-byte delimiters, such as \r\n. Use a 4-digit hex code syntax \uXXXX for a non-printable delimiter. </td>
   </tr>
   <tr>
     <td>"formats" . . . "quote"</td>

http://git-wip-us.apache.org/repos/asf/drill/blob/640b2c4e/_docs/sql-reference/sql-commands/055-drop-table.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/055-drop-table.md b/_docs/sql-reference/sql-commands/055-drop-table.md
index 6857d25..35501b8 100644
--- a/_docs/sql-reference/sql-commands/055-drop-table.md
+++ b/_docs/sql-reference/sql-commands/055-drop-table.md
@@ -1,6 +1,6 @@
 ---
 title: "DROP TABLE"
-date: 2016-08-04 00:23:10 UTC
+date: 2016-08-04 16:47:22 UTC
 parent: "SQL Commands"
 ---
 
@@ -42,7 +42,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
 * When user impersonation is not enabled in Drill, Drill accesses the file system as the user running the Drillbit. This user is typically a super user who has permission to delete most files. In this scenario, use the DROP TABLE command with caution to avoid deleting critical files and directories.  
 
 ###Views
-* Views are independent of tables. Views that reference dropped tables become invalid. You must explicitly drop any view that references a dropped table using the [DROP VIEW command]({{ site.baseurl }}/docs/drop-view/).  
+* Views are independent of tables. If you drop a base table on which views were defined, the views become invalid, but users can still access them. You must explicitly drop any view that references a dropped table using the [DROP VIEW command]({{ site.baseurl }}/docs/drop-view/).  
 
 ###Concurrency 
 * Concurrency occurs when two processes try to access and/or change data at the same time. Currently, Drill does not have a mechanism in place, such as read locks on files, to address concurrency issues. For example, if one user runs a query that references a table that another user simultaneously issues the DROP TABLE command against, there is no mechanism in place to prevent a collision of the two processes. In such a scenario, Drill may return partial query results or a system error to the user running the query when the table is dropped. 


[14/17] drill git commit: Edits to Drill 1.8 doc updates

Posted by br...@apache.org.
Edits to Drill 1.8 doc updates


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/e14071ff
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/e14071ff
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/e14071ff

Branch: refs/heads/gh-pages
Commit: e14071ff113eccd8aaad0286b691286e58ab64c4
Parents: 5465a44
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Mon Aug 15 11:40:26 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Mon Aug 15 11:40:26 2016 -0700

----------------------------------------------------------------------
 .../010-partition-pruning-introduction.md                |  2 +-
 _docs/rn/003-1.8.0-rn.md                                 | 11 +++++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/e14071ff/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md b/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
index f64a633..5a157fd 100644
--- a/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
+++ b/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
@@ -1,6 +1,6 @@
 ---
 title: "Partition Pruning Introduction"
-date: 2016-08-08 18:42:19 UTC
+date: 2016-08-15 18:40:27 UTC
 parent: "Partition Pruning"
 --- 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e14071ff/_docs/rn/003-1.8.0-rn.md
----------------------------------------------------------------------
diff --git a/_docs/rn/003-1.8.0-rn.md b/_docs/rn/003-1.8.0-rn.md
index 78db742..0d4e1c7 100644
--- a/_docs/rn/003-1.8.0-rn.md
+++ b/_docs/rn/003-1.8.0-rn.md
@@ -7,7 +7,14 @@ parent: "Release Notes"
 
 Today, we're happy to announce the availability of Drill 1.8.0. You can download it [here](https://drill.apache.org/download/).
 
-This release provides metadata cache pruning, support for the IF EXISTS parameter with the DROP TABLE and DROP VIEW commands, support for the DESCRIBE SCHEMA command, multi-byte delimiter support, and new parameters for filter selectivity estimates.  
+## New Features
+This release of Drill provides the following new features: 
+
+- Metadata cache pruning
+- IF EXISTS parameter with the DROP TABLE and DROP VIEW commands
+- DESCRIBE SCHEMA command
+- Multi-byte delimiter support
+- New parameters for filter selectivity estimates  
 
 ## Configuration and Launch Script Changes 
 This release of Drill also includes the following changes to the configuration and launch scripts: 
@@ -19,7 +26,7 @@ If you upgrade to Drill 1.8, you must merge your custom settings with the latest
 
 
 
-Drill 1.8 provides the following bug fixes and improvements:  
+The following sections list additional bug fixes and improvements:
 
 <h2>        Sub-task
 </h2>


[11/17] drill git commit: update to partition pruning intro to include refresh command for metadata cache file

Posted by br...@apache.org.
update to partition pruning intro to include refresh command for metadata cache file


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/2bc38da0
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/2bc38da0
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/2bc38da0

Branch: refs/heads/gh-pages
Commit: 2bc38da0e9ff9159b2337f3285aaaae05e5979aa
Parents: 21c41f5
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu Aug 11 12:02:19 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu Aug 11 12:02:19 2016 -0700

----------------------------------------------------------------------
 .../partition-pruning/010-partition-pruning-introduction.md     | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/2bc38da0/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md b/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
index 315b062..e5f4e5f 100644
--- a/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
+++ b/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
@@ -1,13 +1,12 @@
 ---
 title: "Partition Pruning Introduction"
-date: 2016-08-08 18:42:19 UTC
+date: 2016-08-11 19:02:20 UTC
 parent: "Partition Pruning"
 --- 
 
 Partition pruning is a performance optimization that limits the number of files and partitions that Drill reads when querying file systems and Hive tables. When you partition data, Drill only reads a subset of the files that reside in a file system or a subset of the partitions in a Hive table when a query matches certain filter criteria.
 
-As of Drill 1.8, partition pruning also applies to the parquet metadata cache. See [Optimizing Parquet Metadata Reading]({{site.baseurl}}/docs/optimizing-parquet-metadata-reading/) to see how to create a parquet metadata cache. When data is partitioned in a directory hierarchy, Drill attempts to read the metadata cache file from a sub-partition, based on matching filter criteria instead of reading from the top level partition, to reduce the amount of metadata read during the query planning time. 
-
+As of Drill 1.8, partition pruning also applies to the Parquet metadata cache. When data is partitioned in a directory hierarchy, Drill attempts to read the metadata cache file from a sub-partition, based on matching filter criteria instead of reading from the top level partition, to reduce the amount of metadata read during the query planning time. If you created a metadata cache file in a previous version of Drill, you must issue the REFRESH TABLE METADATA command to regenerate the metadata cache file before running queries for partition pruning to occur. See [Optimizing Parquet Metadata Reading]({{site.baseurl}}/docs/optimizing-parquet-metadata-reading/) for more information.  
 
 The query planner in Drill performs partition pruning by evaluating the filters. If no partition filters are present, the underlying Scan operator reads all files in all directories and then sends the data to operators, such as Filter, downstream. When partition filters are present, the query planner pushes the filters down to the Scan if possible. The Scan reads only the directories that match the partition filters, thus reducing disk I/O.
 


[12/17] drill git commit: edit config option intro to include note about change to default drill setting location - /bin/drill-config.sh


Posted by br...@apache.org.
edit config option intro to include note about change to default drill setting location - /bin/drill-config.sh


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/ddb7fcf0
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/ddb7fcf0
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/ddb7fcf0

Branch: refs/heads/gh-pages
Commit: ddb7fcf0610740479f4c9203b2db98b35cfa0e56
Parents: 2bc38da
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu Aug 11 18:11:41 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu Aug 11 18:11:41 2016 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md           | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/ddb7fcf0/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index c0a9dd9..a834291 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -1,14 +1,12 @@
 ---
 title: "Configuration Options Introduction"
-date: 2016-08-05 18:24:50 UTC
+date: 2016-08-12 01:11:42 UTC
 parent: "Configuration Options"
 ---
-Drill provides many configuration options that you can enable, disable, or
-modify. Modifying certain configuration options can impact Drill
-performance. Many of configuration options reside in the `drill-
-env.sh` and `drill-override.conf` files in the
-`/conf` directory. Drill sources` /etc/drill/conf` if it exists. Otherwise,
-Drill sources the local `<drill_installation_directory>/conf` directory.
+
+Drill provides many configuration options that you can enable, disable, or modify. Modifying certain configuration options can impact Drill performance. Many of the configuration options reside in the `drill-env.sh` script and the `drill-override.conf` configuration file located in the `$DRILL_HOME/conf` directory. Drill loads these files from `/etc/drill/conf`, if it exists. Otherwise, Drill loads the files from the `$DRILL_HOME/conf` directory.  
+
+{% include startnote.html %}As of Drill 1.8, default Drill settings reside in `$DRILL_HOME/bin/drill-config.sh`. You can override many settings by creating an entry in `$DRILL_HOME/conf/drill-env.sh`. The file includes descriptions of the options that you can set.{% include endnote.html %}
 
 The sys.options table contains information about system and session options. The sys.boot table contains information about Drill start-up options. The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view key boot options. The following table lists the system options in alphabetical order and provides a brief description of supported options.
 


[09/17] drill git commit: Update partition pruning intro for 1.8 - pp on parquet metadata cache

Posted by br...@apache.org.
Update partition pruning intro for 1.8 - pp on parquet metadata cache


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/bb118573
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/bb118573
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/bb118573

Branch: refs/heads/gh-pages
Commit: bb11857308c2b0e05d758288973bf63fa65b73ea
Parents: 2347455
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Mon Aug 8 11:42:18 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Mon Aug 8 11:42:18 2016 -0700

----------------------------------------------------------------------
 .../010-partition-pruning-introduction.md       | 47 +++++++++++---------
 1 file changed, 25 insertions(+), 22 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/bb118573/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md b/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
index 20e314b..315b062 100644
--- a/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
+++ b/_docs/performance-tuning/partition-pruning/010-partition-pruning-introduction.md
@@ -1,22 +1,25 @@
----
-title: "Partition Pruning Introduction"
-date:  
-parent: "Partition Pruning"
---- 
-
-Partition pruning is a performance optimization that limits the number of files and partitions that Drill reads when querying file systems and Hive tables. When you partition data, Drill only reads a subset of the files that reside in a file system or a subset of the partitions in a Hive table when a query matches certain filter criteria.
-
-The query planner in Drill performs partition pruning by evaluating the filters. If no partition filters are present, the underlying Scan operator reads all files in all directories and then sends the data to operators, such as Filter, downstream. When partition filters are present, the query planner pushes the filters down to the Scan if possible. The Scan reads only the directories that match the partition filters, thus reducing disk I/O.
-
-## Using Partitioned Drill Data
-Before using Parquet data created by Drill 1.2 or earlier in later releases, you need to migrate the data. Migrate Parquet data as described in ["Migrating Parquet Data"]({{site.baseurl}}/docs/migrating-parquet-data/). 
-
-{% include startimportant.html %}Migrate only Parquet files that Drill generated.{% include endimportant.html %}
-
-## Partitioning Data
-In early versions of Drill, partition pruning involved time-consuming manual setup tasks. Using the PARTITION BY clause in the CTAS command simplifies the process.
-
-
-
-
-
+---
+title: "Partition Pruning Introduction"
+date: 2016-08-08 18:42:19 UTC
+parent: "Partition Pruning"
+--- 
+
+Partition pruning is a performance optimization that limits the number of files and partitions that Drill reads when querying file systems and Hive tables. When you partition data, Drill only reads a subset of the files that reside in a file system or a subset of the partitions in a Hive table when a query matches certain filter criteria.
+
+As of Drill 1.8, partition pruning also applies to the parquet metadata cache. See [Optimizing Parquet Metadata Reading]({{site.baseurl}}/docs/optimizing-parquet-metadata-reading/) to see how to create a parquet metadata cache. When data is partitioned in a directory hierarchy, Drill attempts to read the metadata cache file from a sub-partition, based on matching filter criteria instead of reading from the top level partition, to reduce the amount of metadata read during the query planning time. 
+
+
+The query planner in Drill performs partition pruning by evaluating the filters. If no partition filters are present, the underlying Scan operator reads all files in all directories and then sends the data to operators, such as Filter, downstream. When partition filters are present, the query planner pushes the filters down to the Scan if possible. The Scan reads only the directories that match the partition filters, thus reducing disk I/O.
+
+## Using Partitioned Drill Data
+Before using Parquet data created by Drill 1.2 or earlier in later releases, you need to migrate the data. Migrate Parquet data as described in ["Migrating Parquet Data"]({{site.baseurl}}/docs/migrating-parquet-data/). 
+
+{% include startimportant.html %}Migrate only Parquet files that Drill generated.{% include endimportant.html %}
+
+## Partitioning Data
+In early versions of Drill, partition pruning involved time-consuming manual setup tasks. Using the PARTITION BY clause in the CTAS command simplifies the process.
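+
+For example, the partitioning step described above can be sketched as a single CTAS statement; the table, column, and path names here are hypothetical, and the partition column must appear in the SELECT list:
+
+```sql
+-- Sketch only: partition the output Parquet files by year at write time.
+CREATE TABLE dfs.tmp.`sales_by_yr`
+PARTITION BY (yr)
+AS SELECT EXTRACT(year FROM sale_date) AS yr, prod_id, amount
+FROM dfs.`/data/sales.parquet`;
+```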
+
+
+
+
+


[03/17] drill git commit: 1.8 edit

Posted by br...@apache.org.
1.8 edit


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/d2d9f432
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/d2d9f432
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/d2d9f432

Branch: refs/heads/gh-pages
Commit: d2d9f4323878bc3fbcbe0a1df447752b439a5825
Parents: 640b2c4
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu Aug 4 15:01:43 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu Aug 4 15:01:43 2016 -0700

----------------------------------------------------------------------
 _docs/sql-reference/sql-commands/055-drop-table.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/d2d9f432/_docs/sql-reference/sql-commands/055-drop-table.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/055-drop-table.md b/_docs/sql-reference/sql-commands/055-drop-table.md
index 35501b8..533a1e8 100644
--- a/_docs/sql-reference/sql-commands/055-drop-table.md
+++ b/_docs/sql-reference/sql-commands/055-drop-table.md
@@ -1,6 +1,6 @@
 ---
 title: "DROP TABLE"
-date: 2016-08-04 16:47:22 UTC
+date: 2016-08-04 22:01:44 UTC
 parent: "SQL Commands"
 ---
 
@@ -42,7 +42,7 @@ A unique directory or file name, optionally prefaced by a storage plugin name, s
 * When user impersonation is not enabled in Drill, Drill accesses the file system as the user running the Drillbit. This user is typically a super user who has permission to delete most files. In this scenario, use the DROP TABLE command with caution to avoid deleting critical files and directories.  
 
 ###Views
-* Views are independent of tables. If you drop a base table on which views were defined, the views become invalid, but users can still access them. You must explicitly drop any view that references a dropped table using the [DROP VIEW command]({{ site.baseurl }}/docs/drop-view/).  
+* Views are independent of tables. Views that reference dropped tables become invalid. You must explicitly drop any view that references a dropped table using the [DROP VIEW command]({{ site.baseurl }}/docs/drop-view/).  
 
 ###Concurrency 
 * Concurrency occurs when two processes try to access and/or change data at the same time. Currently, Drill does not have a mechanism in place, such as read locks on files, to address concurrency issues. For example, if one user runs a query that references a table that another user simultaneously issues the DROP TABLE command against, there is no mechanism in place to prevent a collision of the two processes. In such a scenario, Drill may return partial query results or a system error to the user running the query when the table is dropped. 


[06/17] drill git commit: edit

Posted by br...@apache.org.
http://git-wip-us.apache.org/repos/asf/drill/blob/365747ae/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index 9e5ad08..fe422b6 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -1,6 +1,6 @@
 ---
 title: "Configuration Options Introduction"
-date: 2016-08-04 22:49:29 UTC
+date: 2016-08-04 22:55:54 UTC
 parent: "Configuration Options"
 ---
 Drill provides many configuration options that you can enable, disable, or
@@ -15,79 +15,79 @@ The sys.options table contains information about system and session options. The
 ## System Options
 The sys.options table lists the following options that you can set as a system or session option as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). 
 
-| Name | Default | Comments |
-|------|---------|----------|
-| drill.exec.functions.cast_empty_string_to_null | FALSE | In a text file, treat empty fields as NULL values instead of empty string. |
-| drill.exec.storage.file.partition.column.label | dir | The column label for directory levels in results of queries of files in a directory. Accepts a string input. |
-| exec.enable_union_type | FALSE | Enable support for Avro union type. |
-| exec.errors.verbose | FALSE | Toggles verbose output of executable error messages. |
-| exec.java_compiler | DEFAULT | Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler. |
-| exec.java_compiler_debug | TRUE | Toggles the output of debug-level compiler error messages in runtime generated code. |
-| exec.java_compiler_janino_maxsize | 262144 | See the exec.java_compiler option comment. Accepts inputs of type LONG. |
-| exec.max_hash_table_size | 1073741824 | Ending size in buckets for hash tables. Range: 0 - 1073741824. |
-| exec.min_hash_table_size | 65536 | Starting size in buckets for hash tables. Increase according to available memory to improve performance. Increase for very large aggregations or joins when you have large amounts of memory for Drill to use. Range: 0 - 1073741824. |
-| exec.queue.enable | FALSE | Changes the state of query queues. False allows unlimited concurrent queries. |
-| exec.queue.large | 10 | Sets the number of large queries that can run concurrently in the cluster. Range: 0-1000 |
-| exec.queue.small | 100 | Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001 |
-| exec.queue.threshold | 30000000 | Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether a query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807 |
-| exec.queue.timeout_millis | 300000 | Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807 |
-| exec.schedule.assignment.old | FALSE | Used to prevent query failure when no work units are assigned to a minor fragment, particularly when the number of files is much larger than the number of leaf fragments. |
-| exec.storage.enable_new_text_reader | TRUE | Enables the text reader that complies with the RFC 4180 standard for text/csv files. |
-| new_view_default_permissions | 700 | Sets view permissions using an octal code in the Unix tradition. |
-| planner.add_producer_consumer | FALSE | Increase prefetching of data from disk. Disable for in-memory reads. |
-| planner.affinity_factor | 1.2 | Factor by which a node with endpoint affinity is favored while creating assignment. Accepts inputs of type DOUBLE. |
-| planner.broadcast_factor | 1 | A heuristic parameter for influencing the broadcast of records as part of a query. |
-| planner.broadcast_threshold | 10000000 | The maximum number of records allowed to be broadcast as part of a query. After one million records, Drill reshuffles data rather than doing a broadcast to one side of the join. Range: 0-2147483647 |
-| planner.disable_exchanges | FALSE | Toggles the state of hashing to a random exchange. |
-| planner.enable_broadcast_join | TRUE | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join, and nested loop join. Use to join a large (fact) table to relatively smaller (dimension) tables. Do not disable. |
-| planner.enable_constant_folding | TRUE | If one side of a filter condition is a constant expression, constant folding evaluates the expression in the planning phase and replaces the expression with the constant value. For example, Drill can rewrite WHERE age + 5 < 42 as WHERE age < 37. |
-| planner.enable_decimal_data_type | FALSE | False disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. |
                                                                                                                                                              |
-| planner.enable_demux_exchange                  | FALSE              | Toggles the state of hashing to a demultiplexed exchange. |
-| planner.enable_hash_single_key                 | TRUE               | Each hash key is associated with a single value. |
-| planner.enable_hashagg                         | TRUE               | Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enable is recommended. |
-| planner.enable_hashjoin                        | TRUE               | Enable the memory-hungry hash join. Drill assumes that a query will have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint. |
-| planner.enable_hashjoin_swap                   | TRUE               | Enables consideration of multiple join order sequences during the planning phase. Might negatively affect the performance of some queries due to inaccuracy of estimated row count, especially after a filter, join, or aggregation. |
-| planner.enable_hep_join_opt                    |                    | Enables the heuristic planner for joins. |
-| planner.enable_mergejoin                       | TRUE               | Sort-based operation. A merge join is used for inner join, left and right outer joins. Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk. |
-| planner.enable_multiphase_agg                  | TRUE               | Each minor fragment does a local aggregation in phase 1, distributes its partially aggregated results to other fragments on a hash basis using the GROUP BY keys, and then all the fragments perform a total aggregation using this data. |
-| planner.enable_mux_exchange                    | TRUE               | Toggles the state of hashing to a multiplexed exchange. |
-| planner.enable_nestedloopjoin                  | TRUE               | Sort-based operation. Writes to disk. |
-| planner.enable_nljoin_for_scalar_only          | TRUE               | Supports nested loop join planning where the right input is scalar in order to enable NOT-IN, Inequality, Cartesian, and uncorrelated EXISTS planning. |
-| planner.enable_streamagg                       | TRUE               | Sort-based operation. Writes to disk. |
-| planner.filter.max_selectivity_estimate_factor | -                  | Sets the maximum filter selectivity estimate. The selectivity can vary between 0 and 1. For more details, see planner.filter.min_selectivity_estimate_factor. |
-| planner.filter.min_selectivity_estimate_factor | -                  | Sets the minimum filter selectivity estimate to increase the parallelization of the major fragment performing a join. This option is useful for deeply nested queries with complicated predicates and serves as a workaround when statistics are insufficient or unavailable. The selectivity can vary between 0 and 1. The value of this option caps the estimated SELECTIVITY. The estimated ROWCOUNT is derived by multiplying the estimated SELECTIVITY by the estimated ROWCOUNT of the upstream operator. The estimated ROWCOUNT displays when you use the EXPLAIN PLAN INCLUDING ALL ATTRIBUTES FOR command. This option does not control the estimated ROWCOUNT of downstream operators (post FILTER). However, estimated ROWCOUNTs may change because the operator ROWCOUNTs depend on their downstream operators. The FILTER operator relies on the input of its immediate upstream operator, for example SCAN, AGGREGATE. If two filters are present in a plan, each filter may have a different estimated ROWCOUNT based on the immediate upstream operator's estimated ROWCOUNT. |
-| planner.identifier_max_length                  | 1024               | A minimum length is needed because option names are identifiers themselves. |
-| planner.join.hash_join_swap_margin_factor      | 10                 | The number of join order sequences to consider during the planning phase. |
-| planner.join.row_count_estimate_factor         | 1                  | The factor for adjusting the estimated row count when considering multiple join order sequences during the planning phase. |
-| planner.memory.average_field_width             | 8                  | Used in estimating memory requirements. |
-| planner.memory.enable_memory_estimation        | FALSE              | Toggles the state of memory estimation and re-planning of the query. When enabled, Drill conservatively estimates memory requirements and typically excludes these operators from the plan and negatively impacts performance. |
-| planner.memory.hash_agg_table_factor           | 1.1                | A heuristic value for influencing the size of the hash aggregation table. |
-| planner.memory.hash_join_table_factor          | 1.1                | A heuristic value for influencing the size of the hash join table. |
-| planner.memory.max_query_memory_per_node       | 2147483648 bytes   | Sets the maximum amount of direct memory allocated to the sort operator in each query on a node. If a query plan contains multiple sort operators, they all share this memory. If you encounter memory issues when running queries with sort operators, increase the value of this option. |
-| planner.memory.non_blocking_operators_memory   | 64                 | Extra query memory per node for non-blocking operators. This option is currently used only for memory estimation. Range: 0-2048 MB. |
-| planner.memory_limit                           | 268435456 bytes    | Defines the maximum amount of direct memory allocated to a query for planning. When multiple queries run concurrently, each query is allocated the amount of memory set by this parameter. Increase the value of this parameter and rerun the query if partition pruning failed due to insufficient memory. |
-| planner.nestedloopjoin_factor                  | 100                | A heuristic value for influencing the nested loop join. |
-| planner.partitioner_sender_max_threads         | 8                  | Upper limit of threads for outbound queuing. |
-| planner.partitioner_sender_set_threads         | -1                 | Overwrites the number of threads used to send out batches of records. Set to -1 to disable. Typically not changed. |
-| planner.partitioner_sender_threads_factor      | 2                  | A   heuristic param to use to influence final number of threads. The higher the   value the fewer the number of threads.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     
                                                                                                                                                              |
-| planner.producer_consumer_queue_size           | 10                 | How   much data to prefetch from disk in record batches out-of-band of query   execution. The larger the queue size, the greater the amount of memory that   the queue and overall query execution consumes.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 
                                                                                                                                                              |
-| planner.slice_target                           | 100000             | The   number of records manipulated within a fragment before Drill parallelizes   operations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                
                                                                                                                                                              |
-| planner.width.max_per_node                     | 3                  | Maximum   number of threads that can run in parallel for a query on a node. A slice is   an individual thread. This number indicates the maximum number of slices per   query for the query\u2019s major fragment on a node.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                                                                                                                                |
-| planner.width.max_per_query                    | 1000               | Same   as max per node but applies to the query as executed by the entire cluster.   For example, this value might be the number of active Drillbits, or a higher   number to return results faster.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         
                                                                                                                                                              |
-| security.admin.user_groups                     | n/a                | Unsupported   as of 1.4. A comma-separated list of administrator groups for Web Console   security.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          
                                                                                                                                                              |
-| security.admin.users                           |                    | Unsupported   as of 1.4. A comma-separated list of user names who you want to give   administrator privileges.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               
                                                                                                                                                              |
-| store.format                                   | parquet            | Output   format for data written to tables with the CREATE TABLE AS (CTAS) command.   Allowed values are parquet, json, psv, csv, or tsv.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                                                                                                                              |
-| store.hive.optimize_scan_with_native_readers   | FALSE              | Optimize   reads of Parquet-backed external tables from Hive by using Drill native   readers instead of the Hive Serde interface. (Drill 1.2 and later)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      
                                                                                                                                                              |
-| store.json.all_text_mode                       | FALSE              | Drill   reads all data from the JSON files as VARCHAR. Prevents schema change errors.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        
                                                                                                                                                              |
-| store.json.extended_types                      | FALSE              | Turns   on special JSON structures that Drill serializes for storing more type   information than the four basic JSON types.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 
                                                                                                                                                              |
-| store.json.read_numbers_as_double              | FALSE              | Reads   numbers with or without a decimal point as DOUBLE. Prevents schema change   errors.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
                                                                                                                                                              |
-| store.mongo.all_text_mode                      | FALSE              | Similar   to store.json.all_text_mode for MongoDB.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           
                                                                                                                                                              |
-| store.mongo.read_numbers_as_double             | FALSE              | Similar   to store.json.read_numbers_as_double.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              
                                                                                                                                                              |
-| store.parquet.block-size                       | 536870912          | Sets   the size of a Parquet row group to the number of bytes less than or equal to   the block size of MFS, HDFS, or the file system.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
                                                                                                                                                              |
-| store.parquet.compression                      | snappy             | Compression   type for storing Parquet output. Allowed values: snappy, gzip, none                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
                                                                                                                                                              |
-| store.parquet.enable_dictionary_encoding       | FALSE              | For   internal use. Do not change.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           
                                                                                                                                                              |
-| store.parquet.dictionary.page-size             | 1048576            |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              
                                                                                                                                                              |
-| store.parquet.use_new_reader                   | FALSE              | Not   supported in this release.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             
                                                                                                                                                              |
-| store.partition.hash_distribute                | FALSE              | Uses   a hash algorithm to distribute data on partition keys in a CTAS partitioning   operation. An alpha option--for experimental use at this stage. Do not use in   production systems.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                                                                                                                              |
-| store.text.estimated_row_size_bytes            | 100                | Estimate   of the row size in a delimited text file, such as csv. The closer to actual,   the better the query plan. Used for all csv files in the system/session where   the value is set. Impacts the decision to plan a broadcast join or not.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
                                                                                                                                                              |
-| window.enable                                  | TRUE               | Enable   or disable window functions in Drill 1.1 and later.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 
                                                                                                                                                              |
\ No newline at end of file
+| Name                                           | Default            | Comments                                                                                                                                                                                                                                                        

<TRUNCATED>

[17/17] drill git commit: updates for drill 1.8 release

Posted by br...@apache.org.
updates for drill 1.8 release


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/a0d5528e
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/a0d5528e
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/a0d5528e

Branch: refs/heads/gh-pages
Commit: a0d5528ebce96b4a8f8fccde00af71c08db9ebd7
Parents: 22d454b
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Tue Aug 30 15:24:07 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Tue Aug 30 15:24:07 2016 -0700

----------------------------------------------------------------------
 _data/version.json                           |  4 +--
 blog/_posts/2016-08-15-drill-1.8-released.md | 31 -----------------------
 blog/_posts/2016-08-30-drill-1.8-released.md | 30 ++++++++++++++++++++++
 3 files changed, 32 insertions(+), 33 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/a0d5528e/_data/version.json
----------------------------------------------------------------------
diff --git a/_data/version.json b/_data/version.json
index 72d2ff0..f476f07 100644
--- a/_data/version.json
+++ b/_data/version.json
@@ -1,7 +1,7 @@
 {
   "display_version": "1.8",
   "full_version": "1.8.0",
-  "release_date": "August 15, 2016",
-  "blog_post":"/blog/2016/08/15/drill-1.8-released",
+  "release_date": "August 30, 2016",
+  "blog_post":"/blog/2016/08/30/drill-1.8-released",
   "release_notes": "https://drill.apache.org/docs/apache-drill-1-8-0-release-notes/"
 }

http://git-wip-us.apache.org/repos/asf/drill/blob/a0d5528e/blog/_posts/2016-08-15-drill-1.8-released.md
----------------------------------------------------------------------
diff --git a/blog/_posts/2016-08-15-drill-1.8-released.md b/blog/_posts/2016-08-15-drill-1.8-released.md
deleted file mode 100644
index 2eac1fa..0000000
--- a/blog/_posts/2016-08-15-drill-1.8-released.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-layout: post
-title: "Drill 1.8 Released"
-code: drill-1.8-released
-excerpt: Apache Drill 1.8's highlights are&#58; metadata cache pruning, IF EXISTS support, DESCRIBE SCHEMA command, multi-byte delimiter support, and new parameters for filter selectivity estimates.
-authors: ["bbevens"]
----
-
-Today, we're happy to announce the availability of Drill 1.8.0. You can download it [here](https://drill.apache.org/download/).
-
-The release provides the following bug fixes and improvements:
-
-## Metadata Cache Pruning 
-Drill now applies partition pruning to the metadata cache file. See [Partition Pruning Introduction](https://drill.apache.org/docs/partition-pruning-introduction/) and [Optimizing Parquet Metadata Reading](https://drill.apache.org/docs/optimizing-parquet-metadata-reading/). 
-
-## IF EXISTS Support  
-You can include the new IF EXISTS parameter with the DROP TABLE and DROP VIEW commands to prevent Drill from returning error messages when a table or view does not exist. See [DROP TABLE](https://drill.apache.org/docs/drop-table/) and [DROP VIEW](https://drill.apache.org/docs/drop-view/).
-
-
-## DESCRIBE SCHEMA Command 
-Drill now supports the DESCRIBE SCHEMA command which provides schema properties for storage plugin configurations and workspaces. See [DESCRIBE](https://drill.apache.org/docs/describe/).  
-
-## Multi-Byte Delimiter Support  
-Drill now supports multi-byte delimiters for text files, such as \r\n. See [List of Attributes and Definitions](https://drill.apache.org/docs/plugin-configuration-basics/#list-of-attributes-and-definitions).  
-
-## Filter Selectivity Estimate Parameters  
-New parameters set the minimum filter selectivity estimate to increase the parallelization of the major fragment performing a join. See [System Options](https://drill.apache.org/docs/configuration-options-introduction/#system-options). 
- 
-
-A complete list of JIRAs resolved in the 1.8.0 release can be found [here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334768&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7C0721f2a625165c1e2cc6c0d2cfb41b437cc68769%7Clin).
-

http://git-wip-us.apache.org/repos/asf/drill/blob/a0d5528e/blog/_posts/2016-08-30-drill-1.8-released.md
----------------------------------------------------------------------
diff --git a/blog/_posts/2016-08-30-drill-1.8-released.md b/blog/_posts/2016-08-30-drill-1.8-released.md
new file mode 100644
index 0000000..c3174e9
--- /dev/null
+++ b/blog/_posts/2016-08-30-drill-1.8-released.md
@@ -0,0 +1,30 @@
+---
+layout: post
+title: "Drill 1.8 Released"
+code: drill-1.8-released
+excerpt: Apache Drill 1.8's highlights are&#58; metadata cache pruning, IF EXISTS support, DESCRIBE SCHEMA command, multi-byte delimiter support, and new parameters for filter selectivity estimates.
+authors: ["bbevens"]
+---
+
+Today, we're happy to announce the availability of Drill 1.8.0. You can download it [here](https://drill.apache.org/download/).
+
+The release provides the following bug fixes and improvements:
+
+## Metadata Cache Pruning 
+Drill now applies partition pruning to the metadata cache file. See [Partition Pruning Introduction](https://drill.apache.org/docs/partition-pruning-introduction/) and [Optimizing Parquet Metadata Reading](https://drill.apache.org/docs/optimizing-parquet-metadata-reading/). 
+
+## IF EXISTS Support  
+You can include the new IF EXISTS parameter with the DROP TABLE and DROP VIEW commands to prevent Drill from returning error messages when a table or view does not exist. See [DROP TABLE](https://drill.apache.org/docs/drop-table/) and [DROP VIEW](https://drill.apache.org/docs/drop-view/).
+
+## DESCRIBE SCHEMA Command 
+Drill now supports the DESCRIBE SCHEMA command, which provides schema properties for storage plugin configurations and workspaces. See [DESCRIBE](https://drill.apache.org/docs/describe/).  
+
+## Multi-Byte Delimiter Support  
+Drill now supports multi-byte delimiters for text files, such as \r\n. See [List of Attributes and Definitions](https://drill.apache.org/docs/plugin-configuration-basics/#list-of-attributes-and-definitions).  
+
+## Filter Selectivity Estimate Parameters  
+New parameters set the minimum filter selectivity estimate to increase the parallelization of the major fragment performing a join. See [System Options](https://drill.apache.org/docs/configuration-options-introduction/#system-options). 
+ 
+
+A complete list of JIRAs resolved in the 1.8.0 release can be found [here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334768&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7Ce8d020149d9a6082481af301e563adbe35c76a87%7Clout).
+

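As an aside on the multi-byte delimiter support announced in the blog post above, a text-format entry in a storage plugin configuration might look like the following sketch. This is illustrative only: the `\r\n` line delimiter is the example given in the post, and the surrounding plugin layout (a `csv` format block) is an assumption, not taken from this commit.

```json
"formats": {
  "csv": {
    "type": "text",
    "extensions": ["csv"],
    "fieldDelimiter": ",",
    "lineDelimiter": "\r\n"
  }
}
```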

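The IF EXISTS and DESCRIBE SCHEMA features announced in the blog post above can be sketched in a short SQL session. The table, view, and workspace names (`dfs.tmp`, `donuts`, `donuts_view`) are hypothetical, and this is an illustrative sketch rather than captured output:

```sql
-- With IF EXISTS, Drill returns a confirmation message instead of an
-- error when the table or view is absent.
DROP TABLE IF EXISTS dfs.tmp.`donuts`;
DROP VIEW IF EXISTS dfs.tmp.`donuts_view`;

-- New in 1.8: show the configuration properties of a schema,
-- here a storage plugin workspace.
DESCRIBE SCHEMA dfs.tmp;
```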
[15/17] drill git commit: update to 1.8 release notes

Posted by br...@apache.org.
update to 1.8 release notes


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/1c97d7c0
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/1c97d7c0
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/1c97d7c0

Branch: refs/heads/gh-pages
Commit: 1c97d7c00907f5ae8704d5d161a7288072980065
Parents: e14071f
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Mon Aug 29 12:37:26 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Mon Aug 29 12:37:26 2016 -0700

----------------------------------------------------------------------
 _docs/rn/003-1.8.0-rn.md | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/1c97d7c0/_docs/rn/003-1.8.0-rn.md
----------------------------------------------------------------------
diff --git a/_docs/rn/003-1.8.0-rn.md b/_docs/rn/003-1.8.0-rn.md
index 0d4e1c7..449b062 100644
--- a/_docs/rn/003-1.8.0-rn.md
+++ b/_docs/rn/003-1.8.0-rn.md
@@ -22,12 +22,13 @@ This release of Drill also includes the following changes to the configuration a
 - Default Drill settings now reside in `$DRILL_HOME/bin/drill-config.sh`. You can override many settings by creating an entry in `$DRILL_HOME/conf/drill-env.sh`. The file includes descriptions of the options that you can set.  ([DRILL-4581](https://issues.apache.org/jira/browse/DRILL-4581))  
 - Due to issues at high concurrency, the native Linux epoll transport is now disabled by default. ([DRILL-4623](https://issues.apache.org/jira/browse/DRILL-4623))  
  
-If you upgrade to Drill 1.8, you must merge your custom settings with the latest settings in the `drill-override.conf` and `drill-env.sh` file that ships with Drill. As of Drill 1.8, all Drill defaults reside in the Drill scripts. The `drill-env.sh` script contains only your customizations. When you merge your existing `drill-env.sh` file with the 1.8 version of the file, you can remove all of the settings in your file except for those you created yourself. Consult the original `drill-env.sh` file from the prior Drill release to determine which settings you can remove.
+If you upgrade to Drill 1.8, you must merge your custom settings with the latest settings in the `drill-override.conf` and `drill-env.sh` file that ships with Drill. As of Drill 1.8, all Drill defaults reside in the Drill scripts. The `drill-env.sh` script contains only your customizations. When you merge your existing `drill-env.sh` file with the 1.8 version of the file, you can remove all of the settings in your file except for those that you created yourself. Consult the original `drill-env.sh` file from the prior Drill release to determine which settings you can remove.
 
 
 
 The following sections list additional bug fixes and improvements:
 
+    
 <h2>        Sub-task
 </h2>
 <ul>
@@ -74,8 +75,14 @@ The following sections list additional bug fixes and improvements:
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4759'>DRILL-4759</a>] -         Drill throwing array index out of bound exception when reading a parquet file written by map reduce program.
 </li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4767'>DRILL-4767</a>] -         Parquet reader throw IllegalArgumentException for int32 type with GZIP compression
+</li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4768'>DRILL-4768</a>] -         Drill may leak hive meta store connection if hive meta store client call hits error
 </li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4769'>DRILL-4769</a>] -         forman spins query int32 data with snappy compression
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4770'>DRILL-4770</a>] -         ParquetRecordReader throws NPE querying a single int64 column file
+</li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4783'>DRILL-4783</a>] -         Flatten on CONVERT_FROM fails with ClassCastException if resultset is empty
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4785'>DRILL-4785</a>] -         Limit 0 queries regressed in Drill 1.7.0 
@@ -88,10 +95,18 @@ The following sections list additional bug fixes and improvements:
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4825'>DRILL-4825</a>] -         Wrong data with UNION ALL when querying different sub-directories under the same table
 </li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4833'>DRILL-4833</a>] -         Union-All with a small cardinality input on one side does not get parallelized
+</li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4836'>DRILL-4836</a>] -         ZK Issue during Drillbit startup, possibly due to race condition
 </li>
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4846'>DRILL-4846</a>] -         Eliminate extra operations during metadata cache pruning
 </li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4852'>DRILL-4852</a>] -         COUNT(*) query against a large JSON table slower by 2x
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4854'>DRILL-4854</a>] -         Incorrect logic in log directory checks in drill-config.sh
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4857'>DRILL-4857</a>] -         When no partition pruning occurs with metadata caching there&#39;s a performance regression
+</li>
 </ul>
                         
 <h2>        Improvement
@@ -132,7 +147,4 @@ The following sections list additional bug fixes and improvements:
 <li>[<a href='https://issues.apache.org/jira/browse/DRILL-4499'>DRILL-4499</a>] -         Remove unused classes
 </li>
 </ul>
-                  
-
-    
-                                           
\ No newline at end of file
+                
\ No newline at end of file


[08/17] drill git commit: edit

Posted by br...@apache.org.
edit


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/23474554
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/23474554
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/23474554

Branch: refs/heads/gh-pages
Commit: 23474554d7976b48a2b307335457438613fa4ad6
Parents: 365747a
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Fri Aug 5 11:24:49 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Fri Aug 5 11:24:49 2016 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md                  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/23474554/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index fe422b6..c0a9dd9 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -1,6 +1,6 @@
 ---
 title: "Configuration Options Introduction"
-date: 2016-08-04 22:55:54 UTC
+date: 2016-08-05 18:24:50 UTC
 parent: "Configuration Options"
 ---
 Drill provides many configuration options that you can enable, disable, or
@@ -54,8 +54,8 @@ The sys.options table lists the following options that you can set as a system o
 | planner.enable_nestedloopjoin                  | TRUE               | Sort-based   operation. Writes to disk.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      
                                                                                                                                                                                         |
 | planner.enable_nljoin_for_scalar_only          | TRUE               | Supports   nested loop join planning where the right input is scalar in order to enable   NOT-IN, Inequality, Cartesian, and uncorrelated EXISTS planning.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   
                                                                                                                                                                                         |
 | planner.enable_streamagg                       | TRUE               | Sort-based   operation. Writes to disk.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      
                                                                                                                                                                                         |
-| planner.filter.max_selectivity_estimate_factor | -                  | Available as of Drill 1.8. Sets the maximum filter  selectivity estimate. The selectivity can vary between 0 and 1. For more  details, see planner.filter.min_selectivity_estimate_factor.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   
                                                                                                                                                                                         |
-| planner.filter.min_selectivity_estimate_factor | -                  | Available as of Drill 1.8. Sets the minimum filter   selectivity estimate to increase the parallelization of the major fragment   performing a join. This option is useful for deeply nested queries with   complicated predicates and serves as a workaround when statistics are   insufficient or unavailable. The selectivity can vary between 0 and 1. The   value of this option caps the estimated SELECTIVITY. The estimated ROWCOUNT   is derived by multiplying the estimated SELECTIVITY by the estimated ROWCOUNT   of the upstream operator. The estimated ROWCOUNT displays when you use the   EXPLAIN PLAN INCLUDING ALL ATTRIBUTES FOR command. This option does not   control the estimated ROWCOUNT of downstream operators (post FILTER).   However, estimated ROWCOUNTs may change because the operator ROWCOUNTs depend   on their downstream operators. The FILTER operator relies on the input of its   immediate upstream operator, for example SCAN, AGGREGATE. If two filters are   present in a plan, each filter may have a different estimated ROWCOUNT based   on the immediate upstream operator's estimated ROWCOUNT. |
+| planner.filter.max_selectivity_estimate_factor | 1                  | Available as of Drill 1.8. Sets the maximum filter  selectivity estimate. The selectivity can vary between 0 and 1. For more  details, see planner.filter.min_selectivity_estimate_factor.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   
                                                                                                                                                                                         |
+| planner.filter.min_selectivity_estimate_factor | 0                  | Available as of Drill 1.8. Sets the minimum filter   selectivity estimate to increase the parallelization of the major fragment   performing a join. This option is useful for deeply nested queries with   complicated predicates and serves as a workaround when statistics are   insufficient or unavailable. The selectivity can vary between 0 and 1. The   value of this option caps the estimated SELECTIVITY. The estimated ROWCOUNT   is derived by multiplying the estimated SELECTIVITY by the estimated ROWCOUNT   of the upstream operator. The estimated ROWCOUNT displays when you use the   EXPLAIN PLAN INCLUDING ALL ATTRIBUTES FOR command. This option does not   control the estimated ROWCOUNT of downstream operators (post FILTER).   However, estimated ROWCOUNTs may change because the operator ROWCOUNTs depend   on their downstream operators. The FILTER operator relies on the input of its   immediate upstream operator, for example SCAN, AGGREGATE. If two filters are   present in a plan, each filter may have a different estimated ROWCOUNT based   on the immediate upstream operator's estimated ROWCOUNT. |
 | planner.identifier_max_length                  | 1024               | A   minimum length is needed because option names are identifiers themselves.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                
                                                                                                                                                                                         |
 | planner.join.hash_join_swap_margin_factor      | 10                 | The   number of join order sequences to consider during the planning phase.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
                                                                                                                                                                                         |
 | planner.join.row_count_estimate_factor         | 1                  | The   factor for adjusting the estimated row count when considering multiple join   order sequences during the planning phase.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               
                                                                                                                                                                                         |


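The two filter selectivity options documented in the diff above are planning options, so they can be set at the system or session level. A hedged sketch (the numeric values are illustrative, chosen only to stay within the documented 0 to 1 range):

```sql
-- Raise the floor on the filter selectivity estimate so the major
-- fragment performing a join is parallelized more aggressively.
ALTER SYSTEM SET `planner.filter.min_selectivity_estimate_factor` = 0.5;

-- Cap the estimate for the current session; must be >= the minimum
-- and <= 1.
ALTER SESSION SET `planner.filter.max_selectivity_estimate_factor` = 1.0;
```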
[04/17] drill git commit: add HashJoin's not fully parallelized in query plan - new parameters for DRILL-4743

Posted by br...@apache.org.
http://git-wip-us.apache.org/repos/asf/drill/blob/baa4eddb/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index df2fcb9..9e5ad08 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -1,6 +1,6 @@
 ---
 title: "Configuration Options Introduction"
-date: 2016-02-06 00:18:12 UTC
+date: 2016-08-04 22:49:29 UTC
 parent: "Configuration Options"
 ---
 Drill provides many configuration options that you can enable, disable, or
@@ -15,77 +15,79 @@ The sys.options table contains information about system and session options. The
 ## System Options
 The sys.options table lists the following options that you can set as a system or session option as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). 
 
-| Name                                           | Default          | Comments                                                                                                                                                                                                                                                                                                                                                         |
-|------------------------------------------------|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| drill.exec.functions.cast_empty_string_to_null | FALSE            | In a text file, treat empty fields as NULL values instead of empty string.                                                                                                                                                                                                                                                                                       |
-| drill.exec.storage.file.partition.column.label | dir              | The column label for directory levels in results of queries of files in a directory. Accepts a string input.                                                                                                                                                                                                                                                     |
-| exec.enable_union_type                         | false            | Enable support for Avro union type.                                                                                                                                                                                                                                                                                                                              |
-| exec.errors.verbose                            | FALSE            | Toggles verbose output of executable error messages                                                                                                                                                                                                                                                                                                              |
-| exec.java_compiler                             | DEFAULT          | Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.                                                                                                                                                |
-| exec.java_compiler_debug                       | TRUE             | Toggles the output of debug-level compiler error messages in runtime generated code.                                                                                                                                                                                                                                                                             |
-| exec.java_compiler_janino_maxsize              | 262144           | See the exec.java_compiler option comment. Accepts inputs of type LONG.                                                                                                                                                                                                                                                                                          |
-| exec.max_hash_table_size                       | 1073741824       | Ending size in buckets for hash tables. Range: 0 - 1073741824.                                                                                                                                                                                                                                                                                                   |
-| exec.min_hash_table_size                       | 65536            | Starting size in bucketsfor hash tables. Increase according to available memory to improve performance. Increasing for very large aggregations or joins when you have large amounts of memory for Drill to use. Range: 0 - 1073741824.                                                                                                                           |
-| exec.queue.enable                              | FALSE            | Changes the state of query queues. False allows unlimited concurrent queries.                                                                                                                                                                                                                                                                                    |
-| exec.queue.large                               | 10               | Sets the number of large queries that can run concurrently in the cluster. Range: 0-1000                                                                                                                                                                                                                                                                         |
-| exec.queue.small                               | 100              | Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001                                                                                                                                                                                                                                                                         |
-| exec.queue.threshold                           | 30000000         | Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807                                                                                                                                                          |
-| exec.queue.timeout_millis                      | 300000           | Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807                                                                                                                                                                                                                                                                |
-| exec.schedule.assignment.old                   | FALSE            | Used to prevent query failure when no work units are assigned to a minor fragment, particularly when the number of files is much larger than the number of leaf fragments.                                                                                                                                                                                       |
-| exec.storage.enable_new_text_reader            | TRUE             | Enables the text reader that complies with the RFC 4180 standard for text/csv files.                                                                                                                                                                                                                                                                             |
-| new_view_default_permissions                   | 700              | Sets view permissions using an octal code in the Unix tradition.                                                                                                                                                                                                                                                                                                 |
-| planner.add_producer_consumer                  | FALSE            | Increase prefetching of data from disk. Disable for in-memory reads.                                                                                                                                                                                                                                                                                             |
-| planner.affinity_factor                        | 1.2              | Factor by which a node with endpoint affinity is favored while creating assignment. Accepts inputs of type DOUBLE.                                                                                                                                                                                                                                               |
-| planner.broadcast_factor                       | 1                | A heuristic parameter for influencing the broadcast of records as part of a query.                                                                                                                                                                                                                                                                               |
-| planner.broadcast_threshold                    | 10000000         | The maximum number of records allowed to be broadcast as part of a query. Beyond this threshold, Drill reshuffles data rather than broadcasting it to one side of the join. Range: 0-2147483647                                                                                                                                                                  |
-| planner.disable_exchanges                      | FALSE            | Toggles the state of hashing to a random exchange.                                                                                                                                                                                                                                                                                                               |
-| planner.enable_broadcast_join                  | TRUE             | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join and nested loop join. Use to join a large (fact) table to relatively smaller (dimension) tables. Do not disable.                                                                                                                                   |
-| planner.enable_constant_folding                | TRUE             | If one side of a filter condition is a constant expression, constant folding evaluates the expression in the planning phase and replaces the expression with the constant value. For example, Drill can rewrite WHERE age + 5 < 42 as WHERE age < 37.                                                                                                            |
-| planner.enable_decimal_data_type               | FALSE            | False disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive.                                                                                                                                                                                                                                              |
-| planner.enable_demux_exchange                  | FALSE            | Toggles the state of hashing to a demultiplexed exchange.                                                                                                                                                                                                                                                                                                        |
-| planner.enable_hash_single_key                 | TRUE             | Each hash key is associated with a single value.                                                                                                                                                                                                                                                                                                                 |
-| planner.enable_hashagg                         | TRUE             | Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enable is recommended.                                                                                                                                                                                                                                          |
-| planner.enable_hashjoin                        | TRUE             | Enable the memory-hungry hash join. Drill assumes that a query will have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint. |
-| planner.enable_hashjoin_swap                   | TRUE             | Enables consideration of multiple join order sequences during the planning phase. Might negatively affect the performance of some queries due to inaccuracy of estimated row counts, especially after a filter, join, or aggregation.                                                                                                                            |
-| planner.enable_hep_join_opt                    |                  | Enables the heuristic planner for joins.                                                                                                                                                                                                                                                                                                                         |
-| planner.enable_mergejoin                       | TRUE             | Sort-based operation. A merge join is used for inner join, left and right outer joins. Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk.                                                                                                                                       |
-| planner.enable_multiphase_agg                  | TRUE             | Each minor fragment does a local aggregation in phase 1, distributes on a hash basis using GROUP-BY keys partially aggregated results to other fragments, and all the fragments perform a total aggregation using this data.                                                                                                                                     |
-| planner.enable_mux_exchange                    | TRUE             | Toggles the state of hashing to a multiplexed exchange.                                                                                                                                                                                                                                                                                                          |
-| planner.enable_nestedloopjoin                  | TRUE             | Sort-based operation. Writes to disk.                                                                                                                                                                                                                                                                                                                            |
-| planner.enable_nljoin_for_scalar_only          | TRUE             | Supports nested loop join planning where the right input is scalar in order to enable NOT-IN, Inequality, Cartesian, and uncorrelated EXISTS planning.                                                                                                                                                                                                           |
-| planner.enable_streamagg                       | TRUE             | Sort-based operation. Writes to disk.                                                                                                                                                                                                                                                                                                                            |
-| planner.identifier_max_length                  | 1024             | A minimum length is needed because option names are identifiers themselves.                                                                                                                                                                                                                                                                                      |
-| planner.join.hash_join_swap_margin_factor      | 10               | The number of join order sequences to consider during the planning phase.                                                                                                                                                                                                                                                                                        |
-| planner.join.row_count_estimate_factor         | 1                | The factor for adjusting the estimated row count when considering multiple join order sequences during the planning phase.                                                                                                                                                                                                                                       |
-| planner.memory.average_field_width             | 8                | Used in estimating memory requirements.                                                                                                                                                                                                                                                                                                                          |
-| planner.memory.enable_memory_estimation        | FALSE            | Toggles the state of memory estimation and re-planning of the query. When enabled, Drill conservatively estimates memory requirements and typically excludes these operators from the plan and negatively impacts performance.                                                                                                                                   |
-| planner.memory.hash_agg_table_factor           | 1.1              | A heuristic value for influencing the size of the hash aggregation table.                                                                                                                                                                                                                                                                                        |
-| planner.memory.hash_join_table_factor          | 1.1              | A heuristic value for influencing the size of the hash join table.                                                                                                                                                                                                                                                                                               |
-| planner.memory.max_query_memory_per_node       | 2147483648 bytes | Sets the maximum amount of direct memory allocated to the sort operator in each query on a node. If a query plan contains multiple sort operators, they all share this memory. If you encounter memory issues when running queries with sort operators, increase the value of this option.                                                                                                                                                                                                    |
-| planner.memory.non_blocking_operators_memory   | 64               | Extra query memory per node for non-blocking operators. This option is currently used only for memory estimation. Range: 0-2048 MB                                                                                                                                                                                                                               |
-| planner.memory_limit                           | 268435456 bytes  | Defines the maximum amount of direct memory allocated to a query for planning. When multiple queries run concurrently, each query is allocated the amount of memory set by this parameter. Increase the value of this parameter and rerun the query if partition pruning failed due to insufficient memory.                                                      |
-| planner.nestedloopjoin_factor                  | 100              | A heuristic value for influencing the nested loop join.                                                                                                                                                                                                                                                                                                          |
-| planner.partitioner_sender_max_threads         | 8                | Upper limit of threads for outbound queuing.                                                                                                                                                                                                                                                                                                                     |
-| planner.partitioner_sender_set_threads         | -1               | Overwrites the number of threads used to send out batches of records. Set to -1 to disable. Typically not changed.                                                                                                                                                                                                                                               |
-| planner.partitioner_sender_threads_factor      | 2                | A heuristic parameter that influences the final number of threads. The higher the value, the fewer the threads.                                                                                                                                                                                                                                                  |
-| planner.producer_consumer_queue_size           | 10               | How much data to prefetch from disk in record batches out-of-band of query execution. The larger the queue size, the greater the amount of memory that the queue and overall query execution consumes.                                                                                                                                                           |
-| planner.slice_target                           | 100000           | The number of records manipulated within a fragment before Drill parallelizes operations.                                                                                                                                                                                                                                                                        |
-| planner.width.max_per_node                     | 3                | Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query's major fragment on a node.                                                                                                                                                |
-| planner.width.max_per_query                    | 1000             | Same as max per node but applies to the query as executed by the entire cluster. For example, this value might be the number of active Drillbits, or a higher number to return results faster.                                                                                                                                                                   |
-| security.admin.user_groups                     | n/a              | Unsupported as of 1.4. A comma-separated list of administrator groups for Web Console security.                                                                                                                                                                                                                                                                  |
-| security.admin.users                           | <a name>         | Unsupported as of 1.4. A comma-separated list of user names who you want to give administrator privileges.                                                                                                                                                                                                                                                       |
-| store.format                                   | parquet          | Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values are parquet, json, psv, csv, or tsv.                                                                                                                                                                                                                            |
-| store.hive.optimize_scan_with_native_readers   | FALSE            | Optimize reads of Parquet-backed external tables from Hive by using Drill native readers instead of the Hive Serde interface. (Drill 1.2 and later)                                                                                                                                                                                                              |
-| store.json.all_text_mode                       | FALSE            | Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.                                                                                                                                                                                                                                                                              |
-| store.json.extended_types                      | FALSE            | Turns on special JSON structures that Drill serializes for storing more type information than the four basic JSON types.                                                                                                                                                                                                                                         |
-| store.json.read_numbers_as_double              | FALSE            | Reads numbers with or without a decimal point as DOUBLE. Prevents schema change errors.                                                                                                                                                                                                                                                                          |
-| store.mongo.all_text_mode                      | FALSE            | Similar to store.json.all_text_mode for MongoDB.                                                                                                                                                                                                                                                                                                                 |
-| store.mongo.read_numbers_as_double             | FALSE            | Similar to store.json.read_numbers_as_double.                                                                                                                                                                                                                                                                                                                    |
-| store.parquet.block-size                       | 536870912        | Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.                                                                                                                                                                                                                               |
-| store.parquet.compression                      | snappy           | Compression type for storing Parquet output. Allowed values: snappy, gzip, none                                                                                                                                                                                                                                                                                  |
-| store.parquet.enable_dictionary_encoding       | FALSE            | For internal use. Do not change.                                                                                                                                                                                                                                                                                                                                 |
-| store.parquet.dictionary.page-size             | 1048576          |                                                                                                                                                                                                                                                                                                                                                                  |
-| store.parquet.use_new_reader                   | FALSE            | Not supported in this release.                                                                                                                                                                                                                                                                                                                                   |
-| store.partition.hash_distribute                | FALSE            | Uses a hash algorithm to distribute data on partition keys in a CTAS partitioning operation. An alpha option--for experimental use at this stage. Do not use in production systems.                                                                                                                                                                              |
-| store.text.estimated_row_size_bytes            | 100              | Estimate of the row size in a delimited text file, such as csv. The closer to actual, the better the query plan. Used for all csv files in the system/session where the value is set. Impacts the decision to plan a broadcast join or not.                                                                                                                      |
-| window.enable                                  | TRUE             | Enable or disable window functions in Drill 1.1 and later.                                                                                                                                                                                                                                                                                                       |
+| Name                                           | Default            | Comments                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     
                                                                                                                                                              |
+|------------------------------------------------|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 -------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| drill.exec.functions.cast_empty_string_to_null | FALSE              | In   a text file, treat empty fields as NULL values instead of empty string.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 
                                                                                                                                                              |
+| drill.exec.storage.file.partition.column.label | dir                | The   column label for directory levels in results of queries of files in a   directory. Accepts a string input.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             
                                                                                                                                                              |
+| exec.enable_union_type                         | FALSE              | Enable   support for Avro union type.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        
                                                                                                                                                              |
+| exec.errors.verbose                            | FALSE              | Toggles   verbose output of executable error messages                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        
                                                                                                                                                              |
+| exec.java_compiler                             | DEFAULT            | Switches   between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by   default for generated source code of less than   exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      
                                                                                                                                                              |
+| exec.java_compiler_debug                       | TRUE               | Toggles the output of debug-level compiler error messages in runtime generated code. |
+| exec.java_compiler_janino_maxsize              | 262144             | See the exec.java_compiler option comment. Accepts inputs of type LONG. |
+| exec.max_hash_table_size                       | 1073741824         | Ending size in buckets for hash tables. Range: 0 - 1073741824. |
+| exec.min_hash_table_size                       | 65536              | Starting size in buckets for hash tables. Increase according to available memory to improve performance. Increase for very large aggregations or joins when you have large amounts of memory for Drill to use. Range: 0 - 1073741824. |
+| exec.queue.enable                              | FALSE              | Changes the state of query queues. False allows unlimited concurrent queries. |
+| exec.queue.large                               | 10                 | Sets the number of large queries that can run concurrently in the cluster. Range: 0-1000. |
+| exec.queue.small                               | 100                | Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001. |
+| exec.queue.threshold                           | 30000000           | Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether a query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807. |
+| exec.queue.timeout_millis                      | 300000             | Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807. |
+| exec.schedule.assignment.old                   | FALSE              | Used to prevent query failure when no work units are assigned to a minor fragment, particularly when the number of files is much larger than the number of leaf fragments. |
+| exec.storage.enable_new_text_reader            | TRUE               | Enables the text reader that complies with the RFC 4180 standard for text/csv files. |
+| new_view_default_permissions                   | 700                | Sets view permissions using an octal code in the Unix tradition. |
+| planner.add_producer_consumer                  | FALSE              | Increase prefetching of data from disk. Disable for in-memory reads. |
+| planner.affinity_factor                        | 1.2                | Factor by which a node with endpoint affinity is favored while creating assignment. Accepts inputs of type DOUBLE. |
+| planner.broadcast_factor                       | 1                  | A heuristic parameter for influencing the broadcast of records as part of a query. |
+| planner.broadcast_threshold                    | 10000000           | The maximum number of records allowed to be broadcast as part of a query. Beyond this threshold, Drill reshuffles data rather than broadcasting it to one side of the join. Range: 0-2147483647. |
+| planner.disable_exchanges                      | FALSE              | Toggles the state of hashing to a random exchange. |
+| planner.enable_broadcast_join                  | TRUE               | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join, and nested loop join. Use to join a large (fact) table to relatively smaller (dimension) tables. Do not disable. |
+| planner.enable_constant_folding                | TRUE               | If one side of a filter condition is a constant expression, constant folding evaluates the expression in the planning phase and replaces the expression with the constant value. For example, Drill can rewrite WHERE age + 5 < 42 as WHERE age < 37. |
+| planner.enable_decimal_data_type               | FALSE              | False disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. |
+| planner.enable_demux_exchange                  | FALSE              | Toggles the state of hashing to a demultiplexed exchange. |
+| planner.enable_hash_single_key                 | TRUE               | Each hash key is associated with a single value. |
+| planner.enable_hashagg                         | TRUE               | Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enable is recommended. |
+| planner.enable_hashjoin                        | TRUE               | Enables the memory-hungry hash join. Drill assumes that a query will have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint. |
+| planner.enable_hashjoin_swap                   | TRUE               | Enables consideration of multiple join order sequences during the planning phase. Might negatively affect the performance of some queries due to inaccuracy of estimated row count, especially after a filter, join, or aggregation. |
+| planner.enable_hep_join_opt                    |                    | Enables the heuristic planner for joins. |
+| planner.enable_mergejoin                       | TRUE               | Sort-based operation. A merge join is used for inner join, left and right outer joins. Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk. |
+| planner.enable_multiphase_agg                  | TRUE               | Each minor fragment does a local aggregation in phase 1, distributes its partially aggregated results to other fragments on a hash basis using the GROUP BY keys, and then all fragments perform a total aggregation using this data. |
+| planner.enable_mux_exchange                    | TRUE               | Toggles the state of hashing to a multiplexed exchange. |
+| planner.enable_nestedloopjoin                  | TRUE               | Sort-based operation. Writes to disk. |
+| planner.enable_nljoin_for_scalar_only          | TRUE               | Supports nested loop join planning where the right input is scalar in order to enable NOT-IN, inequality, Cartesian, and uncorrelated EXISTS planning. |
+| planner.enable_streamagg                       | TRUE               | Sort-based   operation. Writes to disk.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      
                                                                                                                                                              |
+| planner.filter.max_selectivity_estimate_factor | -                  | Sets the maximum filter   selectivity estimate. The selectivity can vary between 0 and 1. For more   details, see planner.filter.min_selectivity_estimate_factor.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
                                                                                                                                                              |
+| planner.filter.min_selectivity_estimate_factor | -                  | Sets the minimum filter   selectivity estimate to increase the parallelization of the major fragment   performing a join. This option is useful for deeply nested queries with   complicated predicates and serves as a workaround when statistics are   insufficient or unavailable. The selectivity can vary between 0 and 1. The   value of this option caps the estimated SELECTIVITY. The estimated ROWCOUNT   is derived by multiplying the estimated SELECTIVITY by the estimated ROWCOUNT   of the upstream operator. The estimated ROWCOUNT displays when you use the   EXPLAIN PLAN INCLUDING ALL ATTRIBUTES FOR command. This option does not   control the estimated ROWCOUNT of downstream operators (post FILTER).   However, estimated ROWCOUNTs may change because the operator ROWCOUNTs depend   on their downstream operators. The FILTER operator relies on the input of its   immediate upstream operator, for example SCAN, AGGREGATE. 
 If two filters are   present in a plan, each filter may have a different estimated ROWCOUNT based   on the immediate upstream operator's estimated ROWCOUNT. |
+| planner.identifier_max_length                  | 1024               | A   minimum length is needed because option names are identifiers themselves.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                
                                                                                                                                                              |
+| planner.join.hash_join_swap_margin_factor      | 10                 | The   number of join order sequences to consider during the planning phase.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
                                                                                                                                                              |
+| planner.join.row_count_estimate_factor         | 1                  | The   factor for adjusting the estimated row count when considering multiple join   order sequences during the planning phase.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               
                                                                                                                                                              |
+| planner.memory.average_field_width             | 8                  | Used   in estimating memory requirements.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                                                                                                                              |
+| planner.memory.enable_memory_estimation        | FALSE              | Toggles   the state of memory estimation and re-planning of the query. When enabled,   Drill conservatively estimates memory requirements and typically excludes   these operators from the plan and negatively impacts performance.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         
                                                                                                                                                              |
+| planner.memory.hash_agg_table_factor           | 1.1                | A   heuristic value for influencing the size of the hash aggregation table.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
                                                                                                                                                              |
+| planner.memory.hash_join_table_factor          | 1.1                | A   heuristic value for influencing the size of the hash aggregation table.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
                                                                                                                                                              |
+| planner.memory.max_query_memory_per_node       | 2147483648   bytes | Sets   the maximum amount of direct memory allocated to the sort operator in each   query on a node. If a query plan contains multiple sort operators, they all   share this memory. If you encounter memory issues when running queries with   sort operators, increase the value of this option.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           
                                                                                                           

<TRUNCATED>
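The planner options documented in the table above are set with `ALTER SYSTEM` (cluster-wide) or `ALTER SESSION` (current session only) and can be inspected through the `sys.options` table. A minimal sketch, not part of the commit, using one of the new selectivity options; the value 0.5 is an arbitrary illustration:

```sql
-- Cap the minimum filter selectivity estimate for this session only
ALTER SESSION SET `planner.filter.min_selectivity_estimate_factor` = 0.5;

-- Inspect the current planner.filter settings; status reads CHANGED
-- for options that differ from their defaults
SELECT name, type, float_val, status
FROM sys.options
WHERE name LIKE 'planner.filter%';
```

Session-level settings revert when the connection closes, so they are the safer way to experiment before changing a system-wide default.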

[10/17] drill git commit: updates for Drill 1.8 release - release notes, blog, version.json

Posted by br...@apache.org.
updates for Drill 1.8 release - release notes, blog, version.json


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/21c41f55
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/21c41f55
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/21c41f55

Branch: refs/heads/gh-pages
Commit: 21c41f5507c0b56c0045cad4f1910aaddf64bb2b
Parents: bb11857
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Mon Aug 8 14:02:11 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Mon Aug 8 14:02:11 2016 -0700

----------------------------------------------------------------------
 _data/version.json                           |  10 +-
 _docs/rn/003-1.8.0-rn.md                     | 106 ++++++++++++++++++++++
 blog/_posts/2016-08-15-drill-1.8-released.md |  31 +++++++
 3 files changed, 142 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/21c41f55/_data/version.json
----------------------------------------------------------------------
diff --git a/_data/version.json b/_data/version.json
index 90a8474..72d2ff0 100644
--- a/_data/version.json
+++ b/_data/version.json
@@ -1,7 +1,7 @@
 {
-  "display_version": "1.7",
-  "full_version": "1.7.0",
-  "release_date": "June 28, 2016",
-  "blog_post":"/blog/2016/06/28/drill-1.7-released",
-  "release_notes": "https://drill.apache.org/docs/apache-drill-1-7-0-release-notes/"
+  "display_version": "1.8",
+  "full_version": "1.8.0",
+  "release_date": "August 15, 2016",
+  "blog_post":"/blog/2016/08/15/drill-1.8-released",
+  "release_notes": "https://drill.apache.org/docs/apache-drill-1-8-0-release-notes/"
 }

http://git-wip-us.apache.org/repos/asf/drill/blob/21c41f55/_docs/rn/003-1.8.0-rn.md
----------------------------------------------------------------------
diff --git a/_docs/rn/003-1.8.0-rn.md b/_docs/rn/003-1.8.0-rn.md
new file mode 100644
index 0000000..65de9a2
--- /dev/null
+++ b/_docs/rn/003-1.8.0-rn.md
@@ -0,0 +1,106 @@
+---
+title: "Apache Drill 1.8.0 Release Notes"
+parent: "Release Notes"
+---
+
+**Release date:**  August 15, 2016
+
+Today, we're happy to announce the availability of Drill 1.8.0. You can download it [here](https://drill.apache.org/download/).
+
+This release provides metadata cache pruning, support for the IF EXISTS parameter with the DROP TABLE and DROP VIEW commands, support for the DESCRIBE SCHEMA command, multi-byte delimiter support, new parameters for filter selectivity estimates, and the following bug fixes and improvements:  
+
+    
+<h2>        Sub-task
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4560'>DRILL-4560</a>] -         ZKClusterCoordinator does not call DrillbitStatusListener.drillbitRegistered for new bits
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4728'>DRILL-4728</a>] -         Add support for new metadata fetch APIs
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4729'>DRILL-4729</a>] -         Add support for prepared statement implementation on server side
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4730'>DRILL-4730</a>] -         Update JDBC DatabaseMetaData implementation to use new Metadata APIs
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4732'>DRILL-4732</a>] -         Update JDBC driver to use the new prepared statement APIs on DrillClient
+</li>
+</ul>
+                            
+<h2>        Bug
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-3726'>DRILL-3726</a>] -         Drill is not properly interpreting CRLF (0d0a). CR gets read as content.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4147'>DRILL-4147</a>] -         Union All operator runs in a single fragment
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4175'>DRILL-4175</a>] -         IOBE may occur in Calcite RexProgramBuilder when queries are submitted concurrently
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4341'>DRILL-4341</a>] -         Fails to parse string literals containing escaped quotes
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4574'>DRILL-4574</a>] -         Avro Plugin: Flatten does not work correctly on record items
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4658'>DRILL-4658</a>] -         cannot specify tab as a fieldDelimiter in table function
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4664'>DRILL-4664</a>] -         ScanBatch.isNewSchema() returns wrong result for map datatype
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4665'>DRILL-4665</a>] -         Partition pruning not working for hive partitioned table with &#39;LIKE&#39; and &#39;=&#39; filter
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4707'>DRILL-4707</a>] -         Conflicting columns names under case-insensitive policy lead to either memory leak or incorrect result
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4715'>DRILL-4715</a>] -         Java compilation error for a query with large number of expressions
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4743'>DRILL-4743</a>] -         HashJoin&#39;s not fully parallelized in query plan
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4746'>DRILL-4746</a>] -         Verification Failures (Decimal values) in drill&#39;s regression tests
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4759'>DRILL-4759</a>] -         Drill throwing array index out of bound exception when reading a parquet file written by map reduce program.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4768'>DRILL-4768</a>] -         Drill may leak hive meta store connection if hive meta store client call hits error
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4783'>DRILL-4783</a>] -         Flatten on CONVERT_FROM fails with ClassCastException if resultset is empty
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4785'>DRILL-4785</a>] -         Limit 0 queries regressed in Drill 1.7.0 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4794'>DRILL-4794</a>] -         Regression: Wrong result for query with disjunctive partition filters
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4801'>DRILL-4801</a>] -         Setting extractHeader attribute for CSV format does not propagate to all drillbits 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4816'>DRILL-4816</a>] -         sqlline -f failed to read the query file
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4825'>DRILL-4825</a>] -         Wrong data with UNION ALL when querying different sub-directories under the same table
+</li>
+</ul>
+                        
+<h2>        Improvement
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-2330'>DRILL-2330</a>] -         Add support for nested aggregate expressions for window aggregates
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-3149'>DRILL-3149</a>] -         TextReader should support multibyte line delimiters
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-3710'>DRILL-3710</a>] -         Make the 20 in-list optimization configurable
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4530'>DRILL-4530</a>] -         Improve metadata cache performance for queries with single partition 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4751'>DRILL-4751</a>] -         Remove dumpcat script from Drill distribution
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4752'>DRILL-4752</a>] -         Remove submit_plan script from Drill distribution
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4786'>DRILL-4786</a>] -         Improve metadata cache performance for queries with multiple partitions
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4792'>DRILL-4792</a>] -         Include session options used for a query as part of the profile
+</li>
+</ul>
+            
+<h2>        New Feature
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4514'>DRILL-4514</a>] -         Add describe schema &lt;schema_name&gt; command
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4673'>DRILL-4673</a>] -         Implement &quot;DROP TABLE IF EXISTS&quot; for drill to prevent FAILED status on command return
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4714'>DRILL-4714</a>] -         Add metadata and prepared statement APIs to DrillClient&lt;-&gt;Drillbit interface
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4819'>DRILL-4819</a>] -         Update MapR version to 5.2.0
+</li>
+</ul>
+                                                                   
\ No newline at end of file
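The DROP TABLE IF EXISTS and DROP VIEW IF EXISTS behavior listed under New Feature above (DRILL-4673) can be sketched as follows; the workspace and object names are hypothetical:

```sql
-- Before 1.8, dropping a missing object returned a FAILED status;
-- with IF EXISTS, Drill reports that the object does not exist and succeeds
DROP TABLE IF EXISTS dfs.tmp.`example_table`;
DROP VIEW  IF EXISTS dfs.tmp.`example_view`;
```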

http://git-wip-us.apache.org/repos/asf/drill/blob/21c41f55/blog/_posts/2016-08-15-drill-1.8-released.md
----------------------------------------------------------------------
diff --git a/blog/_posts/2016-08-15-drill-1.8-released.md b/blog/_posts/2016-08-15-drill-1.8-released.md
new file mode 100644
index 0000000..2eac1fa
--- /dev/null
+++ b/blog/_posts/2016-08-15-drill-1.8-released.md
@@ -0,0 +1,31 @@
+---
+layout: post
+title: "Drill 1.8 Released"
+code: drill-1.8-released
+excerpt: Apache Drill 1.8's highlights are&#58; metadata cache pruning, IF EXISTS support, DESCRIBE SCHEMA command, multi-byte delimiter support, and new parameters for filter selectivity estimates.
+authors: ["bbevens"]
+---
+
+Today, we're happy to announce the availability of Drill 1.8.0. You can download it [here](https://drill.apache.org/download/).
+
+The release provides the following bug fixes and improvements:
+
+## Metadata Cache Pruning 
+Drill now applies partition pruning to the metadata cache file. See [Partition Pruning Introduction](https://drill.apache.org/docs/partition-pruning-introduction/) and [Optimizing Parquet Metadata Reading](https://drill.apache.org/docs/optimizing-parquet-metadata-reading/). 
+
+## IF EXISTS Support  
+You can include the new IF EXISTS parameter with the DROP TABLE and DROP VIEW commands to prevent Drill from returning error messages when a table or view does not exist. See [DROP TABLE](https://drill.apache.org/docs/drop-table/) and [DROP VIEW](https://drill.apache.org/docs/drop-view/).
+
+
+## DESCRIBE SCHEMA Command 
+Drill now supports the DESCRIBE SCHEMA command which provides schema properties for storage plugin configurations and workspaces. See [DESCRIBE](https://drill.apache.org/docs/describe/).  
+
+## Multi-Byte Delimiter Support  
+Drill now supports multi-byte delimiters for text files, such as \r\n. See [List of Attributes and Definitions](https://drill.apache.org/docs/plugin-configuration-basics/#list-of-attributes-and-definitions).  
+
+## Filter Selectivity Estimate Parameters  
+New parameters set the minimum filter selectivity estimate to increase the parallelization of the major fragment performing a join. See [System Options](https://drill.apache.org/docs/configuration-options-introduction/#system-options). 
+ 
+
+A complete list of JIRAs resolved in the 1.8.0 release can be found [here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334768&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7C0721f2a625165c1e2cc6c0d2cfb41b437cc68769%7Clin).
+

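The DESCRIBE SCHEMA command highlighted in the blog post above can be sketched as follows; `dfs.tmp` is an assumed workspace from the default `dfs` storage plugin configuration:

```sql
-- New in 1.8: returns the properties of a storage plugin
-- configuration or workspace as JSON
DESCRIBE SCHEMA dfs.tmp;
```

The output includes properties such as the workspace location and default input format, which makes it easy to confirm which configuration a query will run against.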

[05/17] drill git commit: add HashJoin's not fully parallelized in query plan - new parameters for DRILL-4743

Posted by br...@apache.org.
add HashJoin's not fully parallelized in query plan - new parameters for DRILL-4743


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/baa4eddb
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/baa4eddb
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/baa4eddb

Branch: refs/heads/gh-pages
Commit: baa4eddb7c0f37b5ac06360e1cb2f4ac75c264ed
Parents: d2d9f43
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Thu Aug 4 15:49:28 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Thu Aug 4 15:49:28 2016 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md   | 152 ++++++++++---------
 1 file changed, 77 insertions(+), 75 deletions(-)
----------------------------------------------------------------------



[16/17] drill git commit: updates to docs for Drill 1.8 release

Posted by br...@apache.org.
updates to docs for Drill 1.8 release


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/22d454b1
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/22d454b1
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/22d454b1

Branch: refs/heads/gh-pages
Commit: 22d454b1d2b7f7dec8c1ef521966018398999fae
Parents: 1c97d7c
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Tue Aug 30 15:16:38 2016 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Tue Aug 30 15:16:38 2016 -0700

----------------------------------------------------------------------
 .../047-installing-drill-on-the-cluster.md      |  8 ++---
 ...20-installing-drill-on-linux-and-mac-os-x.md |  8 ++---
 .../040-installing-drill-on-windows.md          |  4 +--
 _docs/rn/003-1.8.0-rn.md                        | 31 +++++++++++++++++---
 _docs/tutorials/020-drill-in-10-minutes.md      | 10 +++----
 5 files changed, 42 insertions(+), 19 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/22d454b1/_docs/install/047-installing-drill-on-the-cluster.md
----------------------------------------------------------------------
diff --git a/_docs/install/047-installing-drill-on-the-cluster.md b/_docs/install/047-installing-drill-on-the-cluster.md
index ee08ae5..7123416 100644
--- a/_docs/install/047-installing-drill-on-the-cluster.md
+++ b/_docs/install/047-installing-drill-on-the-cluster.md
@@ -1,13 +1,13 @@
 ---
 title: "Installing Drill on the Cluster"
-date: 2016-06-29 02:15:53 UTC
+date: 2016-08-30 22:16:39 UTC
 parent: "Installing Drill in Distributed Mode"
 ---
 You install Drill on nodes in the cluster, configure a cluster ID, and add Zookeeper information, as described in the following steps:
 
-  1. Download the latest version of Apache Drill [here](http://www.apache.org/dyn/closer.lua?filename=drill/drill-1.7.0/apache-drill-1.7.0.tar.gz&action=download) or from the [Apache Drill mirror site](http://www.apache.org/dyn/closer.cgi/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz) with the command appropriate for your system:  
-       * `wget http://apache.mesi.com.ar/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz`  
-       * `curl -o apache-drill-1.7.0.tar.gz http://apache.mesi.com.ar/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz`  
+  1. Download the latest version of Apache Drill [here](http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz) or from the [Apache Drill mirror site](http://www.apache.org/dyn/closer.cgi/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz) with the command appropriate for your system:  
+       * `wget http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz`  
+       * `curl -o apache-drill-1.8.0.tar.gz http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz`  
   2. Extract the tarball to the directory of your choice, such as `/opt`:  
   `tar -xzvf apache-drill-<version>.tar.gz`
   3. In `drill-override.conf,` use the Drill `cluster ID`, and provide ZooKeeper host names and port numbers to configure a connection to your ZooKeeper quorum.  

http://git-wip-us.apache.org/repos/asf/drill/blob/22d454b1/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md b/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md
index 8a18f75..f75451e 100644
--- a/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md
+++ b/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md
@@ -1,6 +1,6 @@
 ---
 title: "Installing Drill on Linux and Mac OS X"
-date: 2016-06-29 02:15:53 UTC
+date: 2016-08-30 22:16:39 UTC
 parent: "Installing Drill in Embedded Mode"
 ---
 First, check that you [meet the prerequisites]({{site.baseurl}}/docs/embedded-mode-prerequisites), and then install Apache Drill on Linux or Mac OS X:
@@ -8,9 +8,9 @@ First, check that you [meet the prerequisites]({{site.baseurl}}/docs/embedded-mo
 Complete the following steps to install Drill:  
 
 1. In a terminal window, change to the directory where you want to install Drill.  
-2. Download the latest version of Apache Drill [here](http://www.apache.org/dyn/closer.lua?filename=drill/drill-1.7.0/apache-drill-1.7.0.tar.gz&action=download) or from the [Apache Drill mirror site](http://www.apache.org/dyn/closer.cgi/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz) with the command appropriate for your system:  
-       * `wget http://apache.mesi.com.ar/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz`  
-       * `curl -o apache-drill-1.7.0.tar.gz http://apache.mesi.com.ar/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz`  
+2. Download the latest version of Apache Drill [here](http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz) or from the [Apache Drill mirror site](http://www.apache.org/dyn/closer.cgi/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz) with the command appropriate for your system:  
+       * `wget http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz`  
+       * `curl -o apache-drill-1.8.0.tar.gz http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz`  
 3. Copy the downloaded file to the directory where you want to install Drill.  
 4. Extract the contents of the Drill `.tar.gz` file. Use sudo only if necessary:  
 `tar -xvzf <.tar.gz file name>`  
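
Steps 3 and 4 above can be sketched in shell. Since no network access is assumed here, a locally created archive stands in for the downloaded `apache-drill-1.8.0.tar.gz`, and the archive contents are placeholders:

```shell
# Stand-in for the downloaded archive: build a tiny apache-drill-1.8.0
# tree locally and pack it, so the extract step below has something real.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p apache-drill-1.8.0/bin
echo 'echo drill' > apache-drill-1.8.0/bin/drill-embedded
tar -czf apache-drill-1.8.0.tar.gz apache-drill-1.8.0
rm -r apache-drill-1.8.0

# Step 4: extract the contents of the Drill .tar.gz file.
tar -xzf apache-drill-1.8.0.tar.gz

# The extraction creates the installation directory with the Drill scripts.
ls apache-drill-1.8.0/bin
```

With a real download, the `wget` or `curl` command from step 2 replaces the archive-creation lines, and the same `tar -xzf` extraction applies.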

http://git-wip-us.apache.org/repos/asf/drill/blob/22d454b1/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md b/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md
index 53e0fda..3d4177c 100644
--- a/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md
+++ b/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md
@@ -1,11 +1,11 @@
 ---
 title: "Installing Drill on Windows"
-date: 2016-06-29 02:15:53 UTC
+date: 2016-08-30 22:16:40 UTC
 parent: "Installing Drill in Embedded Mode"
 ---
 First, check that you [meet the prerequisites]({{site.baseurl}}/docs/embedded-mode-prerequisites), including setting the JAVA_HOME environment variable, and then install Drill. Currently, Drill supports 64-bit Windows only. Complete the following steps to install Drill:
 
-1. Download the latest version of Apache Drill [here](http://www.apache.org/dyn/closer.lua?filename=drill/drill-1.7.0/apache-drill-1.7.0.tar.gz&action=download).
+1. Download the latest version of Apache Drill [here](http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz).
 2. Move the downloaded file to the directory where you want to install Drill.
 3. Unzip the GZ file using a third-party tool. If the tool you use does not unzip the underlying TAR file as well as the GZ file, perform a second unzip to extract the Drill software. The extraction process creates the installation directory containing the Drill software. 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/22d454b1/_docs/rn/003-1.8.0-rn.md
----------------------------------------------------------------------
diff --git a/_docs/rn/003-1.8.0-rn.md b/_docs/rn/003-1.8.0-rn.md
index 449b062..c577ad5 100644
--- a/_docs/rn/003-1.8.0-rn.md
+++ b/_docs/rn/003-1.8.0-rn.md
@@ -3,7 +3,7 @@ title: "Apache Drill 1.8.0 Release Notes"
 parent: "Release Notes"
 ---
 
-**Release date:**  August 15, 2016
+**Release date:**  August 30, 2016
 
 Today, we're happy to announce the availability of Drill 1.8.0. You can download it [here](https://drill.apache.org/download/).
 
@@ -19,14 +19,37 @@ This release of Drill provides the following new features:
 ## Configuration and Launch Script Changes 
 This release of Drill also includes the following changes to the configuration and launch scripts: 
 
-- Default Drill settings now reside in `$DRILL_HOME/bin/drill-config.sh`. You can override many settings by creating an entry in `$DRILL_HOME/conf/drill-env.sh`. The file includes descriptions of the options that you can set.  ([DRILL-4581](https://issues.apache.org/jira/browse/DRILL-4581))  
+- The `$DRILL_HOME/conf/drill-env.sh` file has been simplified. Default Drill settings have moved out of this file and now reside in `$DRILL_HOME/bin/drill-config.sh`. The `drill-env.sh` file now ships with descriptions of the options that you can set. You can override many settings by creating an entry in `$DRILL_HOME/conf/drill-env.sh`.  ([DRILL-4581](https://issues.apache.org/jira/browse/DRILL-4581))  
 - Due to issues at high concurrency, the native Linux epoll transport is now disabled by default. ([DRILL-4623](https://issues.apache.org/jira/browse/DRILL-4623))  
  
-If you upgrade to Drill 1.8, you must merge your custom settings with the latest settings in the `drill-override.conf` and `drill-env.sh` file that ships with Drill. As of Drill 1.8, all Drill defaults reside in the Drill scripts. The `drill-env.sh` script contains only your customizations. When you merge your existing `drill-env.sh` file with the 1.8 version of the file, you can remove all of the settings in your file except for those that you created yourself. Consult the original `drill-env.sh` file from the prior Drill release to determine which settings you can remove.
+Changes to the Drill launch scripts provide a new option that simplifies the Drill upgrade for Drill 1.8 and later. The new scripts support a "site directory" that holds site-specific files separate from the Drill product directory. The site directory is a simple extension of the config directory in previous Drill releases. You can add a "jars" subdirectory to the config directory for your custom jars instead of storing them in $DRILL_HOME. You can even add native libraries in the new "lib" directory and Drill automatically loads them. 
 
+Example:  
 
+       /my/conf/dir
+       |- drill-override.conf
+       |- drill-env.sh
+       |- jars
+          |- myudf.jar
+       |- lib
+          |- mysecurity.so
 
-The following sections list additional bug fixes and improvements:
+To use the site directory:
+
+       drillbit.sh --site /my/conf/dir start
+
+Note that `--config` still works as well.
+
+You can set an environment variable for the directory:
+
+       export DRILL_CONF_DIR=/my/conf/dir
+       drillbit.sh start
+
+The site directory works with `drillbit.sh` and the various Drill client scripts.
+
+To upgrade Drill using the new site directory, just delete the old Drill product directory and expand the Drill archive to create a new one. There is no need to back up and merge files each time you upgrade because the site files are not affected by an upgrade. You can keep different site directories for different purposes: one for development, another for test, and so on. You can also run multiple Drill clusters from a single Drill installation by creating a site directory for each Drill cluster and configuring each cluster to use its own site directory. 
+
+The following sections list additional bug fixes and improvements:  
 
     
 <h2>        Sub-task

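The site-directory layout and commands described in the release notes above can be sketched as follows. Here `/tmp/drill-site` stands in for `/my/conf/dir`, and `myudf.jar` and `mysecurity.so` are the placeholder names from the example:

```shell
# Build the site directory layout from the release notes:
#   drill-override.conf, drill-env.sh, plus jars/ and lib/ subdirectories.
site=/tmp/drill-site
mkdir -p "$site/jars" "$site/lib"
touch "$site/drill-override.conf" "$site/drill-env.sh"
touch "$site/jars/myudf.jar"      # custom UDF jar (placeholder)
touch "$site/lib/mysecurity.so"   # native library (placeholder)

# Show the resulting tree.
find "$site" | sort

# A drillbit would then be started with the new --site option:
#   drillbit.sh --site /tmp/drill-site start
# or, equivalently, via the environment variable:
#   export DRILL_CONF_DIR=/tmp/drill-site
#   drillbit.sh start
```

Because an upgrade replaces only the product directory, this site directory survives unchanged from one Drill release to the next.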
http://git-wip-us.apache.org/repos/asf/drill/blob/22d454b1/_docs/tutorials/020-drill-in-10-minutes.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/020-drill-in-10-minutes.md b/_docs/tutorials/020-drill-in-10-minutes.md
index 2a02439..6259968 100644
--- a/_docs/tutorials/020-drill-in-10-minutes.md
+++ b/_docs/tutorials/020-drill-in-10-minutes.md
@@ -1,6 +1,6 @@
 ---
 title: "Drill in 10 Minutes"
-date: 2016-06-29 02:15:54 UTC
+date: 2016-08-30 22:16:41 UTC
 parent: "Tutorials"
 description: Get started with Drill in 10 minutes or less.
 ---
@@ -45,9 +45,9 @@ The output looks something like this:
 Complete the following steps to install Drill:  
 
 1. In a terminal window, change to the directory where you want to install Drill.  
-2. Download the latest version of Apache Drill [here](http://www.apache.org/dyn/closer.lua?filename=drill/drill-1.7.0/apache-drill-1.7.0.tar.gz&action=download) or from the [Apache Drill mirror site](http://www.apache.org/dyn/closer.cgi/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz) with the command appropriate for your system:  
-       * `wget http://apache.mesi.com.ar/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz`  
-       * `curl -o apache-drill-1.7.0.tar.gz http://apache.mesi.com.ar/drill/drill-1.7.0/apache-drill-1.7.0.tar.gz`   
+2. Download the latest version of Apache Drill [here](http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz) or from the [Apache Drill mirror site](http://www.apache.org/dyn/closer.cgi/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz) with the command appropriate for your system:  
+       * `wget http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz`  
+       * `curl -o apache-drill-1.8.0.tar.gz http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz`   
 3. Copy the downloaded file to the directory where you want to install Drill.  
 4. Extract the contents of the Drill .tar.gz file. Use `sudo` if necessary:  
 `tar -xvzf <.tar.gz file name>`  
@@ -75,7 +75,7 @@ Start Drill in embedded mode using the `drill-embedded` command:
 
 You can install Drill on Windows. First, set the JAVA_HOME environment variable, and then install Drill. Complete the following steps to install Drill:
 
-1. Download the latest version of Apache Drill [here](http://www.apache.org/dyn/closer.lua?filename=drill/drill-1.6.0/apache-drill-1.6.0.tar.gz&action=download) or go to the [Apache Drill mirror site](http://www.apache.org/dyn/closer.cgi/drill/drill-1.6.0/apache-drill-1.6.0.tar.gz).  
+1. Download the latest version of Apache Drill [here](http://apache.mirrors.hoobly.com/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz) or from the [Apache Drill mirror site](http://www.apache.org/dyn/closer.cgi/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz).  
 2. Move the `apache-drill-<version>.tar.gz` file to a directory where you want to install Drill.  
 3. Unzip the `TAR.GZ` file using a third-party tool. If the tool you use does not unzip the TAR file as well as the `TAR.GZ` file, unzip the `apache-drill-<version>.tar` to extract the Drill software. The extraction process creates the installation directory named apache-drill-<version> containing the Drill software.