Posted to commits@drill.apache.org by br...@apache.org on 2015/07/03 01:31:54 UTC

drill git commit: add info from RN, wordsmith

Repository: drill
Updated Branches:
  refs/heads/gh-pages f0a7565fb -> 1b5fb2ee9


add info from RN, wordsmith

1.1 update

fix cosmetics

DRILL-3438

Bridget's 1.1 updates

fix links

remove angle brackets

fix formatting problem

fix format

fix file name

minor edit

DRILL-3246

Bridget's 1.1 updates

get rid of 1.1.0 rn

Bridget's minor edits

add unix_timestamp function

fix link, minor edit

Hive 1.0 support

1.1 updates

Bridget's 1.1 update


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/1b5fb2ee
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/1b5fb2ee
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/1b5fb2ee

Branch: refs/heads/gh-pages
Commit: 1b5fb2ee9f063a7918af865dc48df0eb9278c489
Parents: f0a7565
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Tue Jun 30 16:14:14 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu Jul 2 16:22:09 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 71 +++++++++++++---
 .../120-configuring-the-drill-shell.md          |  2 +-
 .../070-hive-storage-plugin.md                  |  6 +-
 .../060-text-files-csv-tsv-psv.md               | 89 ++++++++++++++++++++
 .../020-develop-a-simple-function.md            |  2 +-
 .../030-developing-an-aggregate-function.md     |  2 +-
 .../sql-commands/030-create-table-as.md         |  2 +-
 .../sql-commands/035-partition-by-clause.md     |  2 +-
 .../sql-commands/050-create-view.md             |  2 +-
 .../sql-commands/087-union-set-operator.md      | 11 +--
 .../030-date-time-functions-and-arithmetic.md   | 72 +++++++++++++---
 11 files changed, 225 insertions(+), 36 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 997d36d..5739608 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -2084,14 +2084,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Develop Custom Functions", 
-                    "next_url": "/docs/develop-custom-functions/", 
+                    "next_title": "Text Files: CSV, TSV, PSV", 
+                    "next_url": "/docs/text-files-csv-tsv-psv/", 
                     "parent": "Data Sources and File Formats", 
                     "previous_title": "Parquet Format", 
                     "previous_url": "/docs/parquet-format/", 
                     "relative_path": "_docs/data-sources-and-file-formats/050-json-data-model.md", 
                     "title": "JSON Data Model", 
                     "url": "/docs/json-data-model/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "Data Sources and File Formats", 
+                            "url": "/docs/data-sources-and-file-formats/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Develop Custom Functions", 
+                    "next_url": "/docs/develop-custom-functions/", 
+                    "parent": "Data Sources and File Formats", 
+                    "previous_title": "JSON Data Model", 
+                    "previous_url": "/docs/json-data-model/", 
+                    "relative_path": "_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md", 
+                    "title": "Text Files: CSV, TSV, PSV", 
+                    "url": "/docs/text-files-csv-tsv-psv/"
                 }
             ], 
             "next_title": "Data Sources and File Formats Introduction", 
@@ -2513,8 +2530,8 @@
             "next_title": "Develop Custom Functions Introduction", 
             "next_url": "/docs/develop-custom-functions-introduction/", 
             "parent": "", 
-            "previous_title": "JSON Data Model", 
-            "previous_url": "/docs/json-data-model/", 
+            "previous_title": "Text Files: CSV, TSV, PSV", 
+            "previous_url": "/docs/text-files-csv-tsv-psv/", 
             "relative_path": "_docs/100-develop-custom-functions.md", 
             "title": "Develop Custom Functions", 
             "url": "/docs/develop-custom-functions/"
@@ -4261,8 +4278,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Develop Custom Functions", 
-            "next_url": "/docs/develop-custom-functions/", 
+            "next_title": "Text Files: CSV, TSV, PSV", 
+            "next_url": "/docs/text-files-csv-tsv-psv/", 
             "parent": "Data Sources and File Formats", 
             "previous_title": "Parquet Format", 
             "previous_url": "/docs/parquet-format/", 
@@ -10385,6 +10402,23 @@
             "title": "Testing the ODBC Connection", 
             "url": "/docs/testing-the-odbc-connection/"
         }, 
+        "Text Files: CSV, TSV, PSV": {
+            "breadcrumbs": [
+                {
+                    "title": "Data Sources and File Formats", 
+                    "url": "/docs/data-sources-and-file-formats/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "Develop Custom Functions", 
+            "next_url": "/docs/develop-custom-functions/", 
+            "parent": "Data Sources and File Formats", 
+            "previous_title": "JSON Data Model", 
+            "previous_url": "/docs/json-data-model/", 
+            "relative_path": "_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md", 
+            "title": "Text Files: CSV, TSV, PSV", 
+            "url": "/docs/text-files-csv-tsv-psv/"
+        }, 
         "Troubleshooting": {
             "breadcrumbs": [], 
             "children": [], 
@@ -15097,14 +15131,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Develop Custom Functions", 
-                    "next_url": "/docs/develop-custom-functions/", 
+                    "next_title": "Text Files: CSV, TSV, PSV", 
+                    "next_url": "/docs/text-files-csv-tsv-psv/", 
                     "parent": "Data Sources and File Formats", 
                     "previous_title": "Parquet Format", 
                     "previous_url": "/docs/parquet-format/", 
                     "relative_path": "_docs/data-sources-and-file-formats/050-json-data-model.md", 
                     "title": "JSON Data Model", 
                     "url": "/docs/json-data-model/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "Data Sources and File Formats", 
+                            "url": "/docs/data-sources-and-file-formats/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Develop Custom Functions", 
+                    "next_url": "/docs/develop-custom-functions/", 
+                    "parent": "Data Sources and File Formats", 
+                    "previous_title": "JSON Data Model", 
+                    "previous_url": "/docs/json-data-model/", 
+                    "relative_path": "_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md", 
+                    "title": "Text Files: CSV, TSV, PSV", 
+                    "url": "/docs/text-files-csv-tsv-psv/"
                 }
             ], 
             "next_title": "Data Sources and File Formats Introduction", 
@@ -15225,8 +15276,8 @@
             "next_title": "Develop Custom Functions Introduction", 
             "next_url": "/docs/develop-custom-functions-introduction/", 
             "parent": "", 
-            "previous_title": "JSON Data Model", 
-            "previous_url": "/docs/json-data-model/", 
+            "previous_title": "Text Files: CSV, TSV, PSV", 
+            "previous_url": "/docs/text-files-csv-tsv-psv/", 
             "relative_path": "_docs/100-develop-custom-functions.md", 
             "title": "Develop Custom Functions", 
             "url": "/docs/develop-custom-functions/"

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/configure-drill/120-configuring-the-drill-shell.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/120-configuring-the-drill-shell.md b/_docs/configure-drill/120-configuring-the-drill-shell.md
index 1597fa0..4a0e861 100644
--- a/_docs/configure-drill/120-configuring-the-drill-shell.md
+++ b/_docs/configure-drill/120-configuring-the-drill-shell.md
@@ -2,7 +2,7 @@
 title: "Configuring the Drill Shell"
 parent: "Configure Drill"
 ---
-After [starting the Drill shell]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x/), you can type queries on the shell command line. At the Drill shell command prompt, typing "help" lists the configuration and other options you can set to manage shell functionality. Apache Drill 1.0 formats the resultset output tables for readability if possible. In this release, columns having 70 characters or more cannot be formatted. This document formats all output for readability and example purposes.
+After [starting the Drill shell]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x/), you can type queries on the shell command line. At the Drill shell command prompt, typing "help" lists the configuration and other options you can set to manage shell functionality. Apache Drill 1.0 and later formats the result set output tables for readability when possible. Currently, columns having 70 or more characters cannot be formatted. This document formats all output for readability and example purposes.
 
 Formatting tables takes time, which you might notice when running a huge query with the Drill shell's default `outputFormat` setting, `table`. You can set a more performant output format, such as `csv`, as shown in the [examples]({{site.baseurl}}/docs/configuring-the-drill-shell/#examples-of-configuring-the-drill-shell). 
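+
+For example, to switch to CSV output before running a large query and back to table formatting afterward (a sketch; `!set` is sqlline command syntax, which the Drill shell uses, and the query is illustrative):
+
+    0: jdbc:drill:zk=local> !set outputFormat csv
+    0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json` LIMIT 2;
+    0: jdbc:drill:zk=local> !set outputFormat table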
 

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/connect-a-data-source/070-hive-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/070-hive-storage-plugin.md b/_docs/connect-a-data-source/070-hive-storage-plugin.md
index f88b002..0e30fd8 100644
--- a/_docs/connect-a-data-source/070-hive-storage-plugin.md
+++ b/_docs/connect-a-data-source/070-hive-storage-plugin.md
@@ -8,7 +8,7 @@ storage plugin instance for a Hive data source, provide a unique name for the
 instance, and identify the type as “`hive`”. You must also provide the
 metastore connection information.
 
-Drill supports Hive 0.13. To access Hive tables
+Drill 1.0 supports Hive 0.13. Drill 1.1 supports Hive 1.0. To access Hive tables
 using custom SerDes or InputFormat/OutputFormat, all nodes running Drillbits
 must have the SerDes or InputFormat/OutputFormat `JAR` files in the 
 `<drill_installation_directory>/jars/3rdparty` folder.
@@ -50,13 +50,13 @@ can [query Hive tables]({{ site.baseurl }}/docs/querying-hive/).
 
 ## Hive Embedded Metastore
 
-In this configuration, the Hive metastore is embedded within the Drill process. Provide the metastore database configuration settings in the Drill Web UI. Before you register Hive, verify that the driver you use to connect to the Hive metastore is in the Drill classpath located in `/<drill installation dirctory>/lib/.` If the driver is not there, copy the driver to `/<drill
+In this configuration, the Hive metastore is embedded within the Drill process. Provide the metastore database configuration settings in the Drill Web UI. Before you register Hive, verify that the driver you use to connect to the Hive metastore is in the Drill classpath, located in `/<drill installation directory>/lib/`. If the driver is not there, copy the driver to `/<drill
 installation directory>/lib` on the Drill node. For more information about storage types and configurations, refer to ["Hive Metastore Administration"](https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin).
 
 To register an embedded Hive metastore with Drill, complete the following
 steps:
 
-  1. Navigate to `[http://localhost:8047](http://localhost:8047/)`, and select the **Storage** tab
+  1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab.
  2. In the disabled storage plugins section, click **Update** next to the `hive` instance.
   3. In the configuration window, add the database configuration settings.
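+
+     For example, a minimal embedded-metastore configuration might look like the following sketch (the Derby connection URL and warehouse directory are placeholder values to adapt to your environment):
+
+        {
+          "type": "hive",
+          "enabled": true,
+          "configProps": {
+            "hive.metastore.uris": "",
+            "javax.jdo.option.ConnectionURL": "jdbc:derby:;databaseName=/tmp/drill_hive_db;create=true",
+            "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
+            "fs.default.name": "file:///",
+            "hive.metastore.sasl.enabled": "false"
+          }
+        }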
 

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md b/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md
new file mode 100644
index 0000000..268d037
--- /dev/null
+++ b/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md
@@ -0,0 +1,89 @@
+---
+title: "Text Files: CSV, TSV, PSV"
+parent: "Data Sources and File Formats"
+---
+The section ["Plugin Configuration Basics"]({{site.baseurl}}/docs/plugin-configuration-basics) covers attributes that you configure for use with a CSV, TSV, PSV (comma-, tab-, pipe-separated) values text file. This section presents examples of how to use those attributes and [tips for performant querying]({{site.baseurl}}/docs/text-files-csv-tsv-psv/#tips-for-performant-querying) of these text files. 
+
+## Managing Headers in Text Files
+In the storage plugin configuration, you set attributes on the text reader format configuration. This section presents examples of using the following attributes defined in ["List of Attributes and Definitions"]({{site.baseurl}}/docs/plugin-configuration-basics/#list-of-attributes-and-definitions):
+
+* String lineDelimiter = "\n";  
+  One or more characters used to denote a new record. Allows reading files with Windows line endings.  
+* char fieldDelimiter = ',';  
+  A single character used to separate each value.  
+* char quote = '"';  
+  A single character used to start/end a quoted value.  
+* char escape = '"';  
+  A single character used to escape a quote inside of a value.  
+* char comment = '#';  
+  A single character used to denote a comment line.  
+* boolean skipFirstLine = false;  
+  Set to true to avoid reading headers as data.  
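+
+For example, a `csv` format configuration that sets these attributes explicitly might look like the following sketch (the `lineDelimiter`, `quote`, and `escape` key names are assumptions that mirror the field names above; `delimiter`, `comment`, and `skipFirstLine` appear in the examples below):
+
+    "csv": {
+      "type": "text",
+      "extensions": [
+        "csv"
+      ],
+      "lineDelimiter": "\n",
+      "quote": "\"",
+      "escape": "\"",
+      "comment": "#",
+      "skipFirstLine": false,
+      "delimiter": ","
+    },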
+
+You can deal with a mix of text files with and without headers either by creating a format plugin in each of two separate storage plugins or by creating two format plugins within the same storage plugin. The former approach is typically easier than the latter.
+
+### Creating Two Separate Format Plugins
+Format plugins are associated with a particular storage plugin. Storage plugins define a root directory that Drill targets when using the storage plugin. You can define separate storage plugins for different root directories, and define each of the format attributes to match the files stored below that directory. All files can use the .csv extension.
+
+For example:
+
+Storage Plugin A
+
+    "csv": {
+      "type": "text",
+      "extensions": [
+        "csv"
+      ],
+      "delimiter": ","
+    },
+    . . .
+
+
+Storage Plugin B
+
+    "csv": {
+      "type": "text",
+      "extensions": [
+        "csv"
+      ],
+      "comment": "&",
+      "skipFirstLine": true,
+      "delimiter": ","
+    },
+
+### Creating Two Format Plugins within the Same Storage Plugin
+Give a different extension to files with a header and to files without a header, and use a storage plugin that looks something like the following example. This method requires renaming some files to use the csv2 extension.
+
+For example:
+
+    "csv": {
+      "type": "text",
+      "extensions": [
+        "csv"
+      ],
+      "delimiter": ","
+    },
+    "csv_with_header": {
+      "type": "text",
+      "extensions": [
+        "csv2"
+      ],
+      "comment": "&",
+      "skipFirstLine": true,
+      "delimiter": ","
+    },
+
+## Tips for Performant Querying
+
+Converting these files to another format, such as Parquet, using the CTAS command and a SELECT * statement is not recommended. Drill reads CSV, TSV, and PSV files into a list of
+VARCHARs, rather than individual columns. While Parquet supports lists and
+Drill reads them, the read path for complex data is not yet optimized. Instead, select data from particular columns using the [COLUMNS[n] syntax]({{site.baseurl}}/docs/querying-plain-text-files), and then assign meaningful column
+names using aliases. For example:
+
+    CREATE TABLE parquet_users AS SELECT CAST(COLUMNS[0] AS INT) AS user_id,
+    COLUMNS[1] AS username, CAST(COLUMNS[2] AS TIMESTAMP) AS registration_date
+    FROM `users.csv1`;
+
+Cast the VARCHAR data to INT, FLOAT, DATETIME, and so on. You get better performance reading fixed-width data than reading VARCHAR data. 
+
+Using a distributed file system, such as HDFS, instead of a local file system to query the files also improves performance because Drill currently does not split files on block boundaries.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/develop-custom-functions/020-develop-a-simple-function.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/020-develop-a-simple-function.md b/_docs/develop-custom-functions/020-develop-a-simple-function.md
index 4a4250c..cc0c899 100644
--- a/_docs/develop-custom-functions/020-develop-a-simple-function.md
+++ b/_docs/develop-custom-functions/020-develop-a-simple-function.md
@@ -15,7 +15,7 @@ function interface:
 		<dependency>
 		<groupId>org.apache.drill.exec</groupId>
 		<artifactId>drill-java-exec</artifactId>
-		<version>1.0.0</version>
+		<version>1.1.0</version>
 		</dependency>
 
   2. Create a class that implements the `DrillSimpleFunc` interface and identify the scope as `FunctionScope.SIMPLE`.
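+
+     For example, a minimal sketch of such a class (the function name `myaddone` and the int-to-int signature are illustrative, not part of the Drill API):
+
+		import org.apache.drill.exec.expr.DrillSimpleFunc;
+		import org.apache.drill.exec.expr.annotations.FunctionTemplate;
+		import org.apache.drill.exec.expr.annotations.Output;
+		import org.apache.drill.exec.expr.annotations.Param;
+		import org.apache.drill.exec.expr.holders.IntHolder;
+
+		@FunctionTemplate(name = "myaddone", scope = FunctionTemplate.FunctionScope.SIMPLE,
+		    nulls = FunctionTemplate.NullHandling.NULL_IF_NULL)
+		public class MyAddOne implements DrillSimpleFunc {
+		  @Param IntHolder in;     // input value for the current row
+		  @Output IntHolder out;   // value the function returns
+
+		  public void setup() {}   // one-time initialization; nothing needed here
+
+		  public void eval() {     // called once per row
+		    out.value = in.value + 1;
+		  }
+		}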

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/030-developing-an-aggregate-function.md b/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
index ac28d9e..45ecf3c 100644
--- a/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
+++ b/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
@@ -14,7 +14,7 @@ Complete the following steps to create an aggregate function:
 		<dependency>
 		<groupId>org.apache.drill.exec</groupId>
 		<artifactId>drill-java-exec</artifactId>
-		<version>1.0.0</version>
+		<version>1.1.0</version>
 		</dependency>
   2. Create a class that implements the `DrillAggFunc` interface and identify the scope as `FunctionTemplate.FunctionScope.POINT_AGGREGATE`.
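+
+     For example, a minimal sketch of an aggregate that counts its input values (the function name `mycount` is illustrative):
+
+		import org.apache.drill.exec.expr.DrillAggFunc;
+		import org.apache.drill.exec.expr.annotations.FunctionTemplate;
+		import org.apache.drill.exec.expr.annotations.Output;
+		import org.apache.drill.exec.expr.annotations.Param;
+		import org.apache.drill.exec.expr.annotations.Workspace;
+		import org.apache.drill.exec.expr.holders.BigIntHolder;
+		import org.apache.drill.exec.expr.holders.IntHolder;
+
+		@FunctionTemplate(name = "mycount", scope = FunctionTemplate.FunctionScope.POINT_AGGREGATE)
+		public class MyCount implements DrillAggFunc {
+		  @Param IntHolder in;            // current input value
+		  @Workspace BigIntHolder total;  // running state kept across rows
+		  @Output BigIntHolder out;       // final aggregate value
+
+		  public void setup() { total = new BigIntHolder(); }
+		  public void add() { total.value++; }               // called once per row
+		  public void output() { out.value = total.value; }  // emit the result
+		  public void reset() { total.value = 0; }           // restart for the next group
+		}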
 

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/sql-reference/sql-commands/030-create-table-as.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/030-create-table-as.md b/_docs/sql-reference/sql-commands/030-create-table-as.md
index e742eb1..8a21333 100644
--- a/_docs/sql-reference/sql-commands/030-create-table-as.md
+++ b/_docs/sql-reference/sql-commands/030-create-table-as.md
@@ -10,7 +10,7 @@ You can create tables in Drill by using the CTAS command.
 
 *name* is a unique directory name, optionally prefaced by a storage plugin name, such as dfs, and a workspace, such as tmp, using [dot notation]({{site.baseurl}}/docs/workspaces).  
 *column list* is an optional list of column names or aliases in the new table.  
-*query* is a SELECT statement that needs to include aliases for ambiguous column names, such as COLUMNS[0]. 
+*query* is a SELECT statement that needs to include aliases for ambiguous column names, such as COLUMNS[0]. Using SELECT * is [not recommended]({{site.baseurl}}/docs/text-files-csv-tsv-psv/#tips-for-performant-querying) when selecting CSV, TSV, and PSV data.
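+
+For example, the following sketch (the table name and file path are illustrative) shows aliases replacing the ambiguous COLUMNS[n] names:
+
+    CREATE TABLE dfs.tmp.sample_users AS
+    SELECT CAST(COLUMNS[0] AS INT) AS user_id, COLUMNS[1] AS username
+    FROM dfs.`/path/to/users.csv`;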
 
 You can use the [PARTITION BY]({{site.baseurl}}/docs/partition-by-clause) clause in a CTAS command.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/sql-reference/sql-commands/035-partition-by-clause.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/035-partition-by-clause.md b/_docs/sql-reference/sql-commands/035-partition-by-clause.md
index e472f0f..785d00d 100644
--- a/_docs/sql-reference/sql-commands/035-partition-by-clause.md
+++ b/_docs/sql-reference/sql-commands/035-partition-by-clause.md
@@ -137,7 +137,7 @@ a file to have this extension.
         +-------+----------------------------------------------+-------------+
         31,100 rows selected (5.45 seconds)
 
-    Drill performs partition pruning when you query partitioned data, which improves performance.
+    Drill performs partition pruning when you query partitioned data, which improves performance. You can improve performance further by casting the yr and occurrances columns to INTEGER, as described in ["Tips for Performant Querying"]({{site.baseurl}}/docs/text-files-csv-tsv-psv/#tips-for-performant-querying).
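+
+    For example, a revised CTAS along these lines casts the numeric columns while writing the partitioned table (a sketch; the `by_yr` table name and column aliases are assumed from the CTAS in an earlier step):
+
+        CREATE TABLE by_yr (gram, yr, occurrances) PARTITION BY (yr) AS
+        SELECT columns[0] gram, CAST(columns[1] AS INT) yr, CAST(columns[2] AS INT) occurrances
+        FROM `/googlebooks-eng-all-5gram-20120701-zo.tsv`;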
 9. Distributed mode: Query the unpartitioned data to compare the performance of the query of the partitioned data in the last step.
 
         SELECT * FROM `/googlebooks-eng-all-5gram-20120701-zo.tsv` WHERE (columns[1] = '1993');

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/sql-reference/sql-commands/050-create-view.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/050-create-view.md b/_docs/sql-reference/sql-commands/050-create-view.md
index ceb8bac..3c8c0d9 100644
--- a/_docs/sql-reference/sql-commands/050-create-view.md
+++ b/_docs/sql-reference/sql-commands/050-create-view.md
@@ -118,7 +118,7 @@ created for the steps in this example.
 
 Complete the following steps to create a view in Drill:
 
-  1. Decide which workspace you will use to create the view, and verify that the writable option is set to “true.” You can use an existing workspace, or you can create a new workspace. See [Workspaces](https://cwiki.apache.org/confluence/display/DRILL/Workspaces) for more information.  
+  1. Decide which workspace you will use to create the view, and verify that the writable option is set to “true.” You can use an existing workspace, or you can create a new workspace. See [Workspaces]({{site.baseurl}}/docs/workspaces/) for more information.  
   
         "workspaces": {
            "donuts": {

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/sql-reference/sql-commands/087-union-set-operator.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/087-union-set-operator.md b/_docs/sql-reference/sql-commands/087-union-set-operator.md
old mode 100755
new mode 100644
index 68558b9..4c78c33
--- a/_docs/sql-reference/sql-commands/087-union-set-operator.md
+++ b/_docs/sql-reference/sql-commands/087-union-set-operator.md
@@ -2,26 +2,27 @@
 title: "UNION Set Operator"
 parent: "SQL Commands"
 ---
-The UNION set operator returns all rows in the result sets of two separate query expressions. For example, if two employee tables exist, you can use the UNION set operator to merge the two tables and build a complete list of all the employees. Drill supports UNION ALL only. Drill does not support DISTINCT.
+The UNION set operator combines the result sets of two separate query expressions. The result set of each query must have the same number of columns and compatible data types. UNION automatically removes duplicate records from the combined result set; UNION ALL retains all records, including duplicates.
 
 
 ## Syntax
 The UNION set operator supports the following syntax:
 
        query
-       { UNION ALL }
+       { UNION [ ALL ] }
        query
   
 
 ## Parameters  
 *query*  
 
-Any SELECT query that Drill supports. See SELECT.
+Any SELECT query that Drill supports. See [SELECT]({{site.baseurl}}/docs/select/).
 
 ## Usage Notes
-   * The two SELECT query expressions that represent the direct operands of the UNION must produce the same number of columns. Corresponding columns must contain compatible data types. See Supported Data Types.  
+   * The two SELECT query expressions that represent the direct operands of the UNION must produce the same number of columns. Corresponding columns must contain compatible data types. See [Supported Data Types]({{site.baseurl}}/docs/supported-data-types/).  
    * Multiple UNION operators in the same SELECT statement are evaluated left to right, unless otherwise indicated by parentheses.  
-   * You cannot use * in UNION ALL for schemaless data.
+   * You can use * on either side of UNION only when the data source has a defined schema, such as data in Hive or views; otherwise, you must explicitly specify columns.
 
 ## Examples
 The following example uses the UNION ALL set operator to combine click activity data before and after a marketing campaign. The data in the example exists in the `dfs.clicks` workspace.

http://git-wip-us.apache.org/repos/asf/drill/blob/1b5fb2ee/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md b/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
index b1ad6b0..4474691 100644
--- a/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
+++ b/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
@@ -9,18 +9,19 @@ This section covers the Drill [time zone limitation]({{site.baseurl}}/docs/data-
 
 **Function**| **Return Type**  
 ---|---  
-[AGE(TIMESTAMP)]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#age)| INTERVALDAY or INTERVALYEAR
-[EXTRACT(field from time_expression)]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#extract)| DOUBLE
-[CURRENT_DATE]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)| DATE  
-[CURRENT_TIME]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)| TIME   
-[CURRENT_TIMESTAMP]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)| TIMESTAMP 
-[DATE_ADD]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#date_add)| DATE, TIMESTAMP  
-[DATE_PART]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#date_part)| DOUBLE  
-[DATE_SUB]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#date_sub)| DATE, TIMESTAMP     
-[LOCALTIME]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)| TIME  
-[LOCALTIMESTAMP]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)| TIMESTAMP  
-[NOW]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)| TIMESTAMP  
-[TIMEOFDAY]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)| VARCHAR  
+[AGE(TIMESTAMP)]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#age)                               | INTERVALDAY or INTERVALYEAR
+[EXTRACT(field from time_expression)]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#extract)      | DOUBLE
+[CURRENT_DATE]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)      | DATE  
+[CURRENT_TIME]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)      | TIME   
+[CURRENT_TIMESTAMP]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions) | TIMESTAMP 
+[DATE_ADD]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#date_add)                                | DATE, TIMESTAMP  
+[DATE_PART]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#date_part)                              | DOUBLE  
+[DATE_SUB]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic#date_sub)                                | DATE, TIMESTAMP     
+[LOCALTIME]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)         | TIME  
+[LOCALTIMESTAMP]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)    | TIMESTAMP  
+[NOW]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)               | TIMESTAMP  
+[TIMEOFDAY]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#other-date-and-time-functions)         | VARCHAR  
+[UNIX_TIMESTAMP]({{ site.baseurl }}/docs/date-time-functions-and-arithmetic/#unix_timestamp)                   | BIGINT
 
 ## AGE
 Returns the interval between two timestamps or subtracts a timestamp from midnight of the current date.
@@ -520,4 +521,51 @@ Is the time 2:00 PM?
     +------------+
     1 row selected (0.033 seconds)
 
+## UNIX_TIMESTAMP
+
+Returns the Unix epoch time, which is the number of seconds elapsed since January 1, 1970.
+
+### UNIX_TIMESTAMP Syntax
+
+    UNIX_TIMESTAMP()
+    UNIX_TIMESTAMP(string date)
+    UNIX_TIMESTAMP(string date, string pattern)
+
+These functions perform the following operations, respectively:
+
+* Gets the current Unix timestamp in seconds if given no arguments. 
+* Converts a time string in the format yyyy-MM-dd HH:mm:ss to a Unix timestamp in seconds using the default time zone and locale.
+* Converts a time string with the given pattern to a Unix timestamp in seconds.
+
+For example:
+
+    SELECT UNIX_TIMESTAMP() FROM sys.version;
+    +-------------+
+    |   EXPR$0    |
+    +-------------+
+    | 1435711031  |
+    +-------------+
+    1 row selected (0.749 seconds)
+
+    SELECT UNIX_TIMESTAMP('2009-03-20 11:15:55') FROM sys.version;
+    +-------------+
+    |   EXPR$0    |
+    +-------------+
+    | 1237572955  |
+    +-------------+
+    1 row selected (1.848 seconds)
+
+    SELECT UNIX_TIMESTAMP('2009-03-20', 'yyyy-MM-dd') FROM sys.version;
+    +-------------+
+    |   EXPR$0    |
+    +-------------+
+    | 1237532400  |
+    +-------------+
+    1 row selected (0.181 seconds)
+
+    SELECT UNIX_TIMESTAMP('2015-05-29 08:18:53.0', 'yyyy-MM-dd HH:mm:ss.SSS') FROM sys.version;
+    +-------------+
+    |   EXPR$0    |
+    +-------------+
+    | 1432912733  |
+    +-------------+