Posted to commits@drill.apache.org by br...@apache.org on 2015/09/17 01:29:14 UTC

drill-site git commit: "What time is it? Drill time!"

Repository: drill-site
Updated Branches:
  refs/heads/asf-site b0ce85e71 -> 6f013a147


"What time is it? Drill time!"


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/6f013a14
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/6f013a14
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/6f013a14

Branch: refs/heads/asf-site
Commit: 6f013a1470dc9b016bbc0619d54191ca1812626f
Parents: b0ce85e
Author: Bridget Bevens <bb...@maprtech.com>
Authored: Wed Sep 16 16:28:55 2015 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Wed Sep 16 16:28:55 2015 -0700

----------------------------------------------------------------------
 docs/drop-table/index.html                      | 45 +++++++++---------
 docs/parquet-format/index.html                  |  5 +-
 .../index.html                                  | 19 ++++++--
 docs/workspaces/index.html                      | 50 ++++++++++++++++++--
 feed.xml                                        |  4 +-
 5 files changed, 90 insertions(+), 33 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/6f013a14/docs/drop-table/index.html
----------------------------------------------------------------------
diff --git a/docs/drop-table/index.html b/docs/drop-table/index.html
index fb2d450..69d7f62 100644
--- a/docs/drop-table/index.html
+++ b/docs/drop-table/index.html
@@ -1004,15 +1004,15 @@
 
 <ul>
 <li>You must identify the schema in which a table exists to successfully drop the table. You can identify the schema before dropping the table with the USE &lt;schema_name&gt; command (see <a href="/docs/use/">USE command</a>) or when you issue the DROP TABLE command. See <a href="/docs/drop-table/#example-1:-identifying-a-schema">Example 1: Identifying a schema</a>.<br></li>
-<li>The schema must be mutable. For example, to drop a table from a schema named <code>dfs.sales</code>, the &quot;<code>writable</code>&quot; attribute for the sales workspace in the DFS storage plugin configuration must be set to <code>true</code>. See <a href="/docs/plugin-configuration-basics/#storage-plugin-attributes">Storage Plugin Attributes</a>. </li>
+<li>The schema must be mutable. For example, to drop a table from a schema named <code>dfs.sales</code>, the <code>&quot;writable&quot;</code> attribute for the <code>&quot;sales&quot;</code> workspace in the DFS storage plugin configuration must be set to <code>true</code>. See <a href="/docs/plugin-configuration-basics/#storage-plugin-attributes">Storage Plugin Attributes</a>. </li>
 </ul>
 
 <h3 id="file-type">File Type</h3>
 
 <ul>
-<li>The DROP TABLE command only works against file types that Drill can read. File types are identified as supported file formats, such as Parquet, JSON, or text. See <a href="/docs/querying-a-file-system-introduction/">Querying a File System Introduction</a> for a complete list of supported types. </li>
-<li>Text formats must be configured in the DFS storage plugin configuration. For example, to support CSV files, the “<code>format</code>” attribute in the configuration must include CSV as a value. See <a href="/docs/plugin-configuration-basics/#storage-plugin-attributes">Storage Plugin Attributes</a>.</li>
-<li>The directory on which you issue the DROP TABLE command must contain files of the same type. For example, if you have a workspace configured, such as <code>dfs.sales</code>, that points to a directory containing subdirectories, such as <code>/2012</code> and <code>/2013</code>, files in all of the directories must be of the same type in order to successfully issue the DROP TABLE command against the directory.<br></li>
+<li>The DROP TABLE command only works against file types that Drill can read. File types are identified as supported file formats, such as Parquet, JSON, or Text. See <a href="/docs/querying-a-file-system-introduction/">Querying a File System Introduction</a> for a complete list of supported file types. </li>
+<li>Text formats must be configured in the DFS storage plugin configuration. For example, to support CSV files, the <code>&quot;formats&quot;</code> attribute in the configuration must include <code>&quot;csv&quot;</code> as a value, as shown in the sketch after this list. See <a href="/docs/plugin-configuration-basics/#storage-plugin-attributes">Storage Plugin Attributes</a>.</li>
+<li>The directory on which you issue the DROP TABLE command must contain files of the same type. For example, if you have a workspace configured, such as <code>dfs.sales</code>, that points to a directory containing subdirectories, such as <code>/2012</code> and <code>/2013</code>, files in all of the directories must be of the same type to successfully issue the DROP TABLE command against the directory.<br></li>
 </ul>
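+
+<p>A minimal sketch of such a <code>&quot;formats&quot;</code> entry for CSV files (the exact attribute values depend on your configuration):</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">   &quot;formats&quot;: {
+     &quot;csv&quot;: {
+       &quot;type&quot;: &quot;text&quot;,
+       &quot;extensions&quot;: [&quot;csv&quot;],
+       &quot;delimiter&quot;: &quot;,&quot;
+     }
+   }
+</code></pre></div>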
 
 <h3 id="permissions">Permissions</h3>
@@ -1046,9 +1046,9 @@
 
 <h3 id="example-1:-identifying-a-schema">Example 1:  Identifying a schema</h3>
 
-<p>This example shows you how to identify a schema with the USE and DROP TABLE commands to successfully drop a table named <code>donuts_json</code> in the “<code>donuts</code>” workspace configured within the DFS storage plugin configuration.  </p>
+<p>This example shows you how to identify a schema with the USE and DROP TABLE commands and successfully drop a table named <code>donuts_json</code> in the <code>&quot;donuts&quot;</code> workspace configured within the DFS storage plugin configuration.  </p>
 
-<p>The &quot;<code>donuts</code>&quot; workspace is configured within the following DFS configuration:  </p>
+<p>The <code>&quot;donuts&quot;</code> workspace is configured within the following DFS configuration:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">    {
      &quot;type&quot;: &quot;file&quot;,
      &quot;enabled&quot;: true,
@@ -1066,7 +1066,7 @@
        }
      },
 </code></pre></div>
-<p>Issuing the <code>USE dfs.donuts</code> command changes to the <code>dfs.donuts</code> schema before issuing the <code>DROP TABLE</code> command.</p>
+<p>Issuing the USE command changes the schema to <code>dfs.donuts</code> before dropping the <code>donuts_json</code> table.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use dfs.donuts;
    +-------+-----------------------------------------+
    |  ok   |                 summary                 |
@@ -1083,7 +1083,7 @@
    +-------+------------------------------+
    1 row selected (0.094 seconds) 
 </code></pre></div>
-<p>Alternatively, instead of issuing the <code>USE</code> command to change the schema, you can include the schema name when you drop the table.</p>
+<p>Alternatively, instead of issuing the USE command to change the schema, you can include the schema name when you drop the table.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table dfs.donuts.donuts_json;
    +-------+------------------------------+
    |  ok   |           summary            |
@@ -1092,7 +1092,7 @@
    +-------+------------------------------+
    1 row selected (1.189 seconds)
 </code></pre></div>
-<p>Drill returns the following error when the schema is not identified:</p>
+<p>If you do not identify the schema prior to issuing the DROP TABLE command, Drill returns the following error:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table donuts_json;
 
    Error: PARSE ERROR: Root schema is immutable. Creating or dropping tables/views is not allowed in root schema.Select a schema using &#39;USE schema&#39; command.
@@ -1100,9 +1100,9 @@
 </code></pre></div>
 <h3 id="example-2:-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
 
-<p>In the following example, the <code>donuts_json</code> table is removed from the <code>/tmp</code> workspace using the <code>DROP TABLE</code> command. This example assumes that the steps in the <a href="/docs/create-table-as-ctas/#complete-ctas-example">Complete CTAS Example</a> were already completed. </p>
+<p>In the following example, the <code>donuts_json</code> table is removed from the <code>/tmp</code> workspace using the DROP TABLE command. This example assumes that the steps in the <a href="/docs/create-table-as-ctas/#complete-ctas-example">Complete CTAS Example</a> were already completed. </p>
 
-<p>Running an <code>ls</code> on the <code>/tmp</code> directory shows the <code>donuts_json</code> file.</p>
+<p>Running an <code>ls</code> on <code>/tmp/donuts_json</code> lists the files in the directory.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   $ pwd
    /tmp
    $ cd donuts_json
@@ -1116,7 +1116,7 @@
      &quot;ppu&quot; : 0.55
    }  
 </code></pre></div>
-<p>Issuing <code>USE dfs.tmp</code> changes schema.  </p>
+<p>Issuing the USE command changes the schema to <code>dfs.tmp</code>.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use dfs.tmp;
    +-------+-----------------------------------------+
    |  ok   |                 summary                 |
@@ -1125,7 +1125,7 @@
    +-------+-----------------------------------------+
    1 row selected (0.085 seconds)  
 </code></pre></div>
-<p>Running the <code>DROP TABLE</code> command removes the table from the schema.</p>
+<p>Running the <code>DROP TABLE</code> command removes the table from the <code>dfs.tmp</code> schema.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table donuts_json;
    +-------+------------------------------+
    |  ok   |           summary            |
@@ -1136,21 +1136,20 @@
 </code></pre></div>
 <h3 id="example-3:-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
 
-<p>When you create a table that writes files to a directory, you can issue the <code>DROP TABLE</code> command against the table to remove the directory. All files and subdirectories are deleted. For example, the following <code>CTAS</code> command writes Parquet data from the <code>nation.parquet</code> file, installed with Drill, to the <code>/tmp/name_key</code> directory.  </p>
+<p>When you create a table that writes files to a directory, you can issue the <code>DROP TABLE</code> command against the table to remove the directory. All files and subdirectories are deleted. For example, the following CTAS command writes Parquet data from the <code>nation.parquet</code> file, installed with Drill, to the <code>/tmp/name_key</code> directory.  </p>
 
-<p>Issue the <code>USE</code> command to change schema.  </p>
+<p>Issuing the USE command changes the schema to <code>dfs</code>.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; USE dfs;
 </code></pre></div>
-<p>Create a table using the <code>CTAS</code> command.</p>
+<p>Issuing the CTAS command creates a <code>tmp.name_key</code> table.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; CREATE TABLE tmp.`name_key` (N_NAME, N_NATIONKEY) AS SELECT N_NATIONKEY, N_NAME FROM dfs.`/Users/drilluser/apache-drill-1.2.0/sample-data/nation.parquet`;
    +-----------+----------------------------+
    | Fragment  | Number of records written  |
    +-----------+----------------------------+
    | 0_0       | 25                         |
    +-----------+----------------------------+
-   Query the directory to see the data.
 </code></pre></div>
-<p>Query the directory to see the data. </p>
+<p>Querying the directory shows the data. </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; select * from tmp.`name_key`;
    +---------+-----------------+
    | N_NAME  |   N_NATIONKEY   |
@@ -1183,7 +1182,7 @@
    +---------+-----------------+
    25 rows selected (0.183 seconds)
 </code></pre></div>
-<p>Issue the <code>DROP TABLE</code> command against the directory to remove the directory and deletes all files and subdirectories that existed within the directory.</p>
+<p>Issuing the DROP TABLE command against the directory removes the directory and deletes all of the files and subdirectories within it.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table name_key;
    +-------+---------------------------+
    |  ok   |          summary          |
@@ -1194,7 +1193,7 @@
 </code></pre></div>
 <h3 id="example-4:-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
 
-<p>The following example shows the result of dropping a table that does not exist because it has already been dropped or it never existed. </p>
+<p>The following example shows the result of dropping a table that does not exist because it was either already dropped or never existed. </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use use dfs.tmp;
    +-------+--------------------------------------+
    |  ok   |               summary                |
@@ -1210,7 +1209,7 @@
 </code></pre></div>
 <h3 id="example-5:-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
 
-<p>The following example shows the result of dropping a table without appropriate permissions in the file system.</p>
+<p>The following example shows the result of dropping a table without the appropriate permissions in the file system.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table name_key;
 
    Error: PERMISSION ERROR: Unauthorized to drop table
@@ -1242,12 +1241,12 @@
 
 <p>The following example shows the result of dropping a table when multiple file formats exist in the directory. In this scenario, the <code>sales_dir</code> table resides in the <code>dfs.sales</code> workspace and contains Parquet, CSV, and JSON files.</p>
 
-<p>Running <code>ls</code> on <code>sales_dir</code> shows that different file types exist in the directory.</p>
+<p>Running <code>ls</code> on <code>sales_dir</code> shows the different file types that exist in the directory.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   $ cd sales_dir/
    $ ls
    0_0_0.parquet    sales_a.csv sales_b.json    sales_c.parquet
 </code></pre></div>
-<p>Issuing the <code>DROP TABLE</code> command on the directory results in an error.</p>
+<p>Issuing the DROP TABLE command on the directory results in an error.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table dfs.sales.sales_dir;
 
    Error: VALIDATION ERROR: Table contains different file formats. 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/6f013a14/docs/parquet-format/index.html
----------------------------------------------------------------------
diff --git a/docs/parquet-format/index.html b/docs/parquet-format/index.html
index fe915ee..d77a5fe 100644
--- a/docs/parquet-format/index.html
+++ b/docs/parquet-format/index.html
@@ -1025,7 +1025,10 @@ When Drill reads the file, it attempts to execute the query on the node where th
 <li>Stores the metadata cache file at each level that covers that particular level and all lower levels.</li>
 </ul>
 
-<p>At execution time, Drill reads the actual files. At planning time, Drill reads only the metadata file.</p>
+<p>At execution time, Drill reads the actual files. At planning time, Drill reads only the metadata file. </p>
+
+<p>The first query that does not see the metadata file gathers the metadata itself, so the elapsed time of the first query is noticeably longer than that of subsequent queries. </p>
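+
+<p>To avoid the slower first query, you can generate the metadata cache file ahead of time with the REFRESH TABLE METADATA command. A minimal sketch, assuming an existing Parquet table or directory named <code>name_key</code> in the <code>dfs.tmp</code> workspace:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; REFRESH TABLE METADATA dfs.tmp.`name_key`;
+</code></pre></div>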
 
 <h2 id="writing-parquet-files">Writing Parquet Files</h2>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/6f013a14/docs/sql-window-functions-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/sql-window-functions-introduction/index.html b/docs/sql-window-functions-introduction/index.html
index 189d18d..4d378b0 100644
--- a/docs/sql-window-functions-introduction/index.html
+++ b/docs/sql-window-functions-introduction/index.html
@@ -1054,7 +1054,16 @@ To compare, you can run a query using the AVG() function as a standard set funct
 </code></pre></div>
 <h2 id="types-of-window-functions">Types of Window Functions</h2>
 
-<p>Currently, Drill supports the following aggregate and ranking window functions:  </p>
+<p>Currently, Drill supports the following value, aggregate, and ranking window functions:  </p>
+
+<p>Value</p>
+
+<ul>
+<li>FIRST_VALUE()</li>
+<li>LAG()</li>
+<li>LAST_VALUE()</li>
+<li>LEAD() </li>
+</ul>
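+
+<p>For example, a value function such as LAG() returns data from an earlier row in the window. A minimal sketch, assuming a hypothetical <code>emp</code> table with <code>emp_id</code> and <code>salary</code> columns:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; SELECT emp_id, salary, LAG(salary) OVER (ORDER BY emp_id) AS prior_salary FROM emp;
+</code></pre></div>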
 
 <p>Aggregate   </p>
 
@@ -1105,14 +1114,18 @@ Any of the following functions used with the OVER clause to provide a window spe
 <li>AVG()</li>
 <li>COUNT()</li>
 <li>CUME_DIST()</li>
+<li>DENSE_RANK()</li>
+<li>FIRST_VALUE()</li>
+<li>LAG()</li>
+<li>LAST_VALUE()</li>
+<li>LEAD()</li>
 <li>MAX()</li>
 <li>MIN()</li>
-<li>SUM()</li>
-<li>DENSE_RANK()</li>
 <li>NTILE()</li>
 <li>PERCENT_RANK()</li>
 <li>RANK()</li>
 <li>ROW_NUMBER()</li>
+<li>SUM()</li>
 </ul>
 
 <p>OVER()<br>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/6f013a14/docs/workspaces/index.html
----------------------------------------------------------------------
diff --git a/docs/workspaces/index.html b/docs/workspaces/index.html
index 5f1dcfd..ec536d6 100644
--- a/docs/workspaces/index.html
+++ b/docs/workspaces/index.html
@@ -989,10 +989,42 @@
 
     <div class="int_text" align="left">
       
-        <p>You can define one or more workspaces in a storage plugin configuration. The workspace defines the location of files in subdirectories of a local or distributed file system. Drill searches the workspace to locate data when
-you run a query. The <code>default</code>
-workspace points to the root of the file system. </p>
+        <p>You can define one or more workspaces in a <a href="/docs/plugin-configuration-basics/">storage plugin configuration</a>. The workspace defines the location of files in subdirectories of a local or distributed file system. Drill searches the workspace to locate data when
+you run a query. A hidden default workspace, <code>dfs.default</code>, points to the root of the file system.</p>
 
+<p>The following DFS storage plugin configuration shows some examples of defined workspaces:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">   {
+     &quot;type&quot;: &quot;file&quot;,
+     &quot;enabled&quot;: true,
+     &quot;connection&quot;: &quot;file:///&quot;,
+     &quot;workspaces&quot;: {
+       &quot;root&quot;: {
+         &quot;location&quot;: &quot;/&quot;,
+         &quot;writable&quot;: false,
+         &quot;defaultInputFormat&quot;: null
+       },
+       &quot;tmp&quot;: {
+         &quot;location&quot;: &quot;/tmp&quot;,
+         &quot;writable&quot;: true,
+         &quot;defaultInputFormat&quot;: null
+       },
+       &quot;emp&quot;: {
+         &quot;location&quot;: &quot;/Users/user1/emp&quot;,
+         &quot;writable&quot;: true,
+         &quot;defaultInputFormat&quot;: null
+       },
+       &quot;donuts&quot;: {
+         &quot;location&quot;: &quot;/Users/user1/donuts&quot;,
+         &quot;writable&quot;: true,
+         &quot;defaultInputFormat&quot;: null
+       },
+       &quot;sales&quot;: {
+         &quot;location&quot;: &quot;/Users/user1/sales&quot;,
+         &quot;writable&quot;: true,
+         &quot;defaultInputFormat&quot;: null
+       }
+     },
+</code></pre></div>
 <p>Configuring workspaces to include a subdirectory simplifies the query, which is important when querying the same files repeatedly. After you configure a long path name in the workspace <code>location</code> property, instead of
 using the full path name to the data source, you use dot notation in the FROM
 clause.</p>
@@ -1004,8 +1036,18 @@ clause.</p>
 <p>To query the data source when you have not set the default schema name to the storage plugin configuration, include the plugin name. This syntax assumes you did not issue a USE statement to connect to a storage plugin that defines the
 location of the data:</p>
 
-<p><code>&lt;plugin&gt;.&lt;workspace name&gt;.`&lt;location&gt;</code>`</p>
+<p><code>&lt;plugin&gt;.&lt;workspace name&gt;.`&lt;location&gt;`</code>  </p>
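+
+<p>For example, to query a file named <code>donuts.json</code> (a hypothetical file name) in the <code>donuts</code> workspace of the <code>dfs</code> storage plugin configuration shown earlier:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; SELECT * FROM dfs.donuts.`donuts.json`;
+</code></pre></div>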
+
+<h2 id="overriding-dfs.default">Overriding <code>dfs.default</code></h2>
 
+<p>You may want to override the hidden default workspace in scenarios where users do not have permissions to access the root directory. 
+Add the following workspace entry to the DFS storage plugin configuration to override the default workspace:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">&quot;default&quot;: {
+  &quot;location&quot;: &quot;&lt;/directory/path&gt;&quot;,
+  &quot;writable&quot;: true,
+  &quot;defaultInputFormat&quot;: null
+}
+</code></pre></div>
 <h2 id="no-workspaces-for-hive-and-hbase">No Workspaces for Hive and HBase</h2>
 
 <p>You cannot include workspaces in the configurations of the

http://git-wip-us.apache.org/repos/asf/drill-site/blob/6f013a14/feed.xml
----------------------------------------------------------------------
diff --git a/feed.xml b/feed.xml
index 7e74721..e8c9a69 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>/</link>
     <atom:link href="/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Tue, 15 Sep 2015 16:26:24 -0700</pubDate>
-    <lastBuildDate>Tue, 15 Sep 2015 16:26:24 -0700</lastBuildDate>
+    <pubDate>Wed, 16 Sep 2015 16:20:04 -0700</pubDate>
+    <lastBuildDate>Wed, 16 Sep 2015 16:20:04 -0700</lastBuildDate>
     <generator>Jekyll v2.5.2</generator>
     
       <item>