Posted to commits@flink.apache.org by uc...@apache.org on 2016/06/21 12:10:52 UTC

flink git commit: [FLINK-3973] [docs] Emphasize Table/SQL links

Repository: flink
Updated Branches:
  refs/heads/master 56216d56f -> a4d35d7c3


[FLINK-3973] [docs] Emphasize Table/SQL links


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/a4d35d7c
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/a4d35d7c
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/a4d35d7c

Branch: refs/heads/master
Commit: a4d35d7c362feb66b7887561d6a5869935a69544
Parents: 56216d5
Author: Ufuk Celebi <uc...@apache.org>
Authored: Tue Jun 21 14:05:58 2016 +0200
Committer: Ufuk Celebi <uc...@apache.org>
Committed: Tue Jun 21 14:06:04 2016 +0200

----------------------------------------------------------------------
 docs/apis/best_practices.md |  2 +-
 docs/apis/table.md          | 32 ++++++++++++++++----------------
 docs/libs/table.md          |  6 +++++-
 3 files changed, 22 insertions(+), 18 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/a4d35d7c/docs/apis/best_practices.md
----------------------------------------------------------------------
diff --git a/docs/apis/best_practices.md b/docs/apis/best_practices.md
index 62e0ebf..7ae1b64 100644
--- a/docs/apis/best_practices.md
+++ b/docs/apis/best_practices.md
@@ -2,7 +2,7 @@
 title: "Best Practices"
 # Top-level navigation
 top-nav-group: apis
-top-nav-pos: 4
+top-nav-pos: 5
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/a4d35d7c/docs/apis/table.md
----------------------------------------------------------------------
diff --git a/docs/apis/table.md b/docs/apis/table.md
index 1b25099..35caa08 100644
--- a/docs/apis/table.md
+++ b/docs/apis/table.md
@@ -4,7 +4,7 @@ is_beta: true
 # Top-level navigation
 top-nav-group: apis
 top-nav-pos: 4
-top-nav-title: "Table API and SQL"
+top-nav-title: "<strong>Table API and SQL</strong>"
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -205,14 +205,14 @@ tableEnv.registerTableSource("Customers", custTS)
 
 A `TableSource` can provide access to data stored in various storage systems such as databases (MySQL, HBase, ...), file formats (CSV, Apache Parquet, Avro, ORC, ...), or messaging systems (Apache Kafka, RabbitMQ, ...).
 
-Currently, Flink only provides a `CsvTableSource` to read CSV files. A custom `TableSource` can be defined by implementing the `BatchTableSource` or `StreamTableSource` interface. 
+Currently, Flink only provides a `CsvTableSource` to read CSV files. A custom `TableSource` can be defined by implementing the `BatchTableSource` or `StreamTableSource` interface.
 
 
 Table API
 ----------
 The Table API provides methods to apply relational operations on DataSets and Datastreams both in Scala and Java.
 
-The central concept of the Table API is a `Table` which represents a table with relational schema (or relation). Tables can be created from a `DataSet` or `DataStream`, converted into a `DataSet` or `DataStream`, or registered in a table catalog using a `TableEnvironment`. A `Table` is always bound to a specific `TableEnvironment`. It is not possible to combine Tables of different TableEnvironments. 
+The central concept of the Table API is a `Table` which represents a table with relational schema (or relation). Tables can be created from a `DataSet` or `DataStream`, converted into a `DataSet` or `DataStream`, or registered in a table catalog using a `TableEnvironment`. A `Table` is always bound to a specific `TableEnvironment`. It is not possible to combine Tables of different TableEnvironments.
 
 *Note: The only operations currently supported on streaming Tables are selection, projection, and union.*
 
@@ -222,7 +222,7 @@ When using Flink's Java DataSet API, DataSets are converted to Tables and Tables
 The following example shows:
 
 - how a `DataSet` is converted to a `Table`,
-- how relational queries are specified, and 
+- how relational queries are specified, and
 - how a `Table` is converted back to a `DataSet`.
 
 {% highlight java %}
@@ -278,7 +278,7 @@ The Table API is enabled by importing `org.apache.flink.api.scala.table._`. This
 implicit conversions to convert a `DataSet` or `DataStream` to a Table. The following example shows:
 
 - how a `DataSet` is converted to a `Table`,
-- how relational queries are specified, and 
+- how relational queries are specified, and
 - how a `Table` is converted back to a `DataSet`.
 
 {% highlight scala %}
@@ -431,7 +431,7 @@ Table result = left.join(right).where("a = d").select("a, b, e");
 {% endhighlight %}
       </td>
     </tr>
-    
+
     <tr>
       <td><strong>LeftOuterJoin</strong></td>
       <td>
@@ -455,7 +455,7 @@ Table result = left.rightOuterJoin(right, "a = d").select("a, b, e");
 {% endhighlight %}
       </td>
     </tr>
-    
+
     <tr>
       <td><strong>FullOuterJoin</strong></td>
       <td>
@@ -590,7 +590,7 @@ val result = left.join(right).where('a === 'd).select('a, 'b, 'e);
 {% endhighlight %}
       </td>
     </tr>
-    
+
     <tr>
       <td><strong>LeftOuterJoin</strong></td>
       <td>
@@ -614,7 +614,7 @@ val result = left.rightOuterJoin(right, 'a === 'd).select('a, 'b, 'e)
 {% endhighlight %}
       </td>
     </tr>
-    
+
     <tr>
       <td><strong>FullOuterJoin</strong></td>
       <td>
@@ -728,7 +728,7 @@ nullLiteral = "Null(" , dataType , ")" ;
 {% endhighlight %}
 
 Here, `literal` is a valid Java literal, `fieldReference` specifies a column in the data, and `functionIdentifier` specifies a supported scalar function. The
-column names and function names follow Java identifier syntax. Expressions specified as Strings can also use prefix notation instead of suffix notation to call operators and functions. 
+column names and function names follow Java identifier syntax. Expressions specified as Strings can also use prefix notation instead of suffix notation to call operators and functions.
 
 {% top %}
 
@@ -737,7 +737,7 @@ SQL
 ----
 SQL queries are specified using the `sql()` method of the `TableEnvironment`. The method returns the result of the SQL query as a `Table` which can be converted into a `DataSet` or `DataStream`, used in subsequent Table API queries, or written to a `TableSink` (see [Writing Tables to External Sinks](#writing-tables-to-external-sinks)). SQL and Table API queries can seamlessly mixed and are holistically optimized and translated into a single DataStream or DataSet program.
 
-A `Table`, `DataSet`, `DataStream`, or external `TableSource` must be registered in the `TableEnvironment` in order to be accessible by a SQL query (see [Registering Tables](#registering-tables)). 
+A `Table`, `DataSet`, `DataStream`, or external `TableSource` must be registered in the `TableEnvironment` in order to be accessible by a SQL query (see [Registering Tables](#registering-tables)).
 
 *Note: Flink's SQL support is not feature complete, yet. Queries that include unsupported SQL features will cause a `TableException`. The limitations of SQL on batch and streaming tables are listed in the following sections.*
 
@@ -788,7 +788,7 @@ Among others, the following SQL features are not supported, yet:
 - Grouping sets
 - `INTERSECT` and `EXCEPT` set operations
 
-*Note: Tables are joined in the order in which they are specified in the `FROM` clause. In some cases the table order must be manually tweaked to resolve Cartesian products.* 
+*Note: Tables are joined in the order in which they are specified in the `FROM` clause. In some cases the table order must be manually tweaked to resolve Cartesian products.*
 
 ### SQL on Streaming Tables
 
@@ -835,9 +835,9 @@ The current version of streaming SQL only supports `SELECT`, `FROM`, `WHERE`, an
 Writing Tables to External Sinks
 ----
 
-A `Table` can be written to a `TableSink`, which is a generic interface to support a wide variety of file formats (e.g. CSV, Apache Parquet, Apache Avro), storage systems (e.g., JDBC, Apache HBase, Apache Cassandra, Elasticsearch), or messaging systems (e.g., Apache Kafka, RabbitMQ). A batch `Table` can only be written to a `BatchTableSink`, a streaming table requires a `StreamTableSink`. A `TableSink` can implement both interfaces at the same time. 
+A `Table` can be written to a `TableSink`, which is a generic interface to support a wide variety of file formats (e.g. CSV, Apache Parquet, Apache Avro), storage systems (e.g., JDBC, Apache HBase, Apache Cassandra, Elasticsearch), or messaging systems (e.g., Apache Kafka, RabbitMQ). A batch `Table` can only be written to a `BatchTableSink`, a streaming table requires a `StreamTableSink`. A `TableSink` can implement both interfaces at the same time.
 
-Currently, Flink only provides a `CsvTableSink` that writes a batch or streaming `Table` to CSV-formatted files. A custom `TableSink` can be defined by implementing the `BatchTableSink` and/or `StreamTableSink` interface. 
+Currently, Flink only provides a `CsvTableSink` that writes a batch or streaming `Table` to CSV-formatted files. A custom `TableSink` can be defined by implementing the `BatchTableSink` and/or `StreamTableSink` interface.
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
@@ -846,7 +846,7 @@ ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
 BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
 
 // compute the result Table using Table API operators and/or SQL queries
-Table result = ... 
+Table result = ...
 
 // create a TableSink
 TableSink sink = new CsvTableSink("/path/to/file", fieldDelim = "|");
@@ -864,7 +864,7 @@ val env = ExecutionEnvironment.getExecutionEnvironment
 val tableEnv = TableEnvironment.getTableEnvironment(env)
 
 // compute the result Table using Table API operators and/or SQL queries
-val result: Table = ... 
+val result: Table = ...
 
 // create a TableSink
 val sink: TableSink = new CsvTableSink("/path/to/file", fieldDelim = "|")

http://git-wip-us.apache.org/repos/asf/flink/blob/a4d35d7c/docs/libs/table.md
----------------------------------------------------------------------
diff --git a/docs/libs/table.md b/docs/libs/table.md
index f4477ba..ea5f1c2 100644
--- a/docs/libs/table.md
+++ b/docs/libs/table.md
@@ -1,5 +1,9 @@
 ---
 title: "Table API and SQL"
+# Top-level navigation
+top-nav-group: libs
+top-nav-pos: 4
+redirect: apis/table.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -22,4 +26,4 @@ under the License.
 
 <meta http-equiv="refresh" content="1; url={{ site.baseurl }}/apis/table.html" />
 
-The *Table API guide* has been moved. Redirecting to [{{ site.baseurl }}/apis/table.html]({{ site.baseurl }}/apis/table.html) in 1 second.
\ No newline at end of file
+The *Table API guide* has been moved. Redirecting to [{{ site.baseurl }}/apis/table.html]({{ site.baseurl }}/apis/table.html) in 1 second.