Posted to commits@flink.apache.org by fh...@apache.org on 2018/06/21 15:29:11 UTC

[12/12] flink-web git commit: [FLINK-9522] Restructure Flink website

[FLINK-9522] Restructure Flink website

- Rework start page: added features, new figure, moved powered-by users up
- Rework menu structure: 1. What is Flink, 2. users' section, 3. contributors' section
- Replace Features page with "What is Flink?" pages
- Rework "Use Cases" page
- Add "Getting Help" page
- Remove IRC channel info
- Update a few links on the "Powered By" page


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/cbefc2e9
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/cbefc2e9
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/cbefc2e9

Branch: refs/heads/asf-site
Commit: cbefc2e97c603220729bdc69b827ceff8b2ddaa1
Parents: 2b0c472
Author: Fabian Hueske <fh...@gmail.com>
Authored: Thu Dec 14 10:39:12 2017 +0100
Committer: Fabian Hueske <fh...@apache.org>
Committed: Thu Jun 21 17:27:53 2018 +0200

----------------------------------------------------------------------
 _includes/navbar.html             |  76 ++++----
 blog/index.html                   |   5 +-
 community.md                      |   9 +-
 downloads.md                      |   2 +
 faq.md                            | 104 +---------
 features.md                       | 338 ---------------------------------
 flink-applications.md             | 202 ++++++++++++++++++++
 flink-architecture.md             | 100 ++++++++++
 flink-operations.md               |  72 +++++++
 gettinghelp.md                    | 133 +++++++++++++
 how-to-contribute.md              |   3 +
 img/api-stack.png                 | Bin 0 -> 173731 bytes
 img/bounded-unbounded.png         | Bin 0 -> 176524 bytes
 img/deployment-modes.png          | Bin 0 -> 256031 bytes
 img/flink-home-graphic-update.svg |   4 -
 img/flink-home-graphic.png        | Bin 0 -> 495083 bytes
 img/function-state.png            | Bin 0 -> 34088 bytes
 img/local-state.png               | Bin 0 -> 172758 bytes
 img/usecases-analytics.png        | Bin 0 -> 130058 bytes
 img/usecases-datapipelines.png    | Bin 0 -> 98239 bytes
 img/usecases-eventdrivenapps.png  | Bin 0 -> 168354 bytes
 index.md                          | 169 +++++++++++++----
 poweredby.md                      |  22 +--
 usecases.md                       | 103 ++++++++--
 24 files changed, 798 insertions(+), 544 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/_includes/navbar.html
----------------------------------------------------------------------
diff --git a/_includes/navbar.html b/_includes/navbar.html
index 0ce0094..3123650 100755
--- a/_includes/navbar.html
+++ b/_includes/navbar.html
@@ -18,54 +18,60 @@
         <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
           <ul class="nav navbar-nav navbar-main">
 
-            <!-- Downloads -->
-            <li class="{% if page.url == '/downloads.html' %}active{% endif %}"><a class="btn btn-info" href="{{ site.baseurl }}/downloads.html">Download Flink</a></li>
+            <!-- First menu section explains visitors what Flink is -->
 
-            <!-- Overview -->
-            <li{% if page.url == '/index.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/index.html">Home</a></li>
+            <!-- What is Stream Processing? -->
+            <!--
+            <li{% if page.url == '/streamprocessing1.html' or page.url == '/streamprocessing2.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/streamprocessing1.html">What is Stream Processing?</a></li>
+            -->
 
-            <!-- Intro -->
-            <li{% if page.url == '/introduction.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/introduction.html">Introduction to Flink</a></li>
+            <!-- What is Flink? -->
+            <li{% if page.url == '/flink-architecture.html' or page.url == '/flink-applications.html' or page.url == '/flink-operations.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/flink-architecture.html">What is Apache Flink?</a></li>
 
             <!-- Use cases -->
-            <li{% if page.url == '/usecases.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/usecases.html">Flink Use Cases</a></li>
+            <li{% if page.url == '/usecases.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/usecases.html">Use Cases</a></li>
 
             <!-- Powered by -->
-            <li{% if page.url == '/poweredby.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/poweredby.html">Powered by Flink</a></li>
+            <li{% if page.url == '/poweredby.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/poweredby.html">Powered By</a></li>
 
-            <!-- Ecosystem -->
-            <li{% if page.url == '/ecosystem.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/ecosystem.html">Ecosystem</a></li>
+            <!-- FAQ -->
+            <li{% if page.url == '/faq.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/faq.html">FAQ</a></li>
 
-            <!-- Community -->
-            <li{% if page.url == '/community.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/community.html">Community &amp; Project Info</a></li>
+            &nbsp;
+            <!-- Second menu section aims to support Flink users -->
 
-            <!-- Contribute -->
-            <li{% if page.url == '/how-to-contribute.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/how-to-contribute.html">How to Contribute</a></li>
+            <!-- Downloads -->
+            <li{% if page.url == '/downloads.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/downloads.html">Downloads</a></li>
 
-            <!-- Blog -->
-            <li class="{% if page.url contains '/blog/' or page.url contains '/news/' %} active{% endif %} hidden-md hidden-sm"><a href="{{ site.baseurl }}/blog/"><b>Flink Blog</b></a></li>
+            <!-- Quickstart -->
+            <li>
+              <a href="{{ site.docs-stable }}/quickstart/setup_quickstart.html" target="_blank">Tutorials <small><span class="glyphicon glyphicon-new-window"></span></small></a>
+            </li>
 
-            <hr />
+            <!-- Documentation -->
+            <li class="dropdown">
+              <a class="dropdown-toggle" data-toggle="dropdown" href="#">Documentation<span class="caret"></span></a>
+              <ul class="dropdown-menu">
+                <li><a href="{{ site.docs-stable }}" target="_blank">{{site.FLINK_VERSION_STABLE_SHORT}} (Latest stable release) <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
+                <li><a href="{{ site.docs-snapshot }}" target="_blank">{{site.FLINK_VERSION_LATEST_SHORT}} (Snapshot) <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
+              </ul>
+            </li>
 
+            <!-- getting help -->
+            <li{% if page.url == '/gettinghelp.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/gettinghelp.html">Getting Help</a></li>
 
+            <!-- Blog -->
+            <li{% if page.url contains '/blog/' or page.url contains '/news/' %} class="active"{% endif %}><a href="{{ site.baseurl }}/blog/"><b>Flink Blog</b></a></li>
 
-            <!-- Documentation -->
-            <!-- <li>
-              <a href="{{ site.docs-stable }}" target="_blank">Documentation <small><span class="glyphicon glyphicon-new-window"></span></small></a>
-            </li> -->
-            <li class="dropdown">
-              <a class="dropdown-toggle" data-toggle="dropdown" href="#">Documentation
-                <span class="caret"></span></a>
-                <ul class="dropdown-menu">
-                  <li><a href="{{ site.docs-stable }}" target="_blank">{{site.FLINK_VERSION_STABLE_SHORT}} (Latest stable release) <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
-                  <li><a href="{{ site.docs-snapshot }}" target="_blank">{{site.FLINK_VERSION_LATEST_SHORT}} (Snapshot) <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
-                </ul>
-              </li>
+            &nbsp;
 
-            <!-- Quickstart -->
-            <li>
-              <a href="{{ site.docs-stable }}/quickstart/setup_quickstart.html" target="_blank">Quickstart <small><span class="glyphicon glyphicon-new-window"></span></small></a>
-            </li>
+            <!-- Third menu section aim to support community and contributors -->
+
+            <!-- Community -->
+            <li{% if page.url == '/community.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/community.html">Community &amp; Project Info</a></li>
+
+            <!-- Contribute -->
+            <li{% if page.url == '/how-to-contribute.html' %} class="active"{% endif %}><a href="{{ site.baseurl }}/how-to-contribute.html">How to Contribute</a></li>
 
             <!-- GitHub -->
             <li>
@@ -75,13 +81,9 @@
           </ul>
 
 
-
           <ul class="nav navbar-nav navbar-bottom">
           <hr />
 
-            <!-- FAQ -->
-            <li {% if page.url == '/faq.html' %} class="hidden-sm active"{% endif %}><a href="{{ site.baseurl }}/faq.html">Project FAQ</a></li>
-
             <!-- Twitter -->
             <li><a href="{{ site.twitter }}" target="_blank">@ApacheFlink <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/blog/index.html
----------------------------------------------------------------------
diff --git a/blog/index.html b/blog/index.html
index c38b388..f986c4f 100755
--- a/blog/index.html
+++ b/blog/index.html
@@ -3,9 +3,8 @@ title: Blog
 layout: base
 ---
 
-<div class="row">
-  <div class="col-sm-12"><h1>Blog</h1></div>
-</div>
+<h1>Blog</h1>
+<hr />
 
 <div class="row">
   <div class="col-sm-8">

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/community.md
----------------------------------------------------------------------
diff --git a/community.md b/community.md
index 2fb1db1..c7aae44 100755
--- a/community.md
+++ b/community.md
@@ -1,6 +1,9 @@
 ---
 title: "Community & Project Info"
 ---
+
+<hr />
+
 {% toc %}
 
 There are many ways to get help from the Apache Flink community. The [mailing lists](#mailing-lists) are the primary place where all Flink committers are present. If you want to talk with the Flink committers and users in a chat, there is an [IRC channel](#irc). Some committers are also monitoring [Stack Overflow](http://stackoverflow.com/questions/tagged/apache-flink). Please remember to tag your questions with the *[apache-flink](http://stackoverflow.com/questions/tagged/apache-flink)* tag. Bugs and feature requests can either be discussed on the *dev mailing list* or on [Jira]({{ site.jira }}). Those interested in contributing to Flink should check out the [contribution guide](how-to-contribute.html).
@@ -98,12 +101,6 @@ There are many ways to get help from the Apache Flink community. The [mailing li
 
 <b style="color:red">Please make sure you are subscribed to the mailing list you are posting to!</b> If you are not subscribed to the mailing list, your message will either be rejected (dev@ list) or you won't receive the response (user@ list).
 
-## IRC
-
-There is an IRC channel called #flink dedicated to Apache Flink at irc.freenode.org. There is also a [web-based IRC client](http://webchat.freenode.net/?channels=flink) available.
-
-The IRC channel can be used for online discussions about Apache Flink as community, but developers should be careful to move or duplicate all the official or useful discussions to the issue tracking system or dev mailing list.
-
 ## Stack Overflow
 
 Committers are watching [Stack Overflow](http://stackoverflow.com/questions/tagged/apache-flink) for the [apache-flink](http://stackoverflow.com/questions/tagged/apache-flink) tag.

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/downloads.md
----------------------------------------------------------------------
diff --git a/downloads.md b/downloads.md
index 68e1d8e..3e120b0 100644
--- a/downloads.md
+++ b/downloads.md
@@ -2,6 +2,8 @@
 title: "Downloads"
 ---
 
+<hr />
+
 <script type="text/javascript">
 $( document ).ready(function() {
   // Handler for .ready() called.

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/faq.md
----------------------------------------------------------------------
diff --git a/faq.md b/faq.md
index c838226..5cbdcf8 100755
--- a/faq.md
+++ b/faq.md
@@ -20,8 +20,11 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The following questions are frequently asked with regard to the Flink project **in general**.
-If you have further questions, make sure to consult the [documentation]({{site.docs-snapshot}}) or [ask the community]({{ site.baseurl }}/community.html).
+<hr />
+
+The following questions are frequently asked with regard to the Flink project **in general**. 
+
+If you have further questions, make sure to consult the [documentation]({{site.docs-stable}}) or [ask the community]({{ site.baseurl }}/gettinghelp.html).
 
 {% toc %}
 
@@ -82,99 +85,6 @@ For the DataStream API, Flink supports larger-than-memory state be configuring t
 
 For the DataSet API, all operations (except delta-iterations) can scale beyond main memory.
 
+# Common Error Messages
 
-# Debugging and Common Error Messages
-
-## I have a NotSerializableException.
-
-Flink uses Java serialization to distribute copies of the application logic (the functions and operations you implement,
-as well as the program configuration, etc.) to the parallel worker processes.
-Because of that, all functions that you pass to the API must be serializable, as defined by
-[java.io.Serializable](http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html).
-
-If your function is an anonymous inner class, consider the following:
-
-  - Make the function a standalone class, or a static inner class.
-  - Use a Java 8 lambda function.
-
-If your function is already a static class, check the fields that you assign when you create
-an instance of the class. One of the fields most likely holds a non-serializable type.
-
-  - In Java, use a `RichFunction` and initialize the problematic fields in the `open()` method.
-  - In Scala, you can often simply use “lazy val” to defer initialization until the distributed execution happens. This may come at a minor performance cost. You can naturally also use a `RichFunction` in Scala.
-
-## Using the Scala API, I get an error about implicit values and evidence parameters.
-
-This error means that the implicit value for the type information could not be provided.
-Make sure that you have an `import org.apache.flink.streaming.api.scala._` (DataStream API) or an
-`import org.apache.flink.api.scala._` (DataSet API) statement in your code.
-
-If you are using Flink operations inside functions or classes that take
-generic parameters, then a TypeInformation must be available for that parameter.
-This can be achieved by using a context bound:
-
-~~~scala
-def myFunction[T: TypeInformation](input: DataSet[T]): DataSet[Seq[T]] = {
-  input.reduceGroup( i => i.toSeq )
-}
-~~~
-
-See [Type Extraction and Serialization]({{ site.docs-snapshot }}/dev/types_serialization.html) for
-an in-depth discussion of how Flink handles types.
-
-## I see a ClassCastException: X cannot be cast to X.
-
-When you see an exception in the style `com.foo.X` cannot be cast to `com.foo.X` (or cannot be assigned to `com.foo.X`), it means that
-multiple versions of the class `com.foo.X` have been loaded by different class loaders, and types of that class are attempted to be assigned to each other.
-
-The reason for that can be:
-
-  - Class duplication through `child-first` classloading. That is an intentional mechanism to allow users to use different versions of the same
-    dependencies that Flink uses. However, if different copies of these classes move between Flink's core and the user application code, such an exception
-    can occur. To verify that this is the reason, try setting `classloader.resolve-order: parent-first` in the configuration.
-    If that makes the error disappear, please write to the mailing list to check if that may be a bug.
-
-  - Caching of classes from different execution attempts, for example, by utilities like Guava’s Interners, or Avro's Schema cache.
-    Try to not use interners, or reduce the scope of the interner/cache to make sure a new cache is created whenever a new task
-    execution is started.
-
-## I have an AbstractMethodError or NoSuchFieldError.
-
-Such errors typically indicate a mix-up in some dependency version. That means a different version of a dependency (a library)
-is loaded during the execution compared to the version that code was compiled against.
-
-From Flink 1.4.0 on, dependencies in your application JAR file may have different versions compared to dependencies used
-by Flink's core, or other dependencies in the classpath (for example from Hadoop). That requires `child-first` classloading
-to be activated, which is the default.
-
-If you see these problems in Flink 1.4+, one of the following may be true:
-
-  - You have a dependency version conflict within your application code. Make sure all your dependency versions are consistent.
-  - You are conflicting with a library that Flink cannot support via `child-first` classloading. Currently these are the
-    Scala standard library classes, as well as Flink's own classes, logging APIs, and any Hadoop core classes.
-
-
-## My DataStream application produces no output, even though events are going in.
-
-If your DataStream application uses *Event Time*, check that your watermarks get updated. If no watermarks are produced,
-event time windows might never trigger, and the application would produce no results.
-
-You can check in Flink's web UI (watermarks section) whether watermarks are making progress.
-
-## I see an exception reporting "Insufficient number of network buffers".
-
-If you run Flink with a very high parallelism, you may need to increase the number of network buffers.
-
-By default, Flink takes 10% of the JVM heap size for network buffers, with a minimum of 64MB and a maximum of 1GB.
-You can adjust all these values via `taskmanager.network.memory.fraction`, `taskmanager.network.memory.min`, and
-`taskmanager.network.memory.max`.
-
-Please refer to the [Configuration Reference]({{ site.docs-snapshot }}/ops/config.html#configuring-the-network-buffers) for details.
-
-## My job fails with various exceptions from the HDFS/Hadoop code. What can I do?
-
-The most common cause for that is that the Hadoop version in Flink's classpath is different from the
-Hadoop version of the cluster you want to connect to (HDFS / YARN).
-
-The easiest way to fix that is to pick a Hadoop-free Flink version and simply export the Hadoop path and
-classpath from the cluster.
+Common error messages are listed on the [Getting Help]({{ site.baseurl }}/gettinghelp.html#got-an-error-message) page.

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/features.md
----------------------------------------------------------------------
diff --git a/features.md b/features.md
deleted file mode 100755
index a3bb5d2..0000000
--- a/features.md
+++ /dev/null
@@ -1,338 +0,0 @@
----
-title: "Features"
-layout: features
----
-
-
-<!-- --------------------------------------------- -->
-<!--                Streaming
-<!-- --------------------------------------------- -->
-
-----
-
-<div class="row" style="padding: 0 0 0 0">
-  <div class="col-sm-12" style="text-align: center;">
-    <h1 id="streaming"><b>Streaming</b></h1>
-  </div>
-</div>
-
-----
-
-<!-- High Performance -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="performance"><i>High Performance & Low Latency</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-12">
-    <p class="lead">Flink's data streaming runtime achieves high throughput rates and low latency with little configuration.
-    The charts below show the performance of a distributed item counting task, requiring streaming data shuffles.</p>
-  </div>
-</div>
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12 img-column">
-    <img src="{{ site.baseurl }}/img/features/streaming_performance.png" alt="Performance of data streaming applications" style="width:75%" />
-  </div>
-</div>
-
-----
-
-<!-- Event Time Streaming -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="event_time"><i>Support for Event Time and Out-of-Order Events</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Flink supports stream processing and windowing with <b>Event Time</b> semantics.</p>
-    <p class="lead">Event time makes it easy to compute over streams where events arrive out of order, and where events may arrive delayed.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/out_of_order_stream.png" alt="Event Time and Out-of-Order Streams" style="width:100%" />
-  </div>
-</div>
-
-----
-
-<!-- Exactly-once Semantics -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="exactly_once"><i>Exactly-once Semantics for Stateful Computations</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Streaming applications can maintain custom state during their computation.</p>
-    <p class="lead">Flink's checkpointing mechanism ensures <i>exactly once</i> semantics for the state in the presence of failures.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/exactly_once_state.png" alt="Exactly-once Semantics for Stateful Computations" style="width:50%" />
-  </div>
-</div>
-
-----
-
-<!-- Windowing -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="windows"><i>Highly flexible Streaming Windows</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Flink supports windows over time, count, or sessions, as well as data-driven windows.</p>
-    <p class="lead">Windows can be customized with flexible triggering conditions, to support sophisticated streaming patterns.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/windows.png" alt="Windows" style="width:100%" />
-  </div>
-</div>
-
-----
-
-<!-- Continuous streaming -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="streaming_model"><i>Continuous Streaming Model with Backpressure</i></h1>
-  </div>
-</div>
-
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Data streaming applications are executed with continuous (long lived) operators.</p>
-    <p class="lead">Flink's streaming runtime has natural flow control: Slow data sinks backpressure faster sources.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/continuous_streams.png" alt="Continuous Streaming Model" style="width:60%" />
-  </div>
-</div>
-
-----
-
-<!-- Lightweight distributed snapshots -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="snapshots"><i>Fault-tolerance via Lightweight Distributed Snapshots</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Flink's fault tolerance mechanism is based on Chandy-Lamport distributed snapshots.</p>
-    <p class="lead">The mechanism is lightweight, allowing the system to maintain high throughput rates and provide strong consistency guarantees at the same time.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/distributed_snapshots.png" alt="Lightweight Distributed Snapshots" style="width:40%" />
-  </div>
-</div>
-
-----
-
-<!-- --------------------------------------------- -->
-<!--                Batch
-<!-- --------------------------------------------- -->
-
-<div class="row" style="padding: 0 0 0 0">
-  <div class="col-sm-12" style="text-align: center;">
-    <h1 id="batch-on-streaming"><b>Batch and Streaming in One System</b></h1>
-  </div>
-</div>
-
-----
-
-<!-- One Runtime for Streaming and Batch Processing -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="one_runtime"><i>One Runtime for Streaming and Batch Processing</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Flink uses one common runtime for data streaming applications and batch processing applications.</p>
-    <p class="lead">Batch processing applications run efficiently as special cases of stream processing applications.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/one_runtime.png" alt="Unified Runtime for Batch and Stream Data Analysis" style="width:50%" />
-  </div>
-</div>
-
-----
-
-
-<!-- Memory Management -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="memory_management"><i>Memory Management</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Flink implements its own memory management inside the JVM.</p>
-    <p class="lead">Applications scale to data sizes beyond main memory and experience less garbage collection overhead.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/memory_heap_division.png" alt="Managed JVM Heap" style="width:50%" />
-  </div>
-</div>
-
-----
-
-<!-- Iterations -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="iterations"><i>Iterations and Delta Iterations</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Flink has dedicated support for iterative computations (as in machine learning and graph analysis).</p>
-    <p class="lead">Delta iterations can exploit computational dependencies for faster convergence.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/iterations.png" alt="Performance of iterations and delta iterations" style="width:75%" />
-  </div>
-</div>
-
-----
-
-<!-- Optimizer -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="optimizer"><i>Program Optimizer</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Batch programs are automatically optimized to exploit situations where expensive operations (like shuffles and sorts) can be avoided, and when intermediate data should be cached.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/features/optimizer_choice.png" alt="Optimizer choosing between different execution strategies" style="width:100%" />
-  </div>
-</div>
-
-----
-
-<!-- --------------------------------------------- -->
-<!--             APIs and Libraries
-<!-- --------------------------------------------- -->
-
-<div class="row" style="padding: 0 0 0 0">
-  <div class="col-sm-12" style="text-align: center;">
-    <h1 id="apis-and-libs"><b>APIs and Libraries</b></h1>
-  </div>
-</div>
-
-----
-
-<!-- Data Streaming API -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="streaming_api"><i>Streaming Data Applications</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-5">
-    <p class="lead">The <i>DataStream</i> API supports functional transformations on data streams, with user-defined state, and flexible windows.</p>
-    <p class="lead">The example shows how to compute a sliding histogram of word occurrences of a data stream of texts.</p>
-  </div>
-  <div class="col-sm-7">
-    <p class="lead">WindowWordCount in Flink's DataStream API</p>
-{% highlight scala %}
-case class Word(word: String, freq: Long)
-
-val texts: DataStream[String] = ...
-
-val counts = text
-  .flatMap { line => line.split("\\W+") }
-  .map { token => Word(token, 1) }
-  .keyBy("word")
-  .timeWindow(Time.seconds(5), Time.seconds(1))
-  .sum("freq")
-{% endhighlight %}
-  </div>
-</div>
-
-----
-
-<!-- Batch Processing API -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="batch_api"><i>Batch Processing Applications</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-5">
-    <p class="lead">Flink's <i>DataSet</i> API lets you write beautiful type-safe and maintainable code in Java or Scala. It supports a wide range of data types beyond key/value pairs, and a wealth of operators.</p>
-    <p class="lead">The example shows the core loop of the PageRank algorithm for graphs.</p>
-  </div>
-  <div class="col-sm-7">
-{% highlight scala %}
-case class Page(pageId: Long, rank: Double)
-case class Adjacency(id: Long, neighbors: Array[Long])
-
-val result = initialRanks.iterate(30) { pages =>
-  pages.join(adjacency).where("pageId").equalTo("id") {
-
-    (page, adj, out: Collector[Page]) => {
-      out.collect(Page(page.pageId, 0.15 / numPages))
-
-      val nLen = adj.neighbors.length
-      for (n <- adj.neighbors) {
-        out.collect(Page(n, 0.85 * page.rank / nLen))
-      }
-    }
-  }
-  .groupBy("pageId").sum("rank")
-}
-{% endhighlight %}
-  </div>
-</div>
-
-----
-
-<!-- Library Ecosystem -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="libraries"><i>Library Ecosystem</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Flink's stack offers libraries with high-level APIs for different use cases: Complex Event Processing, Machine Learning, and Graph Analytics.</p>
-    <p class="lead">The libraries are currently in <i>beta</i> status and are heavily developed.</p>
-  </div>
-  <div class="col-sm-6 img-column">
-    <img src="{{ site.baseurl }}/img/flink-stack-frontpage.png" alt="Flink Stack with Libraries" style="width:100%" />
-  </div>
-</div>
-
-----
-
-<!-- --------------------------------------------- -->
-<!--             Ecosystem
-<!-- --------------------------------------------- -->
-
-<div class="row" style="padding: 0 0 0 0">
-  <div class="col-sm-12" style="text-align: center;">
-    <h1><b>Ecosystem</b></h1>
-  </div>
-</div>
-
-----
-
-<!-- Ecosystem -->
-<div class="row" style="padding: 0 0 2em 0">
-  <div class="col-sm-12">
-    <h1 id="ecosystem"><i>Broad Integration</i></h1>
-  </div>
-</div>
-<div class="row">
-  <div class="col-sm-6">
-    <p class="lead">Flink is integrated with many other projects in the open-source data processing ecosystem.</p>
-    <p class="lead">Flink runs on YARN, works with HDFS, streams data from Kafka, can execute Hadoop program code, and connects to various other data storage systems.</p>
-  </div>
-  <div class="col-sm-6  img-column">
-    <img src="{{ site.baseurl }}/img/features/ecosystem_logos.png" alt="Other projects that Flink is integrated with" style="width:75%" />
-  </div>
-</div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/flink-applications.md
----------------------------------------------------------------------
diff --git a/flink-applications.md b/flink-applications.md
new file mode 100644
index 0000000..162d1c4
--- /dev/null
+++ b/flink-applications.md
@@ -0,0 +1,202 @@
+---
+title: "What is Apache Flink?"
+---
+
+<hr/>
+<div class="row">
+  <div class="col-sm-12" style="background-color: #f8f8f8;">
+    <h2>
+      <a href="{{ site.baseurl }}/flink-architecture.html">Architecture</a> &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      Applications &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      <a href="{{ site.baseurl }}/flink-operations.html">Operations</a>
+    </h2>
+  </div>
+</div>
+<hr/>
+
+Apache Flink is a framework for stateful computations over unbounded and bounded data streams. Flink provides multiple APIs at different levels of abstraction and offers dedicated libraries for common use cases.
+
+Here, we present Flink's easy-to-use and expressive APIs and libraries.
+
+## Building Blocks for Streaming Applications
+
+The types of applications that can be built with and executed by a stream processing framework are defined by how well the framework controls *streams*, *state*, and *time*. In the following, we describe these building blocks for stream processing applications and explain Flink's approaches to handling them.
+
+### Streams
+
+Obviously, streams are a fundamental aspect of stream processing. However, streams can have different characteristics that affect how a stream can and should be processed. Flink is a versatile processing framework that can handle any kind of stream.
+
+* **Bounded** and **unbounded** streams: Streams can be unbounded or bounded, i.e., fixed-sized data sets. Flink has sophisticated features to process unbounded streams, but also dedicated operators to efficiently process bounded streams.
+* **Real-time** and **recorded** streams: All data are generated as streams, and there are two ways to process them: in real time, as they are generated, or by persisting the stream to a storage system, e.g., a file system or object store, and processing it later. Flink applications can process recorded or real-time streams.
+
+### State
+
+Every non-trivial streaming application is stateful, i.e., only applications that transform individual events independently of one another do not require state. Any application that runs basic business logic needs to remember events or intermediate results to access them at a later point in time, for example, when the next event is received or after a specific time duration.
+
+<div class="row front-graphic">
+  <img src="{{ site.baseurl }}/img/function-state.png" width="350px" />
+</div>
+
+Application state is a first-class citizen in Flink. You can see that by looking at all the features that Flink provides in the context of state handling.
+
+* **Multiple State Primitives**: Flink provides state primitives for different data structures, such as atomic values, lists, or maps. Developers can choose the state primitive that is most efficient based on the access pattern of the function (see the sketch after this list).
+* **Pluggable State Backends**: Application state is managed in and checkpointed by a pluggable state backend. Flink features different state backends that store state in memory or in [RocksDB](https://rocksdb.org/), an efficient embedded on-disk data store. Custom state backends can be plugged in as well.
+* **Exactly-once state consistency**: Flink's checkpointing and recovery algorithms guarantee the consistency of application state in case of a failure. Hence, failures are transparently handled and do not affect the correctness of an application.
+* **Very Large State**: Flink is able to maintain application state of several terabytes in size due to its asynchronous and incremental checkpoint algorithm.
+* **Scalable Applications**: Flink supports scaling of stateful applications by redistributing the state to more or fewer workers.
+
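+As a brief illustration of keyed state, the following sketch shows a function that, applied to a keyed stream, deduplicates events by remembering per key whether an event has been seen before. The `Event` type is an assumption made for this example; the state handle follows the DataStream API's keyed-state primitives.
+
+{% highlight java %}
+public static class DeduplicateFunction extends RichFlatMapFunction<Event, Event> {
+
+  // per-key flag; checkpointed and restored by Flink's state backend
+  private ValueState<Boolean> seen;
+
+  @Override
+  public void open(Configuration conf) {
+    seen = getRuntimeContext()
+      .getState(new ValueStateDescriptor<>("seen", Boolean.class));
+  }
+
+  @Override
+  public void flatMap(Event event, Collector<Event> out) throws Exception {
+    if (seen.value() == null) {
+      // first event for this key: remember it and forward the event
+      seen.update(true);
+      out.collect(event);
+    }
+  }
+}
+{% endhighlight %}
+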
+### Time
+
+Time is another important ingredient of streaming applications. Most event streams have inherent time semantics because each event is produced at a specific point in time. Moreover, many common stream computations are based on time, such as window aggregations, sessionization, pattern detection, and time-based joins. An important aspect of stream processing is how an application measures time, i.e., the difference between event time and processing time.
+
+Flink provides a rich set of time-related features.
+
+* **Event-time Mode**: Applications that process streams with event-time semantics compute results based on timestamps of the events. Thereby, event-time processing allows for accurate and consistent results regardless of whether recorded or real-time events are processed.
+* **Watermark Support**: Flink employs watermarks to reason about time in event-time applications (see the example after this list). Watermarks are also a flexible mechanism to trade off the latency and completeness of results.
+* **Late Data Handling**: When processing streams in event-time mode with watermarks, it can happen that a computation has been completed before all associated events have arrived. Such events are called late events. Flink features multiple options to handle late events, such as rerouting them via side outputs and updating previously completed results.
+* **Processing-time Mode**: In addition to its event-time mode, Flink also supports processing-time semantics which performs computations as triggered by the wall-clock time of the processing machine. The processing-time mode can be suitable for certain applications with strict low-latency requirements that can tolerate approximate results.
+
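+For example, timestamps and watermarks can be assigned to a stream as sketched below. The `SensorReading` type and its `timestamp` field are assumptions made for illustration; the extractor bounds the out-of-orderness of events to five seconds.
+
+{% highlight java %}
+DataStream<SensorReading> readings = ...
+
+DataStream<SensorReading> withTimestamps = readings
+  .assignTimestampsAndWatermarks(
+    // watermarks trail the largest observed timestamp by five seconds
+    new BoundedOutOfOrdernessTimestampExtractor<SensorReading>(Time.seconds(5)) {
+      @Override
+      public long extractTimestamp(SensorReading reading) {
+        return reading.timestamp; // epoch milliseconds
+      }
+    });
+{% endhighlight %}
+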
+## Layered APIs
+
+Flink provides three layered APIs. Each API offers a different trade-off between conciseness and expressiveness and targets different use cases.
+
+<div class="row front-graphic">
+  <img src="{{ site.baseurl }}/img/api-stack.png" width="500px" />
+</div>
+
+We briefly present each API, discuss its applications, and show a code example.
+
+### The ProcessFunctions
+
+[ProcessFunctions](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.html) are the most expressive function interfaces that Flink offers. Flink provides ProcessFunctions to process individual events from one or two input streams or events that were grouped in a window. ProcessFunctions provide fine-grained control over time and state. A ProcessFunction can arbitrarily modify its state and register timers that will trigger a callback function in the future. Hence, ProcessFunctions can implement complex per-event business logic as required for many [stateful event-driven applications]({{ site.baseurl }}/usecases.html#eventDrivenApps).
+
+The following example shows a `KeyedProcessFunction` that operates on a `KeyedStream` and matches `START` and `END` events. When a `START` event is received, the function remembers its timestamp in state and registers a timer four hours into the future. If an `END` event is received before the timer fires, the function computes the duration between the `END` and the `START` event, clears the state, and emits the result. Otherwise, the timer just fires and clears the state.
+
+{% highlight java %}
+/**
+ * Matches keyed START and END events and computes the difference between 
+ * both elements' timestamps. The first String field is the key attribute, 
+ * the second String attribute marks START and END events.
+ */
+public static class StartEndDuration
+    extends KeyedProcessFunction<String, Tuple2<String, String>, Tuple2<String, Long>> {
+
+  private ValueState<Long> startTime;
+
+  @Override
+  public void open(Configuration conf) {
+    // obtain state handle
+    startTime = getRuntimeContext()
+      .getState(new ValueStateDescriptor<Long>("startTime", Long.class));
+  }
+
+  /** Called for each processed event. */
+  @Override
+  public void processElement(
+      Tuple2<String, String> in,
+      Context ctx,
+      Collector<Tuple2<String, Long>> out) throws Exception {
+
+    switch (in.f1) {
+      case "START":
+        // set the start time if we receive a start event.
+        startTime.update(ctx.timestamp());
+        // register a timer four hours after the start event.
+        ctx.timerService()
+          .registerEventTimeTimer(ctx.timestamp() + 4 * 60 * 60 * 1000);
+        break;
+      case "END":
+        // emit the duration between start and end event
+        Long sTime = startTime.value();
+        if (sTime != null) {
+          out.collect(Tuple2.of(in.f0, ctx.timestamp() - sTime));
+          // clear the state
+          startTime.clear();
+        }
+        break;
+      default:
+        // do nothing
+    }
+  }
+
+  /** Called when a timer fires. */
+  @Override
+  public void onTimer(
+      long timestamp,
+      OnTimerContext ctx,
+      Collector<Tuple2<String, Long>> out) {
+
+    // Timeout interval exceeded. Cleaning up the state.
+    startTime.clear();
+  }
+}
+{% endhighlight %}
+
+The example illustrates the expressive power of the `KeyedProcessFunction` but also highlights that it is a rather verbose interface.
+
+### The DataStream API
+
+The [DataStream API](https://ci.apache.org/projects/flink/flink-docs-stable/dev/datastream_api.html) provides primitives for many common stream processing operations, such as windowing, record-at-a-time transformations, and enriching events by querying an external data store. The DataStream API is available for Java and Scala and is based on functions, such as `map()`, `reduce()`, and `aggregate()`. Functions can be defined by implementing interfaces or as Java or Scala lambda functions.
+
+The following example shows how to sessionize a clickstream and count the number of clicks per session.
+
+{% highlight java %}
+// a stream of website clicks
+DataStream<Click> clicks = ...
+
+DataStream<Tuple2<String, Long>> result = clicks
+  // project clicks to userId and add a 1 for counting
+  .map(
+    // define function by implementing the MapFunction interface.
+    new MapFunction<Click, Tuple2<String, Long>>() {
+      @Override
+      public Tuple2<String, Long> map(Click click) {
+        return Tuple2.of(click.userId, 1L);
+      }
+    })
+  // key by userId (field 0)
+  .keyBy(0)
+  // define session window with 30 minute gap
+  .window(EventTimeSessionWindows.withGap(Time.minutes(30L)))
+  // count clicks per session. Define function as lambda function.
+  .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1));
+{% endhighlight %}
+
+### SQL &amp; Table API
+
+Flink features two relational APIs, the [Table API and SQL](https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/index.html). Both APIs are unified APIs for batch and stream processing, i.e., queries are executed with the same semantics on unbounded, real-time streams or bounded, recorded streams and produce the same results. The Table API and SQL leverage [Apache Calcite](https://calcite.apache.org) for parsing, validation, and query optimization. They can be seamlessly integrated with the DataStream and DataSet APIs and support user-defined scalar, aggregate, and table-valued functions. 
+
+Flink's relational APIs are designed to ease the definition of [data analytics]({{ site.baseurl }}/usecases.html#analytics), [data pipelining, and ETL applications]({{ site.baseurl }}/usecases.html#pipelines).
+
+The following example shows the SQL query to sessionize a clickstream and count the number of clicks per session. This is the same use case as in the example of the DataStream API.
+
+~~~sql
+SELECT userId, COUNT(*)
+FROM clicks
+GROUP BY SESSION(clicktime, INTERVAL '30' MINUTE), userId
+~~~
+
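+The query above assumes that `clicks` has been registered as a table. A minimal sketch of how a `DataStream` could be registered and the query executed is shown below; the `clickStream` variable and the field names are assumptions made for illustration.
+
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// expose the stream as table "clicks" with clicktime as event-time attribute
+tableEnv.registerDataStream("clicks", clickStream, "userId, clicktime.rowtime");
+
+Table sessionCounts = tableEnv.sqlQuery(
+  "SELECT userId, COUNT(*) FROM clicks " +
+  "GROUP BY SESSION(clicktime, INTERVAL '30' MINUTE), userId");
+{% endhighlight %}
+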
+## Libraries
+
+Flink features several libraries for common data processing use cases. The libraries are typically embedded in an API and not fully self-contained. Hence, they can benefit from all features of the API and be integrated with other libraries.
+
+* **[Complex Event Processing (CEP)](https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/cep.html)**: Pattern detection is a very common use case for event stream processing. Flink's CEP library provides an API to specify patterns of events (think of regular expressions or state machines). The CEP library is integrated with Flink's DataStream API, such that patterns are evaluated on DataStreams. Applications for the CEP library include network intrusion detection, business process monitoring, and fraud detection (see the sketch after this list).
+  
+* **[DataSet API](https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/index.html)**: The DataSet API is Flink's core API for batch processing applications. The primitives of the DataSet API include *map*, *reduce*, *(outer) join*, *co-group*, and *iterate*. All operations are backed by algorithms and data structures that operate on serialized data in memory and spill to disk if the data size exceeds the memory budget. The data processing algorithms of Flink's DataSet API are inspired by traditional database operators, such as hybrid hash-join or external merge-sort.
+  
+* **[Gelly](https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/gelly/index.html)**: Gelly is a library for scalable graph processing and analysis. Gelly is implemented on top of and integrated with the DataSet API. Hence, it benefits from the DataSet API's scalable and robust operators. Gelly features [built-in algorithms](https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/gelly/library_methods.html), such as label propagation, triangle enumeration, and PageRank, but also provides a [Graph API](https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/gelly/graph_api.html) that eases the implementation of custom graph algorithms.
+
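+To give a taste of the CEP library's pattern API, the following sketch detects three consecutive failed login attempts of the same user within one minute. The `LoginEvent` type and its accessors are assumptions made for illustration.
+
+{% highlight java %}
+Pattern<LoginEvent, ?> failures = Pattern.<LoginEvent>begin("fails")
+  // match login failures ...
+  .where(new SimpleCondition<LoginEvent>() {
+    @Override
+    public boolean filter(LoginEvent event) {
+      return !event.isSuccess();
+    }
+  })
+  // ... three times, strictly one after another ...
+  .times(3).consecutive()
+  // ... within one minute.
+  .within(Time.minutes(1));
+
+PatternStream<LoginEvent> alerts =
+  CEP.pattern(loginEvents.keyBy(event -> event.getUserId()), failures);
+{% endhighlight %}
+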
+<hr/>
+<div class="row">
+  <div class="col-sm-12" style="background-color: #f8f8f8;">
+    <h2>
+      <a href="{{ site.baseurl }}/flink-architecture.html">Architecture</a> &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      Applications &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      <a href="{{ site.baseurl }}/flink-operations.html">Operations</a>
+    </h2>
+  </div>
+</div>
+<hr/>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/flink-architecture.md
----------------------------------------------------------------------
diff --git a/flink-architecture.md b/flink-architecture.md
new file mode 100644
index 0000000..a3df78f
--- /dev/null
+++ b/flink-architecture.md
@@ -0,0 +1,100 @@
+---
+title: "What is Apache Flink?"
+---
+
+<hr/>
+<div class="row">
+  <div class="col-sm-12" style="background-color: #f8f8f8;">
+    <h2>
+      Architecture &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      <a href="{{ site.baseurl }}/flink-applications.html">Applications</a> &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      <a href="{{ site.baseurl }}/flink-operations.html">Operations</a>
+    </h2>
+  </div>
+</div>
+<hr/>
+
+Apache Flink is a framework and distributed processing engine for stateful computations over *unbounded and bounded* data streams. Flink has been designed to run in *all common cluster environments*, perform computations at *in-memory speed* and at *any scale*.
+
+Here, we explain important aspects of Flink's architecture.
+
+<!--
+<div class="row front-graphic">
+  <img src="{{ site.baseurl }}/img/flink-home-graphic-update3.png" width="800px" />
+</div>
+-->
+
+## Process Unbounded and Bounded Data
+
+Any kind of data is produced as a stream of events: credit card transactions, sensor measurements, machine logs, and user interactions on a website or mobile application are all generated as streams.
+
+Data can be processed as *unbounded* or *bounded* streams. 
+
+1. **Unbounded streams** have a start but no defined end. They do not terminate and provide data as it is generated. Unbounded streams must be continuously processed, i.e., events must be promptly handled after they have been ingested. It is not possible to wait for all input data to arrive because the input is unbounded and will not be complete at any point in time. Processing unbounded data often requires that events are ingested in a specific order, such as the order in which events occurred, to be able to reason about result completeness.
+
+2. **Bounded streams** have a defined start and end. Bounded streams can be processed by ingesting all data before performing any computations. Ordered ingestion is not required to process bounded streams because a bounded data set can always be sorted. Processing of bounded streams is also known as batch processing.
+
+<div class="row front-graphic">
+  <img src="{{ site.baseurl }}/img/bounded-unbounded.png" width="600px" />
+</div>
+
+**Apache Flink excels at processing unbounded and bounded data sets.** Precise control of time and state enables Flink's runtime to run any kind of application on unbounded streams. Bounded streams are internally processed by algorithms and data structures that are specifically designed for fixed-sized data sets, yielding excellent performance.
+
+Convince yourself by exploring the [use cases]({{ site.baseurl }}/usecases.html) that have been built on top of Flink.
+
+## Deploy Applications Anywhere
+
+Apache Flink is a distributed system and requires compute resources in order to execute applications. Flink integrates with all common cluster resource managers such as [Hadoop YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html), [Apache Mesos](https://mesos.apache.org), and [Kubernetes](https://kubernetes.io/) but can also be set up to run as a stand-alone cluster.
+
+Flink is designed to work well with each of the previously listed resource managers. This is achieved by resource-manager-specific deployment modes that allow Flink to interact with each resource manager in its idiomatic way.
+
+When deploying a Flink application, Flink automatically identifies the required resources based on the application's configured parallelism and requests them from the resource manager. In case of a failure, Flink replaces the failed container by requesting new resources. All communication to submit or control an application happens via REST calls. This eases the integration of Flink in many environments. 
+
+<!-- Add this section once library deployment mode is supported. -->
+<!--
+
+Flink features two deployment modes for applications, the *framework mode* and the *library mode*.
+
+* In the **framework deployment mode**, a client submits a Flink application against a running Flink service that takes care of executing the application. This is the common deployment model for most data processing frameworks, query engines, or database systems.
+
+* In the **library deployment mode**, a Flink application is packaged together with the Flink master executables into a (Docker) image. Another job-independent image contains the Flink worker executables. When a container is started from the job image, the Flink master process is started and the embedded application is automatically loaded. Containers started from the worker image, bootstrap Flink worker processes which automatically connect to the master process. A container manager such as Kubernetes monitors the running containers and automatically restarts failed containers. In this mode, you don't have to setup and maintain a Flink service in your cluster. Instead you package Flink as a library with your application. This model is very popular for deploying microservices. 
+
+<div class="row front-graphic">
+  <img src="{{ site.baseurl }}/img/deployment-modes.png" width="600px" />
+</div>
+
+-->
+
+## Run Applications at any Scale
+
+Flink is designed to run stateful streaming applications at any scale. Applications are parallelized into possibly thousands of tasks that are distributed and concurrently executed in a cluster. Therefore, an application can leverage virtually unlimited amounts of CPU, main memory, disk, and network I/O. Moreover, Flink easily maintains very large application state. Its asynchronous and incremental checkpointing algorithm ensures minimal impact on processing latencies while guaranteeing exactly-once state consistency.
+
+[Users reported impressive scalability numbers]({{ site.baseurl }}/poweredby.html) for Flink applications running in their production environments, such as
+
+* applications processing **multiple trillions of events per day**,
+* applications maintaining **multiple terabytes of state**, and
+* applications **running on thousands of cores**.
+
+## Leverage In-Memory Performance
+
+Stateful Flink applications are optimized for local state access. Task state is always maintained in memory or, if the state size exceeds the available memory, in access-efficient on-disk data structures. Hence, tasks perform all computations by accessing local, often in-memory, state yielding very low processing latencies. Flink guarantees exactly-once state consistency in case of failures by periodically and asynchronously checkpointing the local state to durable storage.
+
+<div class="row front-graphic">
+  <img src="{{ site.baseurl }}/img/local-state.png" width="600px" />
+</div>
+
+<hr/>
+<div class="row">
+  <div class="col-sm-12" style="background-color: #f8f8f8;">
+    <h2>
+      Architecture &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      <a href="{{ site.baseurl }}/flink-applications.html">Applications</a> &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      <a href="{{ site.baseurl }}/flink-operations.html">Operations</a>
+    </h2>
+  </div>
+</div>
+<hr/>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/flink-operations.md
----------------------------------------------------------------------
diff --git a/flink-operations.md b/flink-operations.md
new file mode 100644
index 0000000..a2fa461
--- /dev/null
+++ b/flink-operations.md
@@ -0,0 +1,72 @@
+---
+title: "What is Apache Flink?"
+---
+
+<hr/>
+<div class="row">
+  <div class="col-sm-12" style="background-color: #f8f8f8;">
+    <h2>
+      <a href="{{ site.baseurl }}/flink-architecture.html">Architecture</a> &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      <a href="{{ site.baseurl }}/flink-applications.html">Applications</a> &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      Operations
+    </h2>
+  </div>
+</div>
+<hr/>
+
+Apache Flink is a framework for stateful computations over unbounded and bounded data streams. Since many streaming applications are designed to run continuously with minimal downtime, a stream processor must provide excellent failure recovery, as well as tooling to monitor and maintain applications while they are running.
+
+Apache Flink puts a strong focus on the operational aspects of stream processing. Here, we explain Flink's failure recovery mechanism and present its features to manage and supervise running applications.
+
+## Run Your Applications Non-Stop 24/7
+
+Machine and process failures are ubiquitous in distributed systems. A distributed stream processor like Flink must recover from failures in order to be able to run streaming applications 24/7. This means not only restarting an application after a failure, but also ensuring that its internal state remains consistent, such that the application can continue processing as if the failure had never happened.
+
+Flink provides several features to ensure that applications keep running and remain consistent:
+
+* **Consistent Checkpoints**: Flink's recovery mechanism is based on consistent checkpoints of an application's state. In case of a failure, the application is restarted and its state is loaded from the latest checkpoint. In combination with resettable stream sources, this feature can guarantee *exactly-once state consistency*.
+* **Efficient Checkpoints**: Checkpointing the state of an application can be quite expensive if the application maintains terabytes of state. Flink can perform asynchronous and incremental checkpoints in order to keep the impact of checkpoints on an application's latency SLAs very small (see the snippet after this list).
+* **End-to-End Exactly-Once**: Flink features transactional sinks for specific storage systems that guarantee that data is only written out exactly once, even in case of failures.
+* **Integration with Cluster Managers**: Flink is tightly integrated with cluster managers, such as [Hadoop YARN](https://hadoop.apache.org), [Mesos](https://mesos.apache.org), or [Kubernetes](https://kubernetes.io). When a process fails, a new process is automatically started to take over its work. 
+* **High-Availability Setup**: Flink features a high-availability mode that eliminates all single points of failure. The HA mode is based on [Apache ZooKeeper](https://zookeeper.apache.org), a battle-proven service for reliable distributed coordination.
+
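+As a sketch, checkpointing could be enabled for an application as shown below; the ten-second interval is an arbitrary value chosen for illustration.
+
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// draw a consistent snapshot of all operator state every 10 seconds,
+// with exactly-once guarantees for the application state
+env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE);
+{% endhighlight %}
+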
+## Update, Migrate, Suspend, and Resume Your Applications
+
+Streaming applications that power business-critical services need to be maintained. Bugs need to be fixed, and improvements or new features need to be implemented. However, updating a stateful streaming application is not trivial. Often, one cannot simply stop the application and restart a fixed or improved version, because one cannot afford to lose the application's state.
+
+Flink's *Savepoints* are a unique and powerful feature that solves the issue of updating stateful applications and many other related challenges. A savepoint is a consistent snapshot of an application's state and therefore very similar to a checkpoint. However, in contrast to checkpoints, savepoints need to be triggered manually and are not automatically removed when an application is stopped. A savepoint can be used to start a state-compatible application and initialize its state. Savepoints enable the following features (a sketch of keeping an application savepoint-compatible follows the list):
+
+* **Application Evolution**: Savepoints can be used to evolve applications. A fixed or improved version of an application can be restarted from a savepoint that was taken from a previous version of the application. It is also possible to start the application from an earlier point in time (given such a savepoint exists) to repair incorrect results produced by the flawed version.
+* **Cluster Migration**: Using savepoints, applications can be migrated (or cloned) to different clusters.
+* **Flink Version Updates**: An application can be migrated to run on a new Flink version using a savepoint.
+* **Application Scaling**: Savepoints can be used to increase or decrease the parallelism of an application.
+* **A/B Tests and What-If Scenarios**: The performance or quality of two (or more) different versions of an application can be compared by starting all versions from the same savepoint. 
+* **Pause and Resume**: An application can be paused by taking a savepoint and stopping it. At any later point in time, the application can be resumed from the savepoint.
+* **Archiving**: Savepoints can be archived to be able to reset the state of an application to an earlier point in time.
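+
+Most of these features require that the operators of the restarted application can be matched to the state stored in the savepoint. A common way to keep applications savepoint-compatible is to assign stable unique IDs to stateful operators with `uid()`. A minimal sketch, assuming a hypothetical `events` stream and `StatefulCounter` function:
+
+~~~scala
+import org.apache.flink.streaming.api.scala._
+
+// Explicit IDs allow a new version of the application to be matched
+// against the operator state stored in a savepoint.
+val counted = events
+  .keyBy(e => e.key)
+  .map(new StatefulCounter())
+  .uid("stateful-counter") // stable ID that survives code refactorings
+~~~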
+
+## Monitor and Control Your Applications
+
+Just like any other service, continuously running streaming applications need to be supervised and integrated into an organization's operations infrastructure, i.e., its monitoring and logging services. Monitoring helps to anticipate problems and react ahead of time. Logging enables root-cause analysis of failures. Finally, easily accessible interfaces to control running applications are an important feature.
+
+Flink integrates nicely with many common logging and monitoring services and provides a REST API to control applications and query information.
+
+* **Web UI**: Flink features a web UI to inspect, monitor, and debug running applications. It can also be used to submit applications for execution or to cancel them.
+* **Logging**: Flink implements the popular SLF4J logging interface and integrates with the logging frameworks [log4j](https://logging.apache.org/log4j/2.x/) and [logback](https://logback.qos.ch/).
+* **Metrics**: Flink features a sophisticated metrics system to collect and report system and user-defined metrics. Metrics can be exported to several reporters, including [JMX](https://en.wikipedia.org/wiki/Java_Management_Extensions), Ganglia, [Graphite](https://graphiteapp.org/), [Prometheus](https://prometheus.io/), [StatsD](https://github.com/etsy/statsd), [Datadog](https://www.datadoghq.com/), and [Slf4j](https://www.slf4j.org/). A sketch of registering a user-defined metric follows this list.
+* **REST API**: Flink exposes a REST API to submit a new application, take a savepoint of a running application, or cancel an application. The REST API also exposes metadata and collected metrics of running or completed applications.
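+
+As a small illustration of the metrics system, the sketch below registers a user-defined counter inside a function; the class and metric names are made up for this example.
+
+~~~scala
+import org.apache.flink.api.common.functions.RichMapFunction
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.metrics.Counter
+
+class CountingMapper extends RichMapFunction[String, String] {
+  @transient private var eventCounter: Counter = _
+
+  override def open(parameters: Configuration): Unit = {
+    // The counter is reported through all configured metrics reporters.
+    eventCounter = getRuntimeContext.getMetricGroup.counter("eventsMapped")
+  }
+
+  override def map(value: String): String = {
+    eventCounter.inc()
+    value
+  }
+}
+~~~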
+
+<hr/>
+<div class="row">
+  <div class="col-sm-12" style="background-color: #f8f8f8;">
+    <h2>
+      <a href="{{ site.baseurl }}/flink-architecture.html">Architecture</a> &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      <a href="{{ site.baseurl }}/flink-applications.html">Applications</a> &nbsp;
+      <span class="glyphicon glyphicon-chevron-right"></span> &nbsp;
+      Operations
+    </h2>
+  </div>
+</div>
+<hr/>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/gettinghelp.md
----------------------------------------------------------------------
diff --git a/gettinghelp.md b/gettinghelp.md
new file mode 100644
index 0000000..3cf2303
--- /dev/null
+++ b/gettinghelp.md
@@ -0,0 +1,133 @@
+---
+title: "Getting Help"
+---
+
+<hr />
+
+{% toc %}
+
+## Having a Question?
+
+The Apache Flink community answers many user questions every day. You can search for answers and advice in the archives or reach out to the community for help and guidance.
+
+### User Mailing List
+
+Many Flink users, contributors, and committers are subscribed to Flink's user mailing list. The user mailing list is a very good place to ask for help. 
+
+Before posting to the mailing list, you can search the following archives for email threads that discuss issues related to yours:
+
+- [Apache Pony Mail Archive](https://lists.apache.org/list.html?user@flink.apache.org)
+- [Nabble Archive](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/)
+
+If you'd like to post to the mailing list, you need to
+
+1. subscribe to the mailing list by sending an email to `user-subscribe@flink.apache.org`, 
+2. confirm the subscription by replying to the confirmation email, and
+3. send your email to `user@flink.apache.org`.
+
+Please note that you won't receive a response to your mail if you are not subscribed.
+
+### Stack Overflow
+
+Many members of the Flink community are active on [Stack Overflow](https://stackoverflow.com). You can search for questions and answers or post your questions using the [\[apache-flink\]](https://stackoverflow.com/questions/tagged/apache-flink) tag. 
+
+## Found a Bug?
+
+If you observe unexpected behavior that might be caused by a bug, you can search for reported bugs or file a bug report in [Flink's JIRA](https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK).
+
+If you are unsure whether the unexpected behavior happened due to a bug or not, please post a question to the [user mailing list](#user-mailing-list).
+
+## Got an Error Message?
+
+Identifying the cause for an error message can be challenging. In the following, we list the most common error messages and explain how to handle them.
+
+### I have a NotSerializableException.
+
+Flink uses Java serialization to distribute copies of the application logic (the functions and operations you implement,
+as well as the program configuration, etc.) to the parallel worker processes.
+Because of that, all functions that you pass to the API must be serializable, as defined by
+[java.io.Serializable](http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html).
+
+If your function is an anonymous inner class, consider the following:
+  - make the function a standalone class, or a static inner class
+  - use a Java 8 lambda function.
+
+If your function is already a static class, check the fields that you assign when you create
+an instance of the class. One of the fields most likely holds a non-serializable type.
+  - In Java, use a `RichFunction` and initialize the problematic fields in the `open()` method.
+  - In Scala, you can often simply use a `lazy val` to defer initialization until the distributed execution happens. This may come at a minor performance cost. You can naturally also use a `RichFunction` in Scala, as sketched below.
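+
+A minimal sketch of the `RichFunction` approach, assuming a hypothetical non-serializable `HeavyParser` helper class:
+
+~~~scala
+import org.apache.flink.api.common.functions.RichMapFunction
+import org.apache.flink.configuration.Configuration
+
+class ParsingMapper extends RichMapFunction[String, String] {
+  // Not serializable, so it must not be shipped as part of the function object.
+  @transient private var parser: HeavyParser = _
+
+  override def open(parameters: Configuration): Unit = {
+    // Created on the worker, after the function has been deserialized.
+    parser = new HeavyParser()
+  }
+
+  override def map(value: String): String = parser.parse(value)
+}
+~~~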
+
+### Using the Scala API, I get an error about implicit values and evidence parameters.
+
+This error means that the implicit value for the type information could not be provided.
+Make sure that you have an `import org.apache.flink.streaming.api.scala._` (DataStream API) or an
+`import org.apache.flink.api.scala._` (DataSet API) statement in your code.
+
+If you are using Flink operations inside functions or classes that take
+generic parameters, then a TypeInformation must be available for that parameter.
+This can be achieved by using a context bound:
+
+~~~scala
+def myFunction[T: TypeInformation](input: DataSet[T]): DataSet[Seq[T]] = {
+  input.reduceGroup( i => i.toSeq )
+}
+~~~
+
+See [Type Extraction and Serialization]({{ site.docs-snapshot }}/dev/types_serialization.html) for
+an in-depth discussion of how Flink handles types.
+
+### I see a ClassCastException: X cannot be cast to X.
+
+When you see an exception in the style `com.foo.X` cannot be cast to `com.foo.X` (or cannot be assigned to `com.foo.X`), it means that
+multiple versions of the class `com.foo.X` have been loaded by different class loaders, and an object of one version is being assigned to a reference typed with the other.
+
+The reason for that can be:
+
+  - Class duplication through `child-first` classloading. That is an intended mechanism to allow users to use different versions of the same
+    dependencies that Flink uses. However, if different copies of these classes move between Flink's core and the user application code, such an exception
+    can occur. To verify that this is the reason, try setting `classloader.resolve-order: parent-first` in the configuration.
+    If that makes the error disappear, please write to the mailing list to check if that may be a bug.
+
+  - Caching of classes from different execution attempts, for example by utilities like Guava's Interners or Avro's Schema cache.
+    Try not to use interners, or reduce the scope of the interner/cache to make sure a new cache is created whenever a new task
+    execution is started.
+
+### I have an AbstractMethodError or NoSuchFieldError.
+
+Such errors typically indicate a dependency version mix-up: a different version of a dependency (a library)
+is loaded during execution than the version the code was compiled against.
+
+From Flink 1.4.0 on, dependencies in your application JAR file may have different versions compared to dependencies used
+by Flink's core, or other dependencies in the classpath (for example from Hadoop). That requires `child-first` classloading
+to be activated, which is the default.
+
+If you see these problems in Flink 1.4+, one of the following may be true:
+  - You have a dependency version conflict within your application code. Make sure all your dependency versions are consistent.
+  - You are conflicting with a library that Flink cannot support via `child-first` classloading. Currently these are the
+    Scala standard library classes, as well as Flink's own classes, logging APIs, and any Hadoop core classes.
+
+
+### My DataStream application produces no output, even though events are going in.
+
+If your DataStream application uses *Event Time*, check that your watermarks get updated. If no watermarks are produced,
+event time windows might never trigger, and the application would produce no results.
+
+You can check in Flink's web UI (watermarks section) whether watermarks are making progress.
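+
+If the watermarks are stuck, the watermark assignment is a good place to start looking. A minimal sketch, assuming a hypothetical event type `MyEvent` with a `timestamp` field and streams that arrive at most 5 seconds out of order (this also requires the event-time characteristic to be set on the environment):
+
+~~~scala
+import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
+import org.apache.flink.streaming.api.windowing.time.Time
+
+val withTimestamps = events.assignTimestampsAndWatermarks(
+  new BoundedOutOfOrdernessTimestampExtractor[MyEvent](Time.seconds(5)) {
+    // Watermarks can only advance if the extracted timestamps advance.
+    override def extractTimestamp(event: MyEvent): Long = event.timestamp
+  })
+~~~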
+
+### I see an exception reporting "Insufficient number of network buffers".
+
+If you run Flink with a very high parallelism, you may need to increase the number of network buffers.
+
+By default, Flink takes 10% of the JVM heap size for network buffers, with a minimum of 64MB and a maximum of 1GB.
+You can adjust all these values via `taskmanager.network.memory.fraction`, `taskmanager.network.memory.min`, and
+`taskmanager.network.memory.max`.
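+
+For example, the bounds could be raised in `flink-conf.yaml` as follows (values in bytes; purely illustrative, not recommendations):
+
+~~~yaml
+taskmanager.network.memory.fraction: 0.15
+taskmanager.network.memory.min: 67108864
+taskmanager.network.memory.max: 2147483648
+~~~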
+
+Please refer to the [Configuration Reference]({{ site.docs-snapshot }}/ops/config.html#configuring-the-network-buffers) for details.
+
+### My job fails with various exceptions from the HDFS/Hadoop code. What can I do?
+
+The most common cause is that the Hadoop version in Flink's classpath differs from the
+Hadoop version of the cluster you want to connect to (HDFS / YARN).
+
+The easiest way to fix that is to pick a Hadoop-free Flink distribution and simply export the Hadoop
+classpath from the cluster, as sketched below.
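+
+On many setups, the following is sufficient before starting a Hadoop-free Flink distribution (assuming the `hadoop` command is on the path):
+
+~~~bash
+# Put the cluster's Hadoop jars and configuration on Flink's classpath.
+export HADOOP_CLASSPATH=`hadoop classpath`
+~~~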

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/how-to-contribute.md
----------------------------------------------------------------------
diff --git a/how-to-contribute.md b/how-to-contribute.md
index 82ac352..efffe26 100644
--- a/how-to-contribute.md
+++ b/how-to-contribute.md
@@ -1,6 +1,9 @@
 ---
 title: "How To Contribute"
 ---
+
+<hr />
+
 Apache Flink is developed by an open and friendly community. Everybody is cordially welcome to join the community and contribute to Apache Flink. There are several ways to interact with the community and to contribute to Flink including asking questions, filing bug reports, proposing new features, joining discussions on the mailing lists, contributing code or documentation, improving the website, or testing release candidates.
 
 {% toc %}

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/img/api-stack.png
----------------------------------------------------------------------
diff --git a/img/api-stack.png b/img/api-stack.png
new file mode 100644
index 0000000..4de2c94
Binary files /dev/null and b/img/api-stack.png differ

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/img/bounded-unbounded.png
----------------------------------------------------------------------
diff --git a/img/bounded-unbounded.png b/img/bounded-unbounded.png
new file mode 100644
index 0000000..29dfe8a
Binary files /dev/null and b/img/bounded-unbounded.png differ

http://git-wip-us.apache.org/repos/asf/flink-web/blob/cbefc2e9/img/deployment-modes.png
----------------------------------------------------------------------
diff --git a/img/deployment-modes.png b/img/deployment-modes.png
new file mode 100644
index 0000000..b3f913d
Binary files /dev/null and b/img/deployment-modes.png differ