Posted to commits@flink.apache.org by mx...@apache.org on 2015/06/24 14:17:32 UTC

[1/2] flink-web git commit: add release announcement for 0.9.0

Repository: flink-web
Updated Branches:
  refs/heads/asf-site 0063b16ea -> 487049aa0


add release announcement for 0.9.0


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/69116233
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/69116233
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/69116233

Branch: refs/heads/asf-site
Commit: 69116233097c1ccbf1bd83b5a0686c6e34a570d1
Parents: 0063b16
Author: Maximilian Michels <mx...@apache.org>
Authored: Wed Jun 24 14:15:53 2015 +0200
Committer: Maximilian Michels <mx...@apache.org>
Committed: Wed Jun 24 14:15:53 2015 +0200

----------------------------------------------------------------------
 ...-24-announcing-apache-flink-0.9.0-release.md | 188 +++++++++++++++++++
 1 file changed, 188 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/69116233/_posts/2015-06-24-announcing-apache-flink-0.9.0-release.md
----------------------------------------------------------------------
diff --git a/_posts/2015-06-24-announcing-apache-flink-0.9.0-release.md b/_posts/2015-06-24-announcing-apache-flink-0.9.0-release.md
new file mode 100644
index 0000000..b6b9b88
--- /dev/null
+++ b/_posts/2015-06-24-announcing-apache-flink-0.9.0-release.md
@@ -0,0 +1,188 @@
+---
+layout: post
+title:  'Announcing Apache Flink 0.9.0'
+date:   2015-06-24 14:00:00
+categories: news
+---
+
+The Apache Flink community is pleased to announce the availability of the 0.9.0 release. The release is the result of many months of hard work within the Flink community. It contains many new features and improvements which were previewed in the 0.9.0-milestone1 release and have been polished since then. This is the largest Flink release so far.
+
+[Download the release](http://flink.apache.org/downloads.html) and check out [the documentation](http://ci.apache.org/projects/flink/flink-docs-release-0.9/). Feedback through the Flink [mailing lists](http://flink.apache.org/community.html#mailing-lists) is, as always, very welcome!
+
+## New Features
+
+### Exactly-once Fault Tolerance for streaming programs
+
+This release introduces a new fault tolerance mechanism for streaming dataflows. The new checkpointing algorithm takes data sources and also user-defined state into account and recovers from failures such that all records are reflected exactly once in the operator states.
+
+The checkpointing algorithm is lightweight and driven by barriers that are periodically injected into the data streams at the sources. As such, it has an extremely low coordination overhead and is able to sustain very high throughput rates. User-defined state can be automatically backed up to configurable storage by the fault tolerance mechanism.
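+
+As a minimal sketch (in Scala), a user-defined function can expose its state to the checkpointing mechanism via the `Checkpointed` interface, and checkpointing is switched on at the environment; the interval and class names below follow the 0.9 API, but treat the details as illustrative rather than authoritative:
+
+```scala
+import org.apache.flink.api.common.functions.MapFunction
+import org.apache.flink.streaming.api.checkpoint.Checkpointed
+import org.apache.flink.streaming.api.scala._
+
+// A counting mapper whose state is included in every checkpoint and
+// restored on recovery, so each record is counted exactly once.
+class CountingMap extends MapFunction[String, (String, Long)]
+    with Checkpointed[java.lang.Long] {
+
+  private var count: Long = 0L
+
+  override def map(value: String): (String, Long) = {
+    count += 1
+    (value, count)
+  }
+
+  override def snapshotState(checkpointId: Long, timestamp: Long): java.lang.Long = count
+
+  override def restoreState(state: java.lang.Long): Unit = { count = state }
+}
+
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+env.enableCheckpointing(5000) // draw a distributed snapshot every 5 seconds
+```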
+
+Please refer to [the documentation on stateful computation](http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/streaming_guide.html#stateful-computation) for details on how to use fault-tolerant data streams with Flink.
+
+The fault tolerance mechanism requires data sources that can replay recent parts of the stream, such as [Apache Kafka](http://kafka.apache.org). Read more [about how to use the persistent Kafka source](http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/streaming_guide.html#apache-kafka).
+
+### Table API
+
+Flink’s new Table API offers a higher-level abstraction for interacting with structured data sources. The Table API allows users to execute logical, SQL-like queries on distributed data sets while allowing them to freely mix declarative queries with regular Flink operators. Here is an example that groups and joins two tables:
+
+```scala
+val clickCounts = clicks
+  .groupBy('userId).select('userId, 'url.count as 'count)
+
+val activeUsers = users.join(clickCounts)
+  .where('id === 'userId && 'count > 10).select('username, 'count, ...)
+```
+
+Tables consist of logical attributes that can be selected by name rather than physical Java and Scala data types. This alleviates a lot of boilerplate code for common ETL tasks and raises the abstraction for Flink programs. Tables are available for both static and streaming data sources (DataSet and DataStream APIs).
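+
+For example, a `DataSet` of case classes can be converted into a `Table`, queried, and converted back; a short sketch along the lines of the 0.9 Table guide:
+
+```scala
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.table._
+
+case class WC(word: String, count: Int)
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+val input = env.fromElements(WC("hello", 1), WC("world", 2), WC("hello", 3))
+
+// Convert to a Table, aggregate by word, and convert back to a DataSet.
+val result = input.toTable
+  .groupBy('word)
+  .select('word, 'count.sum as 'count)
+  .toDataSet[WC]
+```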
+
+[Check out the Table guide for Java and Scala](http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/table.html).
+
+### Gelly Graph Processing API
+
+Gelly is a Java Graph API for Flink. It contains a set of utilities for graph analysis, support for iterative graph processing, and a library of graph algorithms. Gelly exposes a Graph data structure that wraps DataSets for vertices and edges, methods for creating graphs from DataSets, graph transformations and utilities (e.g., in- and out-degrees of vertices), neighborhood aggregations, and iterative vertex-centric graph processing, as well as a library of common graph algorithms, including PageRank, SSSP, label propagation, and community detection.
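+
+Gelly ships as a Java API, but it can be used from Scala as well. A minimal sketch, assuming the `Graph.fromCollection` and `inDegrees` methods as described in the 0.9 Gelly guide:
+
+```scala
+import java.lang.{Long => JLong}
+import org.apache.flink.api.java.ExecutionEnvironment
+import org.apache.flink.graph.{Edge, Graph}
+import org.apache.flink.types.NullValue
+import scala.collection.JavaConverters._
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// A tiny graph given by its edges; vertex values default to NullValue.
+val edges = Seq(
+  new Edge[JLong, NullValue](1L, 2L, NullValue.getInstance),
+  new Edge[JLong, NullValue](2L, 3L, NullValue.getInstance)).asJava
+
+val graph = Graph.fromCollection(edges, env)
+
+// Per-vertex in-degrees, computed as a regular DataSet program.
+graph.inDegrees().print()
+```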
+
+Gelly internally builds on top of Flink’s [delta iterations](http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/iterations.html). Iterative graph algorithms are executed leveraging mutable state, achieving performance similar to that of specialized graph processing systems.
+
+Gelly will eventually subsume Spargel, Flink’s Pregel-like API.
+
+Note: The Gelly library is still in beta status and subject to improvements and heavy performance tuning.
+
+[Check out the Gelly guide](http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/gelly_guide.html).
+
+### Flink Machine Learning Library
+
+This release includes the first version of Flink’s Machine Learning library. The library’s pipeline approach, which has been strongly inspired by scikit-learn’s abstraction of transformers and predictors, makes it easy to quickly set up a data processing pipeline and to get your job done.
+
+Flink distinguishes between transformers and predictors. Transformers are components that transform your input data into a new format, allowing you to extract features, cleanse your data, or sample from it. Predictors, on the other hand, are components that take your input data and train a model on it. The model you obtain from the learner can then be evaluated and used to make predictions on unseen data.
+
+Currently, the machine learning library contains transformers and predictors for several tasks. The library supports multiple linear regression, using stochastic gradient descent to scale to large data sizes. Furthermore, it includes an alternating least squares (ALS) implementation to factorize large matrices. The matrix factorization can be used for collaborative filtering. An implementation of the communication-efficient distributed dual coordinate ascent (CoCoA) algorithm is the latest addition to the library. The CoCoA algorithm can be used to train distributed soft-margin SVMs.
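+
+A short sketch of the pipeline style, modeled on the FlinkML guide (parameter names such as `setStepsize` follow the 0.9 API as documented; treat the details as illustrative):
+
+```scala
+import org.apache.flink.api.scala._
+import org.apache.flink.ml.common.LabeledVector
+import org.apache.flink.ml.math.DenseVector
+import org.apache.flink.ml.regression.MultipleLinearRegression
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// Toy training data: a label plus a feature vector per example.
+val training = env.fromElements(
+  LabeledVector(1.0, DenseVector(1.0, 2.0)),
+  LabeledVector(2.0, DenseVector(2.0, 4.0)))
+
+val mlr = MultipleLinearRegression()
+  .setIterations(10)
+  .setStepsize(0.5)
+
+mlr.fit(training) // train the model
+val predictions = mlr.predict(env.fromElements(DenseVector(3.0, 6.0)))
+```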
+
+Note: The ML library is still in beta status and subject to improvements and heavy performance tuning.
+
+[Check out FlinkML](http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/ml/).
+
+### Flink on YARN leveraging Apache Tez
+
+We are introducing a new execution mode for Flink to be able to run restricted Flink programs on top of [Apache Tez](http://tez.apache.org). This mode retains Flink’s APIs, optimizer, as well as Flink’s runtime operators, but instead of wrapping those in Flink tasks that are executed by Flink TaskManagers, it wraps them in Tez runtime tasks and builds a Tez DAG that represents the program.
+
+By using Flink on Tez, users have an additional choice for an execution platform for Flink programs. While Flink’s distributed runtime favors low latency, streaming shuffles, and iterative algorithms, Tez focuses on scalability and elastic resource usage in shared YARN clusters.
+
+[Get started with Flink on Tez](http://ci.apache.org/projects/flink/flink-docs-release-0.9/setup/flink_on_tez.html).
+
+### Reworked Distributed Runtime on Akka
+
+Flink’s RPC system has been replaced by the widely adopted [Akka](http://akka.io) framework. Akka’s concurrency model offers the right abstraction to develop a fast and robust distributed system. By using Akka’s own failure detection mechanism, the stability of Flink’s runtime is significantly improved, because the system can now react properly to node outages. Furthermore, Akka improves Flink’s scalability by introducing asynchronous messages to the system. These asynchronous messages allow Flink to run on many more nodes than before.
+
+### Improved YARN support
+
+Flink’s YARN client contains several improvements, such as a detached mode for starting a YARN session in the background and the ability to submit a single Flink job to a YARN cluster without starting a session, including a "fire and forget" mode. Flink is now also able to reallocate failed YARN containers to maintain the size of the requested cluster. This makes it possible to implement fault-tolerant setups on top of YARN. There is also an internal Java API to deploy and control a running YARN cluster. It is being used by system integrators to easily control Flink on YARN within their Hadoop 2 clusters.
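+
+As a rough sketch of the two modes from the command line (flag names and the example jar path are illustrative; see the YARN docs linked below for the authoritative options):
+
+```bash
+# Start a detached YARN session with four TaskManager containers.
+./bin/yarn-session.sh -n 4 -d
+
+# Or submit a single job to its own short-lived YARN cluster.
+./bin/flink run -m yarn-cluster -yn 4 ./examples/flink-java-examples-0.9.0-WordCount.jar
+```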
+
+[See the YARN docs](http://ci.apache.org/projects/flink/flink-docs-release-0.9/setup/yarn_setup.html).
+
+### Static Code Analysis for the Flink Optimizer: Opening the UDF blackboxes
+
+This release introduces a first version of a static code analyzer that pre-interprets functions written by the user to get information about the function’s internal dataflow. The code analyzer can provide useful information about [forwarded fields](http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/programming_guide.html#semantic-annotations) to Flink's optimizer and thus speed up job execution. It also warns if the code contains obvious mistakes. For stability reasons, the code analyzer is disabled by default in this release. It can be activated through
+
+```java
+ExecutionEnvironment.getExecutionConfig().setCodeAnalysisMode(...)
+```
+
+either as an assistant that gives hints during implementation or by directly applying the optimizations that have been found.
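+
+For instance, assuming the `CodeAnalysisMode` enum values `HINT` and `OPTIMIZE` that correspond to the two modes described above (a sketch, not the authoritative API):
+
+```scala
+import org.apache.flink.api.common.CodeAnalysisMode
+import org.apache.flink.api.scala._
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+// HINT only reports the analyzer's findings; OPTIMIZE also applies them.
+env.getConfig.setCodeAnalysisMode(CodeAnalysisMode.HINT)
+```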
+
+## More Improvements and Fixes
+
+* [FLINK-1605](https://issues.apache.org/jira/browse/FLINK-1605): Flink is not exposing its Guava and ASM dependencies to Maven projects depending on Flink. We use the maven-shade-plugin to relocate these dependencies into our own namespace. This allows users to use any Guava or ASM version.
+
+* [FLINK-1417](https://issues.apache.org/jira/browse/FLINK-1417): Automatic recognition and registration of Java types at Kryo and the internal serializers: Flink has its own type handling and serialization framework, falling back to Kryo for types that it cannot handle. To get the best performance, Flink automatically registers with Kryo all types used in a user's program. Flink also registers serializers for Protocol Buffers, Thrift, Avro, and Joda-Time automatically. Users can also manually register serializers with Kryo ([FLINK-1399](https://issues.apache.org/jira/browse/FLINK-1399)).
+
+* [FLINK-1296](https://issues.apache.org/jira/browse/FLINK-1296): Add support for sorting very large records
+
+* [FLINK-1679](https://issues.apache.org/jira/browse/FLINK-1679): `degreeOfParallelism` methods renamed to `parallelism`
+
+* [FLINK-1501](https://issues.apache.org/jira/browse/FLINK-1501): Add metrics library for monitoring TaskManagers
+
+* [FLINK-1760](https://issues.apache.org/jira/browse/FLINK-1760): Add support for building Flink with Scala 2.11
+
+* [FLINK-1648](https://issues.apache.org/jira/browse/FLINK-1648): Add a mode where the system automatically sets the parallelism to the available task slots
+
+* [FLINK-1622](https://issues.apache.org/jira/browse/FLINK-1622): Add groupCombine operator
+
+* [FLINK-1589](https://issues.apache.org/jira/browse/FLINK-1589): Add option to pass Configuration to LocalExecutor
+
+* [FLINK-1504](https://issues.apache.org/jira/browse/FLINK-1504): Add support for accessing secured HDFS clusters in standalone mode
+
+* [FLINK-1478](https://issues.apache.org/jira/browse/FLINK-1478): Add strictly local input split assignment
+
+* [FLINK-1512](https://issues.apache.org/jira/browse/FLINK-1512): Add CsvReader for reading into POJOs.
+
+* [FLINK-1461](https://issues.apache.org/jira/browse/FLINK-1461): Add sortPartition operator
+
+* [FLINK-1450](https://issues.apache.org/jira/browse/FLINK-1450): Add Fold operator to the Streaming API
+
+* [FLINK-1389](https://issues.apache.org/jira/browse/FLINK-1389): Allow setting custom file extensions for files created by the FileOutputFormat
+
+* [FLINK-1236](https://issues.apache.org/jira/browse/FLINK-1236): Add support for localization of Hadoop Input Splits
+
+* [FLINK-1179](https://issues.apache.org/jira/browse/FLINK-1179): Add button to JobManager web interface to request stack trace of a TaskManager
+
+* [FLINK-1105](https://issues.apache.org/jira/browse/FLINK-1105): Add support for locally sorted output
+
+* [FLINK-1688](https://issues.apache.org/jira/browse/FLINK-1688): Add socket sink
+
+* [FLINK-1436](https://issues.apache.org/jira/browse/FLINK-1436): Improve usability of command line interface
+
+* [FLINK-2174](https://issues.apache.org/jira/browse/FLINK-2174): Allow comments in 'slaves' file
+
+* [FLINK-1698](https://issues.apache.org/jira/browse/FLINK-1698): Add polynomial base feature mapper to ML library
+
+* [FLINK-1697](https://issues.apache.org/jira/browse/FLINK-1697): Add alternating least squares algorithm for matrix factorization to ML library
+
+* [FLINK-1792](https://issues.apache.org/jira/browse/FLINK-1792): FLINK-456 Improve TM Monitoring: CPU utilization, hide graphs by default and show summary only
+
+* [FLINK-1672](https://issues.apache.org/jira/browse/FLINK-1672): Refactor task registration/unregistration
+
+* [FLINK-2001](https://issues.apache.org/jira/browse/FLINK-2001): DistanceMetric cannot be serialized
+
+* [FLINK-1676](https://issues.apache.org/jira/browse/FLINK-1676): enableForceKryo() is not working as expected
+
+* [FLINK-1959](https://issues.apache.org/jira/browse/FLINK-1959): Accumulators BROKEN after Partitioning
+
+* [FLINK-1696](https://issues.apache.org/jira/browse/FLINK-1696): Add multiple linear regression to ML library
+
+* [FLINK-1820](https://issues.apache.org/jira/browse/FLINK-1820): Bug in DoubleParser and FloatParser - empty String is not casted to 0
+
+* [FLINK-1985](https://issues.apache.org/jira/browse/FLINK-1985): Streaming does not correctly forward ExecutionConfig to runtime
+
+* [FLINK-1828](https://issues.apache.org/jira/browse/FLINK-1828): Impossible to output data to an HBase table
+
+* [FLINK-1952](https://issues.apache.org/jira/browse/FLINK-1952): Cannot run ConnectedComponents example: Could not allocate a slot on instance
+
+* [FLINK-1848](https://issues.apache.org/jira/browse/FLINK-1848): Paths containing a Windows drive letter cannot be used in FileOutputFormats
+
+* [FLINK-1954](https://issues.apache.org/jira/browse/FLINK-1954): Task Failures and Error Handling
+
+* [FLINK-2004](https://issues.apache.org/jira/browse/FLINK-2004): Memory leak in presence of failed checkpoints in KafkaSource
+
+* [FLINK-2132](https://issues.apache.org/jira/browse/FLINK-2132): Java version parsing is not working for OpenJDK
+
+* [FLINK-2098](https://issues.apache.org/jira/browse/FLINK-2098): Checkpoint barrier initiation at source is not aligned with snapshotting
+
+* [FLINK-2069](https://issues.apache.org/jira/browse/FLINK-2069): writeAsCSV function in DataStream Scala API creates no file
+
+* [FLINK-2092](https://issues.apache.org/jira/browse/FLINK-2092): Document (new) behavior of print() and execute()
+
+* [FLINK-2177](https://issues.apache.org/jira/browse/FLINK-2177): NullPointer in task resource release
+
+* [FLINK-2054](https://issues.apache.org/jira/browse/FLINK-2054): StreamOperator rework removed copy calls when passing output to a chained operator
+
+* [FLINK-2196](https://issues.apache.org/jira/browse/FLINK-2196): Misplaced Class in flink-java SortPartitionOperator
+
+* [FLINK-2191](https://issues.apache.org/jira/browse/FLINK-2191): Inconsistent use of Closure Cleaner in Streaming API
+
+* [FLINK-2206](https://issues.apache.org/jira/browse/FLINK-2206): JobManager web interface shows 5 finished jobs at most
+
+* [FLINK-2188](https://issues.apache.org/jira/browse/FLINK-2188): Reading from big HBase Tables
+
+* [FLINK-1781](https://issues.apache.org/jira/browse/FLINK-1781): Quickstarts broken due to Scala Version Variables
+
+## Notice
+
+The 0.9 series of Flink is the last version to support Java 6. If you are still using Java 6, please consider upgrading to Java 8 (Java 7 ended its free support in April 2015).
+
+Flink will require at least Java 7 in major releases after 0.9.0.


[2/2] flink-web git commit: regenerating the web site for the release announcement

Posted by mx...@apache.org.
regenerating the web site for the release announcement


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/487049aa
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/487049aa
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/487049aa

Branch: refs/heads/asf-site
Commit: 487049aa0f6982beb9dc61ba78ae6848b6f567c6
Parents: 6911623
Author: Maximilian Michels <mx...@apache.org>
Authored: Wed Jun 24 14:16:52 2015 +0200
Committer: Maximilian Michels <mx...@apache.org>
Committed: Wed Jun 24 14:16:52 2015 +0200

----------------------------------------------------------------------
 content/blog/feed.xml                           | 239 +++++++++++
 content/blog/index.html                         |  36 +-
 content/blog/page2/index.html                   |  36 +-
 content/blog/page3/index.html                   |  44 +-
 content/blog/page4/index.html                   |  31 ++
 content/index.html                              |   8 +-
 .../announcing-apache-flink-0.9.0-release.html  | 426 +++++++++++++++++++
 7 files changed, 769 insertions(+), 51 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/487049aa/content/blog/feed.xml
----------------------------------------------------------------------
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index fdc3d6b..53a7665 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,245 @@
 <atom:link href="http://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 <item>
+<title>Announcing Apache Flink 0.9.0</title>
+<description>&lt;p&gt;The Apache Flink community is pleased to announce the availability of the 0.9.0 release. The release is the result of many months of hard work within the Flink community. It contains many new features and improvements which were previewed in the 0.9.0-milestone1 release and have been polished since then. This is the largest Flink release so far.&lt;/p&gt;
+
+&lt;p&gt;&lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;Download the release&lt;/a&gt; and check out &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/&quot;&gt;the documentation&lt;/a&gt;. Feedback through the Flink &lt;a href=&quot;http://flink.apache.org/community.html#mailing-lists&quot;&gt;mailing lists&lt;/a&gt; is, as always, very welcome!&lt;/p&gt;
+
+&lt;h2 id=&quot;new-features&quot;&gt;New Features&lt;/h2&gt;
+
+&lt;h3 id=&quot;exactly-once-fault-tolerance-for-streaming-programs&quot;&gt;Exactly-once Fault Tolerance for streaming programs&lt;/h3&gt;
+
+&lt;p&gt;This release introduces a new fault tolerance mechanism for streaming dataflows. The new checkpointing algorithm takes data sources and also user-defined state into account and recovers from failures such that all records are reflected exactly once in the operator states.&lt;/p&gt;
+
+&lt;p&gt;The checkpointing algorithm is lightweight and driven by barriers that are periodically injected into the data streams at the sources. As such, it has an extremely low coordination overhead and is able to sustain very high throughput rates. User-defined state can be automatically backed up to configurable storage by the fault tolerance mechanism.&lt;/p&gt;
+
+&lt;p&gt;Please refer to &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/streaming_guide.html#stateful-computation&quot;&gt;the documentation on stateful computation&lt;/a&gt; for details on how to use fault-tolerant data streams with Flink.&lt;/p&gt;
+
+&lt;p&gt;The fault tolerance mechanism requires data sources that can replay recent parts of the stream, such as &lt;a href=&quot;http://kafka.apache.org&quot;&gt;Apache Kafka&lt;/a&gt;. Read more &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/streaming_guide.html#apache-kafka&quot;&gt;about how to use the persistent Kafka source&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;table-api&quot;&gt;Table API&lt;/h3&gt;
+
+&lt;p&gt;Flink’s new Table API offers a higher-level abstraction for interacting with structured data sources. The Table API allows users to execute logical, SQL-like queries on distributed data sets while allowing them to freely mix declarative queries with regular Flink operators. Here is an example that groups and joins two tables:&lt;/p&gt;
+
+&lt;p&gt;val clickCounts = clicks
+  .groupBy('userId).select('userId, 'url.count as 'count)&lt;/p&gt;
+
+&lt;p&gt;val activeUsers = users.join(clickCounts)
+  .where('id === 'userId &amp;amp;&amp;amp; 'count &amp;gt; 10).select('username, 'count, ...)&lt;/p&gt;
+
+&lt;p&gt;Tables consist of logical attributes that can be selected by name rather than physical Java and Scala data types. This alleviates a lot of boilerplate code for common ETL tasks and raises the abstraction for Flink programs. Tables are available for both static and streaming data sources (DataSet and DataStream APIs).&lt;/p&gt;
+
+&lt;p&gt;&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/table.html&quot;&gt;Check out the Table guide for Java and Scala&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;gelly-graph-processing-api&quot;&gt;Gelly Graph Processing API&lt;/h3&gt;
+
+&lt;p&gt;Gelly is a Java Graph API for Flink. It contains a set of utilities for graph analysis, support for iterative graph processing, and a library of graph algorithms. Gelly exposes a Graph data structure that wraps DataSets for vertices and edges, methods for creating graphs from DataSets, graph transformations and utilities (e.g., in- and out-degrees of vertices), neighborhood aggregations, and iterative vertex-centric graph processing, as well as a library of common graph algorithms, including PageRank, SSSP, label propagation, and community detection.&lt;/p&gt;
+
+&lt;p&gt;Gelly internally builds on top of Flink’s &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/iterations.html&quot;&gt;delta iterations&lt;/a&gt;. Iterative graph algorithms are executed leveraging mutable state, achieving performance similar to that of specialized graph processing systems.&lt;/p&gt;
+
+&lt;p&gt;Gelly will eventually subsume Spargel, Flink’s Pregel-like API.&lt;/p&gt;
+
+&lt;p&gt;Note: The Gelly library is still in beta status and subject to improvements and heavy performance tuning.&lt;/p&gt;
+
+&lt;p&gt;&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/gelly_guide.html&quot;&gt;Check out the Gelly guide&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;flink-machine-learning-library&quot;&gt;Flink Machine Learning Library&lt;/h3&gt;
+
+&lt;p&gt;This release includes the first version of Flink’s Machine Learning library. The library’s pipeline approach, which has been strongly inspired by scikit-learn’s abstraction of transformers and predictors, makes it easy to quickly set up a data processing pipeline and to get your job done.&lt;/p&gt;
+
+&lt;p&gt;Flink distinguishes between transformers and predictors. Transformers are components that transform your input data into a new format, allowing you to extract features, cleanse your data, or sample from it. Predictors, on the other hand, are components that take your input data and train a model on it. The model you obtain from the learner can then be evaluated and used to make predictions on unseen data.&lt;/p&gt;
+
+&lt;p&gt;Currently, the machine learning library contains transformers and predictors for several tasks. The library supports multiple linear regression, using stochastic gradient descent to scale to large data sizes. Furthermore, it includes an alternating least squares (ALS) implementation to factorize large matrices. The matrix factorization can be used for collaborative filtering. An implementation of the communication-efficient distributed dual coordinate ascent (CoCoA) algorithm is the latest addition to the library. The CoCoA algorithm can be used to train distributed soft-margin SVMs.&lt;/p&gt;
+
+&lt;p&gt;Note: The ML library is still in beta status and subject to improvements and heavy performance tuning.&lt;/p&gt;
+
+&lt;p&gt;&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/ml/&quot;&gt;Check out FlinkML&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;flink-on-yarn-leveraging-apache-tez&quot;&gt;Flink on YARN leveraging Apache Tez&lt;/h3&gt;
+
+&lt;p&gt;We are introducing a new execution mode for Flink to be able to run restricted Flink programs on top of &lt;a href=&quot;http://tez.apache.org&quot;&gt;Apache Tez&lt;/a&gt;. This mode retains Flink’s APIs, optimizer, as well as Flink’s runtime operators, but instead of wrapping those in Flink tasks that are executed by Flink TaskManagers, it wraps them in Tez runtime tasks and builds a Tez DAG that represents the program.&lt;/p&gt;
+
+&lt;p&gt;By using Flink on Tez, users have an additional choice for an execution platform for Flink programs. While Flink’s distributed runtime favors low latency, streaming shuffles, and iterative algorithms, Tez focuses on scalability and elastic resource usage in shared YARN clusters.&lt;/p&gt;
+
+&lt;p&gt;&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/setup/flink_on_tez.html&quot;&gt;Get started with Flink on Tez&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;reworked-distributed-runtime-on-akka&quot;&gt;Reworked Distributed Runtime on Akka&lt;/h3&gt;
+
+&lt;p&gt;Flink’s RPC system has been replaced by the widely adopted &lt;a href=&quot;http://akka.io&quot;&gt;Akka&lt;/a&gt; framework. Akka’s concurrency model offers the right abstraction to develop a fast and robust distributed system. By using Akka’s own failure detection mechanism, the stability of Flink’s runtime is significantly improved, because the system can now react properly to node outages. Furthermore, Akka improves Flink’s scalability by introducing asynchronous messages to the system. These asynchronous messages allow Flink to run on many more nodes than before.&lt;/p&gt;
+
+&lt;h3 id=&quot;improved-yarn-support&quot;&gt;Improved YARN support&lt;/h3&gt;
+
+&lt;p&gt;Flink’s YARN client contains several improvements, such as a detached mode for starting a YARN session in the background and the ability to submit a single Flink job to a YARN cluster without starting a session, including a “fire and forget” mode. Flink is now also able to reallocate failed YARN containers to maintain the size of the requested cluster. This makes it possible to implement fault-tolerant setups on top of YARN. There is also an internal Java API to deploy and control a running YARN cluster. It is being used by system integrators to easily control Flink on YARN within their Hadoop 2 clusters.&lt;/p&gt;
+
+&lt;p&gt;&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/setup/yarn_setup.html&quot;&gt;See the YARN docs&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;static-code-analysis-for-the-flink-optimizer-opening-the-udf-blackboxes&quot;&gt;Static Code Analysis for the Flink Optimizer: Opening the UDF blackboxes&lt;/h3&gt;
+
+&lt;p&gt;This release introduces a first version of a static code analyzer that pre-interprets functions written by the user to get information about the function’s internal dataflow. The code analyzer can provide useful information about &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/programming_guide.html#semantic-annotations&quot;&gt;forwarded fields&lt;/a&gt; to Flink’s optimizer and thus speed up job execution. It also warns if the code contains obvious mistakes. For stability reasons, the code analyzer is disabled by default in this release. It can be activated through&lt;/p&gt;
+
+&lt;p&gt;ExecutionEnvironment.getExecutionConfig().setCodeAnalysisMode(…)&lt;/p&gt;
+
+&lt;p&gt;either as an assistant that gives hints during implementation or by directly applying the optimizations that have been found.&lt;/p&gt;
+
+&lt;h2 id=&quot;more-improvements-and-fixes&quot;&gt;More Improvements and Fixes&lt;/h2&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1605&quot;&gt;FLINK-1605&lt;/a&gt;: Flink is not exposing its Guava and ASM dependencies to Maven projects depending on Flink. We use the maven-shade-plugin to relocate these dependencies into our own namespace. This allows users to use any Guava or ASM version.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1417&quot;&gt;FLINK-1417&lt;/a&gt;: Automatic recognition and registration of Java types at Kryo and the internal serializers: Flink has its own type handling and serialization framework, falling back to Kryo for types that it cannot handle. To get the best performance, Flink automatically registers with Kryo all types used in a user’s program. Flink also registers serializers for Protocol Buffers, Thrift, Avro, and Joda-Time automatically. Users can also manually register serializers with Kryo (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1399&quot;&gt;FLINK-1399&lt;/a&gt;).&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1296&quot;&gt;FLINK-1296&lt;/a&gt;: Add support for sorting very large records&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1679&quot;&gt;FLINK-1679&lt;/a&gt;: “degreeOfParallelism” methods renamed to “parallelism”&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1501&quot;&gt;FLINK-1501&lt;/a&gt;: Add metrics library for monitoring TaskManagers&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1760&quot;&gt;FLINK-1760&lt;/a&gt;: Add support for building Flink with Scala 2.11&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1648&quot;&gt;FLINK-1648&lt;/a&gt;: Add a mode where the system automatically sets the parallelism to the available task slots&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1622&quot;&gt;FLINK-1622&lt;/a&gt;: Add groupCombine operator&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1589&quot;&gt;FLINK-1589&lt;/a&gt;: Add option to pass Configuration to LocalExecutor&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1504&quot;&gt;FLINK-1504&lt;/a&gt;: Add support for accessing secured HDFS clusters in standalone mode&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1478&quot;&gt;FLINK-1478&lt;/a&gt;: Add strictly local input split assignment&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1512&quot;&gt;FLINK-1512&lt;/a&gt;: Add CsvReader for reading into POJOs.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1461&quot;&gt;FLINK-1461&lt;/a&gt;: Add sortPartition operator&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1450&quot;&gt;FLINK-1450&lt;/a&gt;: Add Fold operator to the Streaming API&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1389&quot;&gt;FLINK-1389&lt;/a&gt;: Allow setting custom file extensions for files created by the FileOutputFormat&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1236&quot;&gt;FLINK-1236&lt;/a&gt;: Add support for localization of Hadoop Input Splits&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1179&quot;&gt;FLINK-1179&lt;/a&gt;: Add button to JobManager web interface to request stack trace of a TaskManager&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1105&quot;&gt;FLINK-1105&lt;/a&gt;: Add support for locally sorted output&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1688&quot;&gt;FLINK-1688&lt;/a&gt;: Add socket sink&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1436&quot;&gt;FLINK-1436&lt;/a&gt;: Improve usability of command line interface&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2174&quot;&gt;FLINK-2174&lt;/a&gt;: Allow comments in ‘slaves’ file&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1698&quot;&gt;FLINK-1698&lt;/a&gt;: Add polynomial base feature mapper to ML library&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1697&quot;&gt;FLINK-1697&lt;/a&gt;: Add alternating least squares algorithm for matrix factorization to ML library&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1792&quot;&gt;FLINK-1792&lt;/a&gt;: FLINK-456 Improve TM Monitoring: CPU utilization, hide graphs by default and show summary only&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1672&quot;&gt;FLINK-1672&lt;/a&gt;: Refactor task registration/unregistration&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2001&quot;&gt;FLINK-2001&lt;/a&gt;: DistanceMetric cannot be serialized&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1676&quot;&gt;FLINK-1676&lt;/a&gt;: enableForceKryo() is not working as expected&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1959&quot;&gt;FLINK-1959&lt;/a&gt;: Accumulators BROKEN after Partitioning&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1696&quot;&gt;FLINK-1696&lt;/a&gt;: Add multiple linear regression to ML library&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1820&quot;&gt;FLINK-1820&lt;/a&gt;: Bug in DoubleParser and FloatParser - empty String is not casted to 0&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1985&quot;&gt;FLINK-1985&lt;/a&gt;: Streaming does not correctly forward ExecutionConfig to runtime&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1828&quot;&gt;FLINK-1828&lt;/a&gt;: Impossible to output data to an HBase table&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1952&quot;&gt;FLINK-1952&lt;/a&gt;: Cannot run ConnectedComponents example: Could not allocate a slot on instance&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1848&quot;&gt;FLINK-1848&lt;/a&gt;: Paths containing a Windows drive letter cannot be used in FileOutputFormats&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1954&quot;&gt;FLINK-1954&lt;/a&gt;: Task Failures and Error Handling&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2004&quot;&gt;FLINK-2004&lt;/a&gt;: Memory leak in presence of failed checkpoints in KafkaSource&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2132&quot;&gt;FLINK-2132&lt;/a&gt;: Java version parsing is not working for OpenJDK&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2098&quot;&gt;FLINK-2098&lt;/a&gt;: Checkpoint barrier initiation at source is not aligned with snapshotting&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2069&quot;&gt;FLINK-2069&lt;/a&gt;: writeAsCSV function in DataStream Scala API creates no file&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2092&quot;&gt;FLINK-2092&lt;/a&gt;: Document (new) behavior of print() and execute()&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2177&quot;&gt;FLINK-2177&lt;/a&gt;: NullPointer in task resource release&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2054&quot;&gt;FLINK-2054&lt;/a&gt;: StreamOperator rework removed copy calls when passing output to a chained operator&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2196&quot;&gt;FLINK-2196&lt;/a&gt;: Misplaced Class in flink-java SortPartitionOperator&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2191&quot;&gt;FLINK-2191&lt;/a&gt;: Inconsistent use of Closure Cleaner in Streaming API&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2206&quot;&gt;FLINK-2206&lt;/a&gt;: JobManager web interface shows 5 finished jobs at most&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2188&quot;&gt;FLINK-2188&lt;/a&gt;: Reading from big HBase Tables&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1781&quot;&gt;FLINK-1781&lt;/a&gt;: Quickstarts broken due to Scala Version Variables&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h2 id=&quot;notice&quot;&gt;Notice&lt;/h2&gt;
+
+&lt;p&gt;The 0.9 series of Flink is the last version to support Java 6. If you are still using Java 6, please consider upgrading to Java 8 (Java 7 ended its free support in April 2015).&lt;/p&gt;
+
+&lt;p&gt;Flink will require at least Java 7 in major releases after 0.9.0.&lt;/p&gt;
+</description>
+<pubDate>Wed, 24 Jun 2015 16:00:00 +0200</pubDate>
+<link>http://flink.apache.org/news/2015/06/24/announcing-apache-flink-0.9.0-release.html</link>
+<guid isPermaLink="true">/news/2015/06/24/announcing-apache-flink-0.9.0-release.html</guid>
+</item>
+
+<item>
 <title>April 2015 in the Flink community</title>
 <description>&lt;p&gt;April was a packed month for Apache Flink. &lt;/p&gt;
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/487049aa/content/blog/index.html
----------------------------------------------------------------------
diff --git a/content/blog/index.html b/content/blog/index.html
index b3d8e19..f377375 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -146,6 +146,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Announcing Apache Flink 0.9.0</a></h2>
+      <p>24 Jun 2015</p>
+
+      <p><p>The Apache Flink community is pleased to announce the availability of the 0.9.0 release. The release is the result of many months of hard work within the Flink community. It contains many new features and improvements which were previewed in the 0.9.0-milestone1 release and have been polished since then. This is the largest Flink release so far.</p>
+
+</p>
+
+      <p><a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/05/14/Community-update-April.html">April 2015 in the Flink community</a></h2>
       <p>14 May 2015 by Kostas Tzoumas (<a href="https://twitter.com/kostas_tzoumas">@kostas_tzoumas</a>)</p>
 
@@ -269,19 +282,6 @@ and offers a new API including definition of flexible windows.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2015/01/06/december-in-flink.html">December 2014 in the Flink community</a></h2>
-      <p>06 Jan 2015</p>
-
-      <p><p>This is the first blog post of a “newsletter” like series where we give a summary of the monthly activity in the Flink community. As the Flink project grows, this can serve as a “tl;dr” for people that are not following the Flink dev and user mailing lists, or those that are simply overwhelmed by the traffic.</p>
-
-</p>
-
-      <p><a href="/news/2015/01/06/december-in-flink.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -314,6 +314,16 @@ and offers a new API including definition of flexible windows.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Announcing Apache Flink 0.9.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2015/05/14/Community-update-April.html">April 2015 in the Flink community</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/487049aa/content/blog/page2/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page2/index.html b/content/blog/page2/index.html
index c996c70..609a749 100644
--- a/content/blog/page2/index.html
+++ b/content/blog/page2/index.html
@@ -146,6 +146,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/01/06/december-in-flink.html">December 2014 in the Flink community</a></h2>
+      <p>06 Jan 2015</p>
+
+      <p><p>This is the first blog post of a “newsletter” like series where we give a summary of the monthly activity in the Flink community. As the Flink project grows, this can serve as a “tl;dr” for people that are not following the Flink dev and user mailing lists, or those that are simply overwhelmed by the traffic.</p>
+
+</p>
+
+      <p><a href="/news/2015/01/06/december-in-flink.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2014/11/18/hadoop-compatibility.html">Hadoop Compatibility in Flink</a></h2>
       <p>18 Nov 2014 by Fabian Hüske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
 
@@ -265,19 +278,6 @@ academic and open source project that Flink originates from.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2014/01/28/querying_mongodb.html">Accessing Data Stored in MongoDB with Stratosphere</a></h2>
-      <p>28 Jan 2014 by Robert Metzger (<a href="https://twitter.com/rmetzger_">@rmetzger_</a>)</p>
-
-      <p><p>We recently merged a <a href="https://github.com/stratosphere/stratosphere/pull/437">pull request</a> that allows you to use any existing Hadoop <a href="http://developer.yahoo.com/hadoop/tutorial/module5.html#inputformat">InputFormat</a> with Stratosphere. So you can now (in the <code>0.5-SNAPSHOT</code> and upwards versions) define a Hadoop-based data source:</p>
-
-</p>
-
-      <p><a href="/news/2014/01/28/querying_mongodb.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -310,6 +310,16 @@ academic and open source project that Flink originates from.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Announcing Apache Flink 0.9.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2015/05/14/Community-update-April.html">April 2015 in the Flink community</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/487049aa/content/blog/page3/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page3/index.html b/content/blog/page3/index.html
index f05b25e..441e662 100644
--- a/content/blog/page3/index.html
+++ b/content/blog/page3/index.html
@@ -146,6 +146,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2014/01/28/querying_mongodb.html">Accessing Data Stored in MongoDB with Stratosphere</a></h2>
+      <p>28 Jan 2014 by Robert Metzger (<a href="https://twitter.com/rmetzger_">@rmetzger_</a>)</p>
+
+      <p><p>We recently merged a <a href="https://github.com/stratosphere/stratosphere/pull/437">pull request</a> that allows you to use any existing Hadoop <a href="http://developer.yahoo.com/hadoop/tutorial/module5.html#inputformat">InputFormat</a> with Stratosphere. So you can now (in the <code>0.5-SNAPSHOT</code> and upwards versions) define a Hadoop-based data source:</p>
+
+</p>
+
+      <p><a href="/news/2014/01/28/querying_mongodb.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2014/01/26/optimizer_plan_visualization_tool.html">Optimizer Plan Visualization Tool</a></h2>
       <p>26 Jan 2014</p>
 
@@ -288,27 +301,6 @@ Analyzing big data sets as they occur in modern business and science application
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2012/10/15/icde2013.html">Stratosphere Demo Accepted for ICDE 2013</a></h2>
-      <p>15 Oct 2012</p>
-
-      <p> <p>Our demo submission<br />
-<strong><cite>"Peeking into the Optimization of Data Flow Programs with MapReduce-style UDFs"</cite></strong><br />
-has been accepted for ICDE 2013 in Brisbane, Australia.<br />
-The demo illustrates the contributions of our VLDB 2012 paper <cite>"Opening the Black Boxes in Data Flow Optimization"</cite> <a href="/assets/papers/optimizationOfDataFlowsWithUDFs_13.pdf">[PDF]</a> and <a href="/assets/papers/optimizationOfDataFlowsWithUDFs_poster_13.pdf">[Poster PDF]</a>.</p>
-<p>Visit our poster, enjoy the demo, and talk to us if you are going to attend ICDE 2013.</p>
-<p><strong>Abstract:</strong><br />
-Data flows are a popular abstraction to define data-intensive processing tasks. In order to support a wide range of use cases, many data processing systems feature MapReduce-style user-defined functions (UDFs). In contrast to UDFs as known from relational DBMS, MapReduce-style UDFs have less strict templates. These templates do not alone provide all the information needed to decide whether they can be reordered with relational operators and other UDFs. However, it is well-known that reordering operators such as filters, joins, and aggregations can yield runtime improvements by orders of magnitude.<br />
-We demonstrate an optimizer for data flows that is able to reorder operators with MapReduce-style UDFs written in an imperative language. Our approach leverages static code analysis to extract information from UDFs which is used to reason about the reorderbility of UDF operators. This information is sufficient to enumerate a large fraction of the search space covered by conventional RDBMS optimizers including filter and aggregation push-down, bushy join orders, and choice of physical execution strategies based on interesting properties.<br />
-We demonstrate our optimizer and a job submission client that allows users to peek step-by-step into each phase of the optimization process: the static code analysis of UDFs, the enumeration of reordered candidate data flows, the generation of physical execution plans, and their parallel execution. For the demonstration, we provide a selection of relational and non-relational data flow programs which highlight the salient features of our approach.</p>
-
-</p>
-
-      <p><a href="/news/2012/10/15/icde2013.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -341,6 +333,16 @@ We demonstrate our optimizer and a job submission client that allows users to pe
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Announcing Apache Flink 0.9.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2015/05/14/Community-update-April.html">April 2015 in the Flink community</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/487049aa/content/blog/page4/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page4/index.html b/content/blog/page4/index.html
index d18bdea..98f552b 100644
--- a/content/blog/page4/index.html
+++ b/content/blog/page4/index.html
@@ -146,6 +146,27 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2012/10/15/icde2013.html">Stratosphere Demo Accepted for ICDE 2013</a></h2>
+      <p>15 Oct 2012</p>
+
+      <p> <p>Our demo submission<br />
+<strong><cite>"Peeking into the Optimization of Data Flow Programs with MapReduce-style UDFs"</cite></strong><br />
+has been accepted for ICDE 2013 in Brisbane, Australia.<br />
+The demo illustrates the contributions of our VLDB 2012 paper <cite>"Opening the Black Boxes in Data Flow Optimization"</cite> <a href="/assets/papers/optimizationOfDataFlowsWithUDFs_13.pdf">[PDF]</a> and <a href="/assets/papers/optimizationOfDataFlowsWithUDFs_poster_13.pdf">[Poster PDF]</a>.</p>
+<p>Visit our poster, enjoy the demo, and talk to us if you are going to attend ICDE 2013.</p>
+<p><strong>Abstract:</strong><br />
+Data flows are a popular abstraction to define data-intensive processing tasks. In order to support a wide range of use cases, many data processing systems feature MapReduce-style user-defined functions (UDFs). In contrast to UDFs as known from relational DBMS, MapReduce-style UDFs have less strict templates. These templates do not alone provide all the information needed to decide whether they can be reordered with relational operators and other UDFs. However, it is well-known that reordering operators such as filters, joins, and aggregations can yield runtime improvements by orders of magnitude.<br />
+We demonstrate an optimizer for data flows that is able to reorder operators with MapReduce-style UDFs written in an imperative language. Our approach leverages static code analysis to extract information from UDFs which is used to reason about the reorderbility of UDF operators. This information is sufficient to enumerate a large fraction of the search space covered by conventional RDBMS optimizers including filter and aggregation push-down, bushy join orders, and choice of physical execution strategies based on interesting properties.<br />
+We demonstrate our optimizer and a job submission client that allows users to peek step-by-step into each phase of the optimization process: the static code analysis of UDFs, the enumeration of reordered candidate data flows, the generation of physical execution plans, and their parallel execution. For the demonstration, we provide a selection of relational and non-relational data flow programs which highlight the salient features of our approach.</p>
+
+</p>
+
+      <p><a href="/news/2012/10/15/icde2013.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2012/08/21/release02.html">Version 0.2 Released</a></h2>
       <p>21 Aug 2012</p>
 
@@ -199,6 +220,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Announcing Apache Flink 0.9.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2015/05/14/Community-update-April.html">April 2015 in the Flink community</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/487049aa/content/index.html
----------------------------------------------------------------------
diff --git a/content/index.html b/content/index.html
index 1ca8a38..38577dd 100644
--- a/content/index.html
+++ b/content/index.html
@@ -229,6 +229,10 @@
 
     <ul class="list-group">
   
+      <li class="list-group-item"><span>24 Jun 2015</span> &raquo;
+        <a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Announcing Apache Flink 0.9.0</a>
+      </li>
+  
       <li class="list-group-item"><span>14 May 2015</span> &raquo;
         <a href="/news/2015/05/14/Community-update-April.html">April 2015 in the Flink community</a>
       </li>
@@ -244,10 +248,6 @@
       <li class="list-group-item"><span>07 Apr 2015</span> &raquo;
         <a href="/news/2015/04/07/march-in-flink.html">March 2015 in the Flink community</a>
       </li>
-  
-      <li class="list-group-item"><span>13 Mar 2015</span> &raquo;
-        <a href="/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html">Peeking into Apache Flink's Engine Room</a>
-      </li>
 
 </ul>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/487049aa/content/news/2015/06/24/announcing-apache-flink-0.9.0-release.html
----------------------------------------------------------------------
diff --git a/content/news/2015/06/24/announcing-apache-flink-0.9.0-release.html b/content/news/2015/06/24/announcing-apache-flink-0.9.0-release.html
new file mode 100644
index 0000000..0d1dcc2
--- /dev/null
+++ b/content/news/2015/06/24/announcing-apache-flink-0.9.0-release.html
@@ -0,0 +1,426 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
+    <title>Apache Flink: Announcing Apache Flink 0.9.0</title>
+    <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
+    <link rel="icon" href="/favicon.ico" type="image/x-icon">
+
+    <!-- Bootstrap -->
+    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css">
+    <link rel="stylesheet" href="/css/flink.css">
+    <link rel="stylesheet" href="/css/syntax.css">
+
+    <!-- Blog RSS feed -->
+    <link href="/blog/feed.xml" rel="alternate" type="application/rss+xml" title="Apache Flink Blog: RSS feed" />
+
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
+    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
+    <!--[if lt IE 9]>
+      <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
+      <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
+    <![endif]-->
+  </head>
+  <body>  
+    
+
+  <!-- Top navbar. -->
+    <nav class="navbar navbar-default navbar-fixed-top">
+      <div class="container">
+        <!-- The logo. -->
+        <div class="navbar-header">
+          <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+          </button>
+          <div class="navbar-logo">
+            <a href="/"><img alt="Apache Flink" src="/img/navbar-brand-logo.jpg" width="78px" height="40px"></a>
+          </div>
+        </div><!-- /.navbar-header -->
+
+        <!-- The navigation links. -->
+        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
+          <ul class="nav navbar-nav">
+            <!-- Overview -->
+            <li><a href="/index.html">Overview</a></li>
+
+            <!-- Quickstart -->
+            <li class="dropdown">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Quickstart <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/quickstart/setup_quickstart.html">Setup</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/quickstart/java_api_quickstart.html">Java API</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/quickstart/scala_api_quickstart.html">Scala API</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/quickstart/run_example_quickstart.html">Run Step-by-Step Example</a></li>
+              </ul>
+            </li>
+
+            <!-- Features -->
+            <li><a href="/features.html">Features</a></li>
+
+            <!-- Downloads -->
+            <li><a href="/downloads.html">Downloads</a></li>
+
+            <!-- Documentation -->
+            <li class="dropdown">
+              <a href="" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Documentation <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Latest stable release -->
+                <li role="presentation" class="dropdown-header"><strong>Latest Release</strong> (Stable)</li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9">0.9.0 Documentation</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/api/java" class="active">0.9.0 Javadocs</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/api/scala/index.html" class="active">0.9.0 ScalaDocs</a></li>
+
+                <!-- Snapshot docs -->
+                <li class="divider"></li>
+                <li role="presentation" class="dropdown-header"><strong>Snapshot</strong> (Development)</li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master">0.10 Documentation</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master/api/java" class="active">0.10 Javadocs</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master/api/scala/index.html" class="active">0.10 ScalaDocs</a></li>
+
+                <!-- Wiki -->
+                <li class="divider"></li>
+                <li><a href="https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home"><small><span class="glyphicon glyphicon-new-window"></span></small> Wiki</a></li>
+              </ul>
+            </li>
+
+            <!-- FAQ -->
+            <li><a href="/faq.html">FAQ</a></li>
+          </ul>
+
+          <ul class="nav navbar-nav navbar-right">
+            <!-- Blog -->
+            <li class=" active hidden-md hidden-sm"><a href="/blog/">Blog</a></li>
+
+            <li class="dropdown hidden-md hidden-sm">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Community <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Community -->
+                <li role="presentation" class="dropdown-header"><strong>Community</strong></li>
+                <li><a href="/community.html#mailing-lists">Mailing Lists</a></li>
+                <li><a href="/community.html#irc">IRC</a></li>
+                <li><a href="/community.html#stack-overflow">Stack Overflow</a></li>
+                <li><a href="/community.html#issue-tracker">Issue Tracker</a></li>
+                <li><a href="/community.html#source-code">Source Code</a></li>
+                <li><a href="/community.html#people">People</a></li>
+
+                <!-- Contribute -->
+                <li class="divider"></li>
+                <li role="presentation" class="dropdown-header"><strong>Contribute</strong></li>
+                <li><a href="/how-to-contribute.html">How to Contribute</a></li>
+                <li><a href="/coding-guidelines.html">Coding Guidelines</a></li>
+              </ul>
+            </li>
+
+            <li class="dropdown hidden-md hidden-sm">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Project <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Project -->
+                <li role="presentation" class="dropdown-header"><strong>Project</strong></li>
+                <li><a href="/material.html">Material</a></li>
+                <li><a href="https://twitter.com/apacheflink"><small><span class="glyphicon glyphicon-new-window"></span></small> Twitter</a></li>
+                <li><a href="https://github.com/apache/flink"><small><span class="glyphicon glyphicon-new-window"></span></small> GitHub</a></li>
+                <li><a href="https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home"><small><span class="glyphicon glyphicon-new-window"></span></small> Wiki</a></li>
+              </ul>
+            </li>
+          </ul>
+        </div><!-- /.navbar-collapse -->
+      </div><!-- /.container -->
+    </nav>
+
+
+    <!-- Main content. -->
+    <div class="container">
+      
+
+<div class="row">
+  <div class="col-sm-8 col-sm-offset-2">
+    <div class="row">
+      <h1>Announcing Apache Flink 0.9.0</h1>
+
+      <article>
+        <p>24 Jun 2015</p>
+
+<p>The Apache Flink community is pleased to announce the availability of the 0.9.0 release. The release is the result of many months of hard work within the Flink community. It contains many new features and improvements which were previewed in the 0.9.0-milestone1 release and have been polished since then. This is the largest Flink release so far.</p>
+
+<p><a href="http://flink.apache.org/downloads.html">Download the release</a> and check out <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/">the documentation</a>. Feedback through the Flink<a href="http://flink.apache.org/community.html#mailing-lists"> mailing lists</a> is, as always, very welcome!</p>
+
+<h2 id="new-features">New Features</h2>
+
+<h3 id="exactly-once-fault-tolerance-for-streaming-programs">Exactly-once Fault Tolerance for streaming programs</h3>
+
+<p>This release introduces a new fault tolerance mechanism for streaming dataflows. The new checkpointing algorithm takes data sources and also user-defined state into account and recovers from failures such that all records are reflected exactly once in the operator states.</p>
+
+<p>The checkpointing algorithm is lightweight and driven by barriers that are periodically injected into the data streams at the sources. As such, it has an extremely low coordination overhead and is able to sustain very high throughput rates. User-defined state can be automatically backed up to configurable storage by the fault tolerance mechanism.</p>
+
+<p>Please refer to <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/streaming_guide.html#stateful-computation">the documentation on stateful computation</a> for details on how to use fault-tolerant data streams with Flink.</p>
+
+<p>The fault tolerance mechanism requires data sources that can replay recent parts of the stream, such as <a href="http://kafka.apache.org">Apache Kafka</a>. Read more <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/streaming_guide.html#apache-kafka">about how to use the persistent Kafka source</a>.</p>
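+
+<p>To make this concrete, here is a minimal sketch of a checkpointed streaming job. The 5 second interval and the socket-based topology are made up for this example; as noted above, exactly-once results additionally require a replayable source such as the Kafka source.</p>
+
+<div class="highlight"><pre><code class="language-scala">import org.apache.flink.streaming.api.scala._
+
+object CheckpointedWordCount {
+  def main(args: Array[String]): Unit = {
+    val env = StreamExecutionEnvironment.getExecutionEnvironment
+
+    // Draw a consistent snapshot every 5000 ms by injecting
+    // barriers into the data streams at the sources.
+    env.enableCheckpointing(5000)
+
+    // The socket source is only a stand-in here; a replayable
+    // source is needed for exactly-once guarantees.
+    env.socketTextStream("localhost", 9999)
+      .flatMap(_.toLowerCase.split("\\W+"))
+      .map((_, 1))
+      .groupBy(0)
+      .sum(1)
+      .print()
+
+    env.execute("Checkpointed word count")
+  }
+}
+</code></pre></div>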
+
+<h3 id="table-api">Table API</h3>
+
+<p>Flink’s new Table API offers a higher-level abstraction for interacting with structured data sources. The Table API lets users execute logical, SQL-like queries on distributed data sets and freely mix these declarative queries with regular Flink operators. Here is an example that groups and joins two tables:</p>
+
+<div class="highlight"><pre><code class="language-scala">val clickCounts = clicks
+  .groupBy('user).select('userId, 'url.count as 'count)
+
+val activeUsers = users.join(clickCounts)
+  .where('id === 'userId &amp;&amp; 'count &gt; 10).select('username, 'count, ...)
+</code></pre></div>
+
+<p>Tables consist of logical attributes that can be selected by name, rather than by the fields of physical Java and Scala data types. This removes a lot of boilerplate code from common ETL tasks and raises the level of abstraction for Flink programs. Tables are available for both static and streaming data sources (DataSet and DataStream APIs).</p>
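+
+<p>Conversion between DataSets and Tables is a one-liner. The following sketch closely follows the Table guide; the <code>WC</code> case class and the sample data are made up for this example:</p>
+
+<div class="highlight"><pre><code class="language-scala">import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.table._
+
+case class WC(word: String, count: Int)
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+val input = env.fromElements(WC("hello", 1), WC("ciao", 1), WC("hello", 1))
+
+// Lift the DataSet into a Table, aggregate by logical attribute
+// name, and convert the result back into a typed DataSet.
+val result = input.toTable
+  .groupBy('word)
+  .select('word, 'count.sum as 'count)
+  .toDataSet[WC]
+</code></pre></div>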
+
+<p><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/table.html">Check out the Table guide for Java and Scala</a>.</p>
+
+<h3 id="gelly-graph-processing-api">Gelly Graph Processing API</h3>
+
+<p>Gelly is a Java Graph API for Flink. It contains a set of utilities for graph analysis, support for iterative graph processing, and a library of graph algorithms. Gelly exposes a Graph data structure that wraps DataSets for vertices and edges, along with methods for creating graphs from DataSets, graph transformations and utilities (e.g., in- and out-degrees of vertices), neighborhood aggregations, and iterative vertex-centric graph processing. Its library of common graph algorithms includes PageRank, SSSP, label propagation, and community detection.</p>
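+
+<p>As a small sketch of the API surface (the vertex and edge values below are arbitrary), a Graph is assembled from two DataSets and queried for vertex degrees roughly like this, using Gelly’s Java types from Scala:</p>
+
+<div class="highlight"><pre><code class="language-scala">import java.lang.{Double =&gt; JDouble, Long =&gt; JLong}
+
+import org.apache.flink.api.java.ExecutionEnvironment
+import org.apache.flink.graph.{Edge, Graph, Vertex}
+
+object GellySketch {
+  def main(args: Array[String]): Unit = {
+    val env = ExecutionEnvironment.getExecutionEnvironment
+
+    // A Graph simply wraps one DataSet of vertices and one of edges.
+    val vertices = env.fromElements(
+      new Vertex[JLong, JDouble](1L, 1.0),
+      new Vertex[JLong, JDouble](2L, 1.0))
+    val edges = env.fromElements(
+      new Edge[JLong, JDouble](1L, 2L, 0.5))
+
+    val graph = Graph.fromDataSet(vertices, edges, env)
+
+    // One of the degree utilities: (vertexId, inDegree) pairs.
+    graph.inDegrees().print()
+  }
+}
+</code></pre></div>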
+
+<p>Gelly internally builds on top of Flink’s <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/iterations.html">delta iterations</a>. Iterative graph algorithms are executed by leveraging mutable state, achieving performance comparable to specialized graph processing systems.</p>
+
+<p>Gelly will eventually subsume Spargel, Flink’s Pregel-like API.</p>
+
+<p>Note: The Gelly library is still in beta status and subject to improvements and heavy performance tuning.</p>
+
+<p><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/gelly_guide.html">Check out the Gelly guide</a>.</p>
+
+<h3 id="flink-machine-learning-library">Flink Machine Learning Library</h3>
+
+<p>This release includes the first version of Flink’s Machine Learning library. The library’s pipeline approach, which has been strongly inspired by scikit-learn’s abstraction of transformers and predictors, makes it easy to quickly set up a data processing pipeline and to get your job done.</p>
+
+<p>Flink distinguishes between transformers and predictors. Transformers are components which transform your input data into a new format, allowing you to extract features, cleanse your data, or sample from it. Predictors, on the other hand, are components which take your input data and train a model on it. The model you obtain from the learner can then be evaluated and used to make predictions on unseen data.</p>
+
+<p>Currently, the machine learning library contains transformers and predictors for a variety of tasks. It supports multiple linear regression, using stochastic gradient descent to scale to large data sizes. Furthermore, it includes an alternating least squares (ALS) implementation to factorize large matrices; the resulting factorization can be used for collaborative filtering. The latest addition to the library is an implementation of the communication-efficient distributed dual coordinate ascent (CoCoA) algorithm, which can be used to train distributed soft-margin SVMs.</p>
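+
+<p>As a hedged sketch of the pipeline approach (the parameter values and the toy data are made up for this example), fitting and applying a multiple linear regression looks roughly like this:</p>
+
+<div class="highlight"><pre><code class="language-scala">import org.apache.flink.api.scala._
+import org.apache.flink.ml.common.LabeledVector
+import org.apache.flink.ml.math.DenseVector
+import org.apache.flink.ml.regression.MultipleLinearRegression
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// A toy training set of labels and feature vectors.
+val training = env.fromElements(
+  LabeledVector(1.0, DenseVector(2.0)),
+  LabeledVector(2.0, DenseVector(4.0)))
+
+// Configure the predictor, train it, then predict unseen vectors.
+val mlr = MultipleLinearRegression()
+  .setIterations(10)
+  .setStepsize(0.5)
+
+mlr.fit(training)
+val predictions = mlr.predict(env.fromElements(DenseVector(3.0)))
+</code></pre></div>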
+
+<p>Note: The ML library is still in beta status and subject to improvements and heavy performance tuning.</p>
+
+<p><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/ml/">Check out FlinkML</a></p>
+
+<h3 id="flink-on-yarn-leveraging-apache-tez">Flink on YARN leveraging Apache Tez</h3>
+
+<p>We are introducing a new execution mode that allows restricted Flink programs to run on top of <a href="http://tez.apache.org">Apache Tez</a>. This mode retains Flink’s APIs and optimizer, as well as Flink’s runtime operators, but instead of wrapping those in Flink tasks that are executed by Flink TaskManagers, it wraps them in Tez runtime tasks and builds a Tez DAG that represents the program.</p>
+
+<p>By using Flink on Tez, users have an additional choice for an execution platform for Flink programs. While Flink’s distributed runtime favors low latency, streaming shuffles, and iterative algorithms, Tez focuses on scalability and elastic resource usage in shared YARN clusters.</p>
+
+<p><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/setup/flink_on_tez.html">Get started with Flink on Tez</a>.</p>
+
+<h3 id="reworked-distributed-runtime-on-akka">Reworked Distributed Runtime on Akka</h3>
+
+<p>Flink’s RPC system has been replaced by the widely adopted <a href="http://akka.io">Akka</a> framework. Akka’s concurrency model offers the right abstraction to develop a fast and robust distributed system. By using Akka’s own failure detection mechanism, the stability of Flink’s runtime is significantly improved, because the system can now react properly to node outages. Furthermore, Akka improves Flink’s scalability by introducing asynchronous messaging, which allows Flink to run on many more nodes than before.</p>
+
+<h3 id="improved-yarn-support">Improved YARN support</h3>
+
+<p>Flink’s YARN client contains several improvements, such as a detached mode for starting a YARN session in the background, the ability to submit a single Flink job to a YARN cluster without starting a session, and a “fire and forget” mode. Flink is now also able to reallocate failed YARN containers to maintain the size of the requested cluster, which allows users to implement fault-tolerant setups on top of YARN. There is also an internal Java API to deploy and control a running YARN cluster; this is being used by system integrators to easily control Flink on YARN within their Hadoop 2 clusters.</p>
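+
+<p>For illustration, the two submission styles look roughly as follows on the command line (container counts and the jar path are placeholders for this example):</p>
+
+<div class="highlight"><pre><code class="language-bash"># Start a detached YARN session with 4 TaskManager containers.
+./bin/yarn-session.sh -n 4 -d
+
+# Or submit a single job to its own short-lived YARN cluster.
+./bin/flink run -m yarn-cluster -yn 4 /path/to/your-job.jar
+</code></pre></div>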
+
+<p><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/setup/yarn_setup.html">See the YARN docs</a>.</p>
+
+<h3 id="static-code-analysis-for-the-flink-optimizer-opening-the-udf-blackboxes">Static Code Analysis for the Flink Optimizer: Opening the UDF blackboxes</h3>
+
+<p>This release introduces a first version of a static code analyzer that pre-interprets functions written by the user to get information about the function’s internal dataflow. The code analyzer can provide useful information about <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/programming_guide.html#semantic-annotations">forwarded fields</a> to Flink’s optimizer and thus speed up job execution. It also warns if the code contains obvious mistakes. For stability reasons, the code analyzer is initially disabled by default. It can be activated through</p>
+
+<div class="highlight"><pre><code>ExecutionEnvironment.getExecutionConfig().setCodeAnalysisMode(...)
+</code></pre></div>
+
+<p>either as an assistant that gives hints during implementation or in a mode that directly applies the optimizations it finds.</p>
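+
+<p>A minimal sketch of the activation, assuming the <code>CodeAnalysisMode</code> constants are named as below:</p>
+
+<div class="highlight"><pre><code class="language-scala">import org.apache.flink.api.common.CodeAnalysisMode
+import org.apache.flink.api.scala._
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// HINT only reports the analyzer's findings; an optimizing mode
+// additionally applies them to the job.
+env.getConfig.setCodeAnalysisMode(CodeAnalysisMode.HINT)
+</code></pre></div>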
+
+<h2 id="more-improvements-and-fixes">More Improvements and Fixes</h2>
+
+<ul>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1605">FLINK-1605</a>: Flink is not exposing its Guava and ASM dependencies to Maven projects depending on Flink. We use the maven-shade-plugin to relocate these dependencies into our own namespace. This allows users to use any Guava or ASM version.</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1605">FLINK-1417</a>: Automatic recognition and registration of Java Types at Kryo and the internal serializers: Flink has its own type handling and serialization framework falling back to Kryo for types that it cannot handle. To get the best performance Flink is automatically registering all types a user is using in their program with Kryo.Flink also registers serializers for Protocol Buffers, Thrift, Avro and YodaTime automatically. Users can also manually register serializers to Kryo (https://issues.apache.org/jira/browse/FLINK-1399)</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1296">FLINK-1296</a>: Add support for sorting very large records</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1679">FLINK-1679</a>: “degreeOfParallelism” methods renamed to “parallelism”</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1501">FLINK-1501</a>: Add metrics library for monitoring TaskManagers</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1760">FLINK-1760</a>: Add support for building Flink with Scala 2.11</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1648">FLINK-1648</a>: Add a mode where the system automatically sets the parallelism to the available task slots</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1622">FLINK-1622</a>: Add groupCombine operator</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1589">FLINK-1589</a>: Add option to pass Configuration to LocalExecutor</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1504">FLINK-1504</a>: Add support for accessing secured HDFS clusters in standalone mode</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1478">FLINK-1478</a>: Add strictly local input split assignment</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1512">FLINK-1512</a>: Add CsvReader for reading into POJOs.</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1461">FLINK-1461</a>: Add sortPartition operator</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1450">FLINK-1450</a>: Add Fold operator to the Streaming api</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1389">FLINK-1389</a>: Allow setting custom file extensions for files created by the FileOutputFormat</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1236">FLINK-1236</a>: Add support for localization of Hadoop Input Splits</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1179">FLINK-1179</a>: Add button to JobManager web interface to request stack trace of a TaskManager</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1105">FLINK-1105</a>: Add support for locally sorted output</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1688">FLINK-1688</a>: Add socket sink</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1436">FLINK-1436</a>: Improve usability of command line interface</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2174">FLINK-2174</a>: Allow comments in ‘slaves’ file</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1698">FLINK-1698</a>: Add polynomial base feature mapper to ML library</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1697">FLINK-1697</a>: Add alternating least squares algorithm for matrix factorization to ML library</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1792">FLINK-1792</a>: FLINK-456 Improve TM Monitoring: CPU utilization, hide graphs by default and show summary only</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1672">FLINK-1672</a>: Refactor task registration/unregistration</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2001">FLINK-2001</a>: DistanceMetric cannot be serialized</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1676">FLINK-1676</a>: enableForceKryo() is not working as expected</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1959">FLINK-1959</a>: Accumulators BROKEN after Partitioning</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1696">FLINK-1696</a>: Add multiple linear regression to ML library</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1820">FLINK-1820</a>: Bug in DoubleParser and FloatParser - empty String is not casted to 0</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1985">FLINK-1985</a>: Streaming does not correctly forward ExecutionConfig to runtime</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1828">FLINK-1828</a>: Impossible to output data to an HBase table</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1952">FLINK-1952</a>: Cannot run ConnectedComponents example: Could not allocate a slot on instance</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1848">FLINK-1848</a>: Paths containing a Windows drive letter cannot be used in FileOutputFormats</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1954">FLINK-1954</a>: Task Failures and Error Handling</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2004">FLINK-2004</a>: Memory leak in presence of failed checkpoints in KafkaSource</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2132">FLINK-2132</a>: Java version parsing is not working for OpenJDK</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2098">FLINK-2098</a>: Checkpoint barrier initiation at source is not aligned with snapshotting</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2069">FLINK-2069</a>: writeAsCSV function in DataStream Scala API creates no file</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2092">FLINK-2092</a>: Document (new) behavior of print() and execute()</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2177">FLINK-2177</a>: NullPointer in task resource release</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2054">FLINK-2054</a>: StreamOperator rework removed copy calls when passing output to a chained operator</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2196">FLINK-2196</a>: Missplaced Class in flink-java SortPartitionOperator</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2191">FLINK-2191</a>: Inconsistent use of Closure Cleaner in Streaming API</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2206">FLINK-2206</a>: JobManager webinterface shows 5 finished jobs at most</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-2188">FLINK-2188</a>: Reading from big HBase Tables</p>
+  </li>
+  <li>
+    <p><a href="https://issues.apache.org/jira/browse/FLINK-1781">FLINK-1781</a>: Quickstarts broken due to Scala Version Variables</p>
+  </li>
+</ul>
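+
+<p>As referenced in the FLINK-1417 item above, custom serializers can be registered with Kryo through the ExecutionConfig. A self-contained sketch (<code>MyType</code> and <code>MyTypeSerializer</code> are hypothetical user code):</p>
+
+<div class="highlight"><pre><code class="language-scala">import com.esotericsoftware.kryo.{Kryo, Serializer}
+import com.esotericsoftware.kryo.io.{Input, Output}
+import org.apache.flink.api.scala._
+
+// A hypothetical user type and a hand-written Kryo serializer for it.
+class MyType(var value: Int = 0)
+
+class MyTypeSerializer extends Serializer[MyType] {
+  override def write(kryo: Kryo, output: Output, obj: MyType): Unit =
+    output.writeInt(obj.value)
+
+  override def read(kryo: Kryo, input: Input, clazz: Class[MyType]): MyType =
+    new MyType(input.readInt())
+}
+
+object KryoRegistration {
+  def main(args: Array[String]): Unit = {
+    val env = ExecutionEnvironment.getExecutionEnvironment
+    // Route MyType through the custom serializer instead of
+    // Kryo's default field serializer.
+    env.getConfig.registerTypeWithKryoSerializer(
+      classOf[MyType], classOf[MyTypeSerializer])
+  }
+}
+</code></pre></div>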
+
+<h2 id="notice">Notice</h2>
+
+<p>The 0.9 series of Flink is the last version to support Java 6. If you are still using Java 6, please consider upgrading to Java 8 (Java 7 ended its free support in April 2015).</p>
+
+<p>Flink will require at least Java 7 in major releases after 0.9.0.</p>
+
+      </article>
+    </div>
+
+    <div class="row">
+      <div id="disqus_thread"></div>
+      <script type="text/javascript">
+        /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
+        var disqus_shortname = 'stratosphere-eu'; // required: replace example with your forum shortname
+
+        /* * * DON'T EDIT BELOW THIS LINE * * */
+        (function() {
+            var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
+            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
+             (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
+        })();
+      </script>
+    </div>
+  </div>
+</div>
+
+      <hr />
+      <div class="footer text-center">
+        <p>Copyright © 2014-2015 <a href="http://apache.org">The Apache Software Foundation</a>. All Rights Reserved.</p>
+        <p>Apache Flink, Apache, and the Apache feather logo are trademarks of The Apache Software Foundation.</p>
+        <p><a href="/privacy-policy.html">Privacy Policy</a> &middot; <a href="/blog/feed.xml">RSS feed</a></p>
+      </div>
+
+    </div><!-- /.container -->
+
+    <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
+    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
+    <!-- Include all compiled plugins (below), or include individual files as needed -->
+    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
+    <script src="/js/codetabs.js"></script>
+
+    <!-- Google Analytics -->
+    <script>
+      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+      })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+
+      ga('create', 'UA-52545728-1', 'auto');
+      ga('send', 'pageview');
+    </script>
+  </body>
+</html>