Posted to commits@flink.apache.org by rm...@apache.org on 2014/07/22 12:40:52 UTC

[52/92] [abbrv] Rename documentation

http://git-wip-us.apache.org/repos/asf/incubator-flink/blob/fbc93386/docs/scala_api_guide.md
----------------------------------------------------------------------
diff --git a/docs/scala_api_guide.md b/docs/scala_api_guide.md
index 7a1b747..d6d53a9 100644
--- a/docs/scala_api_guide.md
+++ b/docs/scala_api_guide.md
@@ -6,12 +6,12 @@ title: "Scala API Programming Guide"
 Scala Programming Guide
 =======================
 
-This guide explains how to develop Stratosphere programs with the Scala
+This guide explains how to develop Flink programs with the Scala
 programming interface. 
 
 Here we will look at the general structure of a Scala job. You will learn how to
 write data sources, data sinks, and operators to create data flows that can be
-executed using the Stratosphere system.
+executed using the Flink system.
 
 Writing Scala jobs requires an understanding of Scala; there is excellent
 documentation available [here](http://scala-lang.org/documentation/). Most
@@ -28,10 +28,10 @@ To start, let's look at a Word Count job implemented in Scala. This program is
 very simple but it will give you a basic idea of what a Scala job looks like.
 
 ```scala
-import eu.stratosphere.client.LocalExecutor
+import org.apache.flink.client.LocalExecutor
 
-import eu.stratosphere.api.scala._
-import eu.stratosphere.api.scala.operators._
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.operators._
 
 object WordCount {
   def main(args: Array[String]) {
@@ -50,12 +50,12 @@ object WordCount {
 }
 ``` 
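
For reference, the complete example in the renamed packages might look as follows. The body elided by the hunk above is a sketch reconstructed from the surrounding description, not the verbatim file:

```scala
import org.apache.flink.client.LocalExecutor
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.operators._

object WordCount {
  def main(args: Array[String]) {
    // Build the data flow graph: source -> flatMap -> groupBy/reduce -> sink.
    val input = TextFile("/some/input")

    // Emit a (word, 1) tuple for every word in every line.
    val words = input flatMap { _.toLowerCase.split("""\W+""") map { (_, 1) } }

    // Group by the word and sum the counts.
    val counts = words groupBy { case (word, _) => word } reduce {
      (w1, w2) => (w1._1, w1._2 + w2._2)
    }

    val output = counts.write("/some/output", CsvOutputFormat())
    LocalExecutor.execute(new ScalaPlan(Seq(output), "Word Count"))
  }
}
```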
 
-Same as any Stratosphere job a Scala job consists of one or several data
+Like any Flink job, a Scala job consists of one or several data
 sources, one or several data sinks, and operators in between that transform
 data. Together these parts are referred to as the data flow graph. It dictates
 the way data is passed when a job is executed.
 
-When using Scala in Stratosphere an important concept to grasp is that of the
+When using Scala in Flink, an important concept to grasp is that of the
 `DataSet`. `DataSet` is an abstract concept that represents actual data sets at
 runtime and which has operations that transform data to create a new transformed
 data set. In this example the `TextFile("/some/input")` call creates a
@@ -84,7 +84,7 @@ Project Setup
 
 We will only cover Maven here, but the concepts should work equivalently with
 other build systems such as Gradle or sbt. When wanting to develop a Scala job
-all that is needed as dependency is is `stratosphere-scala` (and `stratosphere-clients`, if
+all that is needed as a dependency is `flink-scala` (and `flink-clients`, if
 you want to execute your jobs). So all that needs to be done is to add the
 following lines to your POM.
 
@@ -104,7 +104,7 @@ following lines to your POM.
 </dependencies>
 ```
 
-To quickly get started you can use the Stratosphere Scala quickstart available
+To quickly get started you can use the Flink Scala quickstart available
 [here]({{site.baseurl}}/quickstart/scala.html). This will give you a
 complete Maven project with some working example code that you can use to explore
 the system or as a basis for your own projects.
@@ -112,11 +112,11 @@ the system or as basis for your own projects.
 These imports are normally enough for any project:
 
 ```scala
-import eu.stratosphere.api.scala._
-import eu.stratosphere.api.scala.operators._
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.operators._
 
-import eu.stratosphere.client.LocalExecutor
-import eu.stratosphere.client.RemoteExecutor
+import org.apache.flink.client.LocalExecutor
+import org.apache.flink.client.RemoteExecutor
 ```
 
 The first two imports contain things like `DataSet`, `Plan`, data sources, data
@@ -131,7 +131,7 @@ The DataSet Abstraction
 
 As already alluded to in the introductory example you write Scala jobs by using
 operations on a `DataSet` to create new transformed `DataSet`. This concept is
-the core of the Stratosphere Scala API so it merits some more explanation. A
+the core of the Flink Scala API so it merits some more explanation. A
 `DataSet` can look and behave like a regular Scala collection in your code but
 it does not contain any actual data but only represents data. For example: when
 you use `TextFile()` you get back a `DataSource[String]` that represents each
@@ -170,9 +170,9 @@ the primitive Scala types, case classes (which includes tuples), and custom
 data types.
 
 Custom data types must implement the interface
-{% gh_link /stratosphere-core/src/main/java/eu/stratosphere/types/Value.java "Value" %}.
+{% gh_link /flink-core/src/main/java/org/apache/flink/types/Value.java "Value" %}.
 For custom data types that should also be used as a grouping key or join key
-the {% gh_link /stratosphere-core/src/main/java/eu/stratosphere/types/Key.java "Key" %}
+the {% gh_link /flink-core/src/main/java/org/apache/flink/types/Key.java "Key" %}
 interface must be implemented.
 
 [Back to top](#top)
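
Since case classes and tuples are supported out of the box, they usually spare you a `Value` implementation. A small illustrative sketch (the type, fields, and input path here are made up for this example):

```scala
import org.apache.flink.api.scala._

// Case classes need no Value/Key implementation to act as data types or keys.
case class Page(url: String, rank: Double)

// Hypothetical input: one "url,rank" pair per line.
val pages = TextFile("/some/pages") map { line =>
  val Array(url, rank) = line.split(",")
  Page(url, rank.toDouble)
}

// Use a case-class field as the grouping key: keep the highest rank per URL.
val best = pages groupBy { _.url } reduce {
  (p1, p2) => if (p1.rank >= p2.rank) p1 else p2
}
```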
@@ -340,8 +340,8 @@ Here `input` would be of type `DataSet[(Int, Double)]`.
 
 This input format is only meant to be used in conjunction with
 `BinarySerializedOutputFormat`. You can use these to write elements to files using a
-Stratosphere-internal format that can efficiently be read again. You should only
-use this when output is only meant to be consumed by other Stratosphere jobs.
+Flink-internal format that can efficiently be read again. You should only
+use this when output is only meant to be consumed by other Flink jobs.
 The format can be used in one of two ways:
 
 ```scala
@@ -380,7 +380,7 @@ Operations on DataSet
 ---------------------
 
 As explained in [Programming Model](pmodel.html#operators),
-a Stratosphere job is a graph of operators that process data coming from
+a Flink job is a graph of operators that process data coming from
 sources that is finally written to sinks. When you use the Scala front end
 these operators as well as the graph are created behind the scenes. For example,
 when you write code like this:
@@ -402,7 +402,7 @@ it helps to remember, that when you are using Scala you are building
 a data flow graph that processes data only when executed.
 
 There are operations on `DataSet` that correspond to all the types of operators
-that the Stratosphere system supports. We will shortly go trough all of them with
+that the Flink system supports. We will shortly go through all of them with
 some examples.
 
 [Back to top](#top)
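
The code example referenced above falls outside this hunk; the idea it illustrates can be sketched like this (operators are only recorded, nothing runs yet):

```scala
import org.apache.flink.api.scala._

val input    = TextFile("/some/input")        // source operator
val trimmed  = input map { _.trim }           // map operator, not yet executed
val nonEmpty = trimmed filter { _.nonEmpty }  // filter operator, same
// Only wrapping a sink in a ScalaPlan and executing it moves any data.
```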
@@ -686,13 +686,13 @@ Where `A` is the generic type of the `DataSet` on which you execute the `union`.
 Iterations
 ----------
 
-Iterations allow you to implement *loops* in Stratosphere programs.
+Iterations allow you to implement *loops* in Flink programs.
 [This page](iterations.html) gives a
 general introduction to iterations. This section provides quick examples
 of how to use the concepts using the Scala API.
 The iteration operators encapsulate a part of the program and execute it
 repeatedly, feeding back the result of one iteration (the partial solution) into
-the next iteration. Stratosphere has two different types of iterations,
+the next iteration. Flink has two different types of iterations,
 *Bulk Iteration* and *Delta Iteration*.
 
 For both types of iterations you provide the iteration body as a function
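
A minimal bulk iteration might be sketched as follows; the `iterate(n, step)` signature and `CollectionDataSource` are assumed from the pre-0.6 Scala API, so double-check them against the iterations page:

```scala
import org.apache.flink.api.scala._

// The step function maps the current partial solution to the next one.
def halve(partial: DataSet[Double]): DataSet[Double] =
  partial map { _ / 2 }

val initial = CollectionDataSource(List(1.0, 2.0, 3.0))

// Run the body ten times, feeding each result back in as the next input.
val result = initial.iterate(10, halve)
```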
@@ -886,8 +886,8 @@ BinaryOutputFormat[In](writeFunction: (In, DataOutput) => Unit, blockSize: Long)
 
 This output format is only meant to be used in conjunction with
 `BinarySerializedInputFormat`. You can use these to write elements to files using a
-Stratosphere-internal format that can efficiently be read again. You should only
-use this when output is only meant to be consumed by other Stratosphere jobs.
+Flink-internal format that can efficiently be read again. You should only
+use this when output is only meant to be consumed by other Flink jobs.
 The output format can be used in one of two ways:
 
 ```scala
@@ -911,7 +911,7 @@ by Scala.
 Executing Jobs
 --------------
 
-To execute a data flow graph the sinks need to be wrapped in a {% gh_link /stratosphere-scala/src/main/scala/eu/stratosphere/api/scala/ScalaPlan.scala "ScalaPlan" %} object like this:
+To execute a data flow graph the sinks need to be wrapped in a {% gh_link /flink-scala/src/main/scala/org/apache/flink/api/scala/ScalaPlan.scala "ScalaPlan" %} object like this:
 
 ```scala
 val out: DataSet[(String, Int)]
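 
 // Sketch of the elided remainder (names assumed from the sink and executor
 // calls shown elsewhere in this guide): obtain a sink from a write call and
 // wrap it in the plan.
 val sink = out.write("file:///some/output", CsvOutputFormat())
 val plan = new ScalaPlan(Seq(sink), "Word Count")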
@@ -932,7 +932,7 @@ now give an example for each of the two execution modes.
 First up is local execution:
 
 ```scala
-import eu.stratosphere.client.LocalExecutor
+import org.apache.flink.client.LocalExecutor
 
 ...
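 // Sketch of the elided call (assumed): run the wrapped plan on the
 // embedded local runtime.
 LocalExecutor.execute(plan)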
 
@@ -946,10 +946,10 @@ Remote (or cluster) execution is a bit more complicated because you have
 to package your code in a jar file so that it can be distributed on the cluster.
 Have a look at the [scala quickstart](scala_api_quickstart.html) to see how you
 can set up a Maven project that does the packaging. Remote execution is done
-using the {% gh_link /stratosphere-clients/src/main/java/eu/stratosphere/client/RemoteExecutor.java "RemoteExecutor" %}, like this:
+using the {% gh_link /flink-clients/src/main/java/org/apache/flink/client/RemoteExecutor.java "RemoteExecutor" %}, like this:
 
 ```scala
-import eu.stratosphere.client.RemoteExecutor
+import org.apache.flink.client.RemoteExecutor
 
 ...
 
@@ -958,7 +958,7 @@ val ex = new RemoteExecutor("<job manager ip address>", <job manager port>, "you
 ex.executePlan(plan);
 ```
 
-The IP address and the port of the Stratosphere job manager depend on your
+The IP address and the port of the Flink job manager depend on your
 setup. Have a look at [cluster quickstart](/quickstart/setup.html) for a quick
 guide about how to set up a cluster. The default cluster port is 6123, so
 if you run a job manager on your local computer you can give this and "localhost"
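
Putting those defaults together (the jar path here is a placeholder):

```scala
import org.apache.flink.client.RemoteExecutor

// JobManager running on this machine, default port 6123.
val ex = new RemoteExecutor("localhost", 6123, "target/your-job.jar")
ex.executePlan(plan)
```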
@@ -1006,14 +1006,14 @@ instead of the anonymous class we used here.
 
 There are rich functions for all the various operator types. The basic
 template is the same, though. The common interface that they implement 
-is {% gh_link /stratosphere-core/src/main/java/eu/stratosphere/api/common/functions/Function.java "Function" %}. The `open` and `close` methods can be overridden to run set-up
+is {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/functions/Function.java "Function" %}. The `open` and `close` methods can be overridden to run set-up
 and tear-down code. The other methods can be used in a rich function to
 work with the runtime context which gives information about the context
 of the operator. Your operation code must now reside in an `apply` method
 that has the same signature as the anonymous function you would normally
 supply.
 
-The rich functions reside in the package `eu.stratosphere.api.scala.functions`.
+The rich functions reside in the package `org.apache.flink.api.scala.functions`.
 This is a list of all the rich functions that can be used instead of
 simple functions in the respective operations:
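
The list itself falls outside this hunk. As an illustration of the template just described, a rich map function might look like this sketch (check the exact class names and `open`/`close` signatures in the functions package):

```scala
import org.apache.flink.api.scala.functions.MapFunction

// The operation code lives in `apply`, mirroring the anonymous function
// you would otherwise pass to `map`; open()/close() can be overridden
// for set-up and tear-down.
class Uppercaser extends MapFunction[String, String] {
  override def apply(in: String): String = in.toUpperCase
}
```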
 

http://git-wip-us.apache.org/repos/asf/incubator-flink/blob/fbc93386/docs/scala_api_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/scala_api_quickstart.md b/docs/scala_api_quickstart.md
index 76489f8..dee9c1a 100644
--- a/docs/scala_api_quickstart.md
+++ b/docs/scala_api_quickstart.md
@@ -2,7 +2,7 @@
 title: "Quickstart: Scala API"
 ---
 
-Start working on your Stratosphere Scala program in a few simple steps.
+Start working on your Flink Scala program in a few simple steps.
 
 # Requirements
 The only requirements are working __Maven 3.0.4__ (or higher) and __Java 6.x__ (or higher) installations.
@@ -18,13 +18,13 @@ Use one of the following commands to __create a project__:
 <div class="tab-content">
     <div class="tab-pane active" id="quickstart-script">
 {% highlight bash %}
-$ curl https://raw.githubusercontent.com/apache/incubator-flink/master/stratosphere-quickstart/quickstart-scala.sh | bash
+$ curl https://raw.githubusercontent.com/apache/incubator-flink/master/flink-quickstart/quickstart-scala.sh | bash
 {% endhighlight %}
     </div>
     <div class="tab-pane" id="maven-archetype">
 {% highlight bash %}
 $ mvn archetype:generate                             \
-  -DarchetypeGroupId=eu.stratosphere               \
+  -DarchetypeGroupId=org.apache.flink              \
   -DarchetypeArtifactId=quickstart-scala           \
   -DarchetypeVersion={{site.FLINK_VERSION_STABLE}}                  
 {% endhighlight %}
@@ -36,7 +36,7 @@ $ mvn archetype:generate                             \
 # Inspect Project
 There will be a __new directory in your working directory__. If you've used the _curl_ approach, the directory is called `quickstart`. Otherwise, it has the name of your artifactId.
 
-The sample project is a __Maven project__, which contains a sample scala _job_ that implements Word Count. Please note that the _RunJobLocal_ and _RunJobRemote_ objects allow you to start Stratosphere in a development/testing mode.</p>
+The sample project is a __Maven project__, which contains a sample Scala _job_ that implements Word Count. Please note that the _RunJobLocal_ and _RunJobRemote_ objects allow you to start Flink in a development/testing mode.
 
 We recommend __importing this project into your IDE__. For Eclipse, you need the following plugins, which you can install from the provided Eclipse Update Sites:
 
@@ -54,7 +54,7 @@ The IntelliJ IDE also supports Maven and offers a plugin for Scala development.
 
 # Build Project
 
-If you want to __build your project__, go to your project directory and issue the`mvn clean package` command. You will __find a jar__ that runs on every Stratosphere cluster in __target/stratosphere-project-0.1-SNAPSHOT.jar__.
+If you want to __build your project__, go to your project directory and issue the `mvn clean package` command. You will __find a jar__ that runs on every Flink cluster in __target/flink-project-0.1-SNAPSHOT.jar__.
 
 # Next Steps
 

http://git-wip-us.apache.org/repos/asf/incubator-flink/blob/fbc93386/docs/setup_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/setup_quickstart.md b/docs/setup_quickstart.md
index 92fc13e..aa5ac23 100644
--- a/docs/setup_quickstart.md
+++ b/docs/setup_quickstart.md
@@ -2,13 +2,13 @@
 title: "Quickstart: Setup"
 ---
 
-Get Stratosphere up and running in a few simple steps.
+Get Flink up and running in a few simple steps.
 
 # Requirements
-Stratosphere runs on all __UNIX-like__ environments: __Linux__, __Mac OS X__, __Cygwin__. The only requirement is to have a working __Java 6.x__ (or higher) installation.
+Flink runs on all __UNIX-like__ environments: __Linux__, __Mac OS X__, __Cygwin__. The only requirement is to have a working __Java 6.x__ (or higher) installation.
 
 # Download
-Download the ready to run binary package. Choose the Stratosphere distribution that __matches your Hadoop version__. If you are unsure which version to choose or you just want to run locally, pick the package for Hadoop 1.2.
+Download the ready-to-run binary package. Choose the Flink distribution that __matches your Hadoop version__. If you are unsure which version to choose or you just want to run locally, pick the package for Hadoop 1.2.
 
 <ul class="nav nav-tabs">
    <li class="active"><a href="#bin-hadoop1" data-toggle="tab">Hadoop 1.2</a></li>
@@ -16,10 +16,10 @@ Download the ready to run binary package. Choose the Stratosphere distribution t
  </ul>
  <div class="tab-content text-center">
    <div class="tab-pane active" id="bin-hadoop1">
-     <a class="btn btn-info btn-lg" onclick="_gaq.push(['_trackEvent','Action','download-quickstart-setup-1',this.href]);" href="{{site.FLINK_DOWNLOAD_URL_HADOOP_1_STABLE}}"><i class="icon-download"> </i> Download Stratosphere for Hadoop 1.2</a>
+     <a class="btn btn-info btn-lg" onclick="_gaq.push(['_trackEvent','Action','download-quickstart-setup-1',this.href]);" href="{{site.FLINK_DOWNLOAD_URL_HADOOP_1_STABLE}}"><i class="icon-download"> </i> Download Flink for Hadoop 1.2</a>
    </div>
    <div class="tab-pane" id="bin-hadoop2">
-     <a class="btn btn-info btn-lg" onclick="_gaq.push(['_trackEvent','Action','download-quickstart-setup-2',this.href]);" href="{{site.FLINK_DOWNLOAD_URL_HADOOP_2_STABLE}}"><i class="icon-download"> </i> Download Stratosphere for Hadoop 2</a>
+     <a class="btn btn-info btn-lg" onclick="_gaq.push(['_trackEvent','Action','download-quickstart-setup-2',this.href]);" href="{{site.FLINK_DOWNLOAD_URL_HADOOP_2_STABLE}}"><i class="icon-download"> </i> Download Flink for Hadoop 2</a>
    </div>
  </div>
 </p>
@@ -30,21 +30,21 @@ You are almost done.
   
 1. Go to the download directory.
 2. Unpack the downloaded archive.
-3. Start Stratosphere.
+3. Start Flink.
 
 
 ```bash
 $ cd ~/Downloads              # Go to download directory
-$ tar xzf stratosphere-*.tgz  # Unpack the downloaded archive
-$ cd stratosphere
-$ bin/start-local.sh          # Start Stratosphere
+$ tar xzf flink-*.tgz         # Unpack the downloaded archive
+$ cd flink
+$ bin/start-local.sh          # Start Flink
 ```
 
 Check the __JobManager's web frontend__ at [http://localhost:8081](http://localhost:8081) and make sure everything is up and running.
 
 # Run Example
 
-Run the __Word Count example__ to see Stratosphere at work.
+Run the __Word Count example__ to see Flink at work.
 
 * __Download test data__:
 ```bash
@@ -53,8 +53,8 @@ $ wget -O hamlet.txt http://www.gutenberg.org/cache/epub/1787/pg1787.txt
 * You now have a text file called _hamlet.txt_ in your working directory.
 * __Start the example program__:
 ```bash
-$ bin/stratosphere run \
-    --jarfile ./examples/stratosphere-java-examples-{{site.FLINK_VERSION_STABLE}}-WordCount.jar \
+$ bin/flink run \
+    --jarfile ./examples/flink-java-examples-{{site.FLINK_VERSION_STABLE}}-WordCount.jar \
     --arguments file://`pwd`/hamlet.txt file://`pwd`/wordcount-result.txt
 ```
@@ -63,10 +63,10 @@ $ bin/stratosphere run \
 
 # Cluster Setup
   
-__Running Stratosphere on a cluster__ is as easy as running it locally. Having __passwordless SSH__ and __the same directory structure__ on all your cluster nodes lets you use our scripts to control everything.
+__Running Flink on a cluster__ is as easy as running it locally. Having __passwordless SSH__ and __the same directory structure__ on all your cluster nodes lets you use our scripts to control everything.
 
-1. Copy the unpacked __stratosphere__ directory from the downloaded archive to the same file system path on each node of your setup.
-2. Choose a __master node__ (JobManager) and set the `jobmanager.rpc.address` key in `conf/stratosphere-conf.yaml` to its IP or hostname. Make sure that all nodes in your cluster have the same `jobmanager.rpc.address` configured.
+1. Copy the unpacked __flink__ directory from the downloaded archive to the same file system path on each node of your setup.
+2. Choose a __master node__ (JobManager) and set the `jobmanager.rpc.address` key in `conf/flink-conf.yaml` to its IP or hostname. Make sure that all nodes in your cluster have the same `jobmanager.rpc.address` configured.
 3. Add the IPs or hostnames (one per line) of all __worker nodes__ (TaskManager) to the slaves file `conf/slaves`.
 
 You can now __start the cluster__ at your master node with `bin/start-cluster.sh`.
@@ -81,13 +81,13 @@ The following __example__ illustrates the setup with three nodes (with IP addres
 <div class="col-md-6">
   <div class="row">
     <p class="lead text-center">
-      /path/to/<strong>stratosphere/conf/<br>stratosphere-conf.yaml</strong>
+      /path/to/<strong>flink/conf/<br>flink-conf.yaml</strong>
     <pre>jobmanager.rpc.address: 10.0.0.1</pre>
     </p>
   </div>
 <div class="row" style="margin-top: 1em;">
   <p class="lead text-center">
-    /path/to/<strong>stratosphere/<br>conf/slaves</strong>
+    /path/to/<strong>flink/<br>conf/slaves</strong>
   <pre>
     10.0.0.2
     10.0.0.3</pre>
@@ -96,10 +96,10 @@ The following __example__ illustrates the setup with three nodes (with IP addres
 </div>
 </div>
 
-# Stratosphere on YARN
-You can easily deploy Stratosphere on your existing __YARN cluster__. 
+# Flink on YARN
+You can easily deploy Flink on your existing __YARN cluster__. 
 
-1. Download the __Stratosphere YARN package__ with the YARN client: [Stratosphere for YARN]({{site.FLINK_DOWNLOAD_URL_YARN_STABLE}})
+1. Download the __Flink YARN package__ with the YARN client: [Flink for YARN]({{site.FLINK_DOWNLOAD_URL_YARN_STABLE}})
 2. Make sure your __HADOOP_HOME__ (or _YARN_CONF_DIR_ or _HADOOP_CONF_DIR_) __environment variable__ is set to read your YARN and HDFS configuration.
 3. Run the __YARN client__ with: `./bin/yarn-session.sh`. You can run the client with options `-n 10 -tm 8192` to allocate 10 TaskManagers with 8GB of memory each.
 

http://git-wip-us.apache.org/repos/asf/incubator-flink/blob/fbc93386/docs/spargel_guide.md
----------------------------------------------------------------------
diff --git a/docs/spargel_guide.md b/docs/spargel_guide.md
index 7a74155..9d1a5c9 100644
--- a/docs/spargel_guide.md
+++ b/docs/spargel_guide.md
@@ -17,7 +17,7 @@ This vertex-centric view makes it easy to express a large class of graph problem
 Spargel API
 -----------
 
-The Spargel API is part of the *addons* Maven project. All relevant classes are located in the *eu.stratosphere.spargel.java* package.
+The Spargel API is part of the *addons* Maven project. All relevant classes are located in the *org.apache.flink.spargel.java* package.
 
 Add the following dependency to your `pom.xml` to use Spargel.
 

http://git-wip-us.apache.org/repos/asf/incubator-flink/blob/fbc93386/docs/web_client.md
----------------------------------------------------------------------
diff --git a/docs/web_client.md b/docs/web_client.md
index faaf7c0..ea844e0 100644
--- a/docs/web_client.md
+++ b/docs/web_client.md
@@ -2,7 +2,7 @@
 title:  "Web Client"
 ---
 
-Stratosphere provides a web interface to upload jobs, inspect their execution plans, and execute them. The interface is a great tool to showcase programs, debug execution plans, or demonstrate the system as a whole.
+Flink provides a web interface to upload jobs, inspect their execution plans, and execute them. The interface is a great tool to showcase programs, debug execution plans, or demonstrate the system as a whole.
 
 # Start, Stop, and Configure the Web Interface
 
@@ -14,20 +14,20 @@ and stop it by calling:
 
     ./bin/stop-webclient.sh
 
-The web interface runs on port 8080 by default. To specify a custom port set the ```webclient.port``` property in the *./conf/stratosphere.yaml* configuration file. Jobs are submitted to the JobManager specified by ```jobmanager.rpc.address``` and ```jobmanager.rpc.port```. Please consult the [configuration](config.html#web_frontend) page for details and further configuration options.
+The web interface runs on port 8080 by default. To specify a custom port, set the ```webclient.port``` property in the *./conf/flink-conf.yaml* configuration file. Jobs are submitted to the JobManager specified by ```jobmanager.rpc.address``` and ```jobmanager.rpc.port```. Please consult the [configuration](config.html#web_frontend) page for details and further configuration options.
 
 # Use the Web Interface
 
 The web interface provides two views:
 
-1.  The **job view** to upload, preview, and submit Stratosphere programs.
-2.  The **plan view** to analyze the optimized execution plans of Stratosphere programs.
+1.  The **job view** to upload, preview, and submit Flink programs.
+2.  The **plan view** to analyze the optimized execution plans of Flink programs.
 
 ## Job View
 
 The interface starts serving the job view. 
 
-You can **upload** a Stratosphere program as a jar file. To **execute** an uploaded program:
+You can **upload** a Flink program as a jar file. To **execute** an uploaded program:
 
 * select it from the job list on the left, 
 * enter the program arguments in the *"Arguments"* field (bottom left), and 

http://git-wip-us.apache.org/repos/asf/incubator-flink/blob/fbc93386/docs/yarn_setup.md
----------------------------------------------------------------------
diff --git a/docs/yarn_setup.md b/docs/yarn_setup.md
index 9dcaad3..3335f8f 100644
--- a/docs/yarn_setup.md
+++ b/docs/yarn_setup.md
@@ -8,40 +8,40 @@ Start YARN session with 4 Taskmanagers (each with 4 GB of Heapspace):
 
 ```bash
 wget {{ site.FLINK_DOWNLOAD_URL_YARN_STABLE }}
-tar xvzf stratosphere-dist-{{ site.FLINK_VERSION_STABLE }}-yarn.tar.gz
-cd stratosphere-yarn-{{ site.FLINK_VERSION_STABLE }}/
+tar xvzf flink-dist-{{ site.FLINK_VERSION_STABLE }}-yarn.tar.gz
+cd flink-yarn-{{ site.FLINK_VERSION_STABLE }}/
 ./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096
 ```
 
 # Introducing YARN
 
-Apache [Hadoop YARN](http://hadoop.apache.org/) is a cluster resource management framework. It allows to run various distributed applications on top of a cluster. Stratosphere runs on YARN next to other applications. Users do not have to setup or install anything if there is already a YARN setup.
+Apache [Hadoop YARN](http://hadoop.apache.org/) is a cluster resource management framework. It allows running various distributed applications on top of a cluster. Flink runs on YARN next to other applications. Users do not have to set up or install anything if there is already a YARN setup.
 
 **Requirements**
 
 - Apache Hadoop 2.2
 - HDFS
 
-If you have troubles using the Stratosphere YARN client, have a look in the [FAQ section]({{site.baseurl}}/docs/0.5/general/faq.html).
+If you have trouble using the Flink YARN client, have a look at the [FAQ section]({{site.baseurl}}/docs/0.5/general/faq.html).
 
-## Start Stratosphere Session
+## Start Flink Session
 
-Follow these instructions to learn how to launch a Stratosphere Session within your YARN cluster.
+Follow these instructions to learn how to launch a Flink Session within your YARN cluster.
 
-A session will start all required Stratosphere services (JobManager and TaskManagers) so that you can submit programs to the cluster. Note that you can run multiple programs per session.
+A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster. Note that you can run multiple programs per session.
 
-### Download Stratosphere for YARN
+### Download Flink for YARN
 
 Download the YARN tgz package on the [download page]({{site.baseurl}}/downloads/#nightly). It contains the required files.
 
 
-If you want to build the YARN .tgz file from sources, follow the build instructions. Make sure to use the `-Dhadoop.profile=2` profile. You can find the file in `stratosphere-dist/target/stratosphere-dist-{{site.docs_05_stable}}-yarn.tar.gz` (*Note: The version might be different for you* ).
+If you want to build the YARN .tgz file from sources, follow the build instructions. Make sure to use the `-Dhadoop.profile=2` profile. You can find the file in `flink-dist/target/flink-dist-{{site.docs_05_stable}}-yarn.tar.gz` (*Note: The version might be different for you* ).
 
 Extract the package using:
 
 ```bash
-tar xvzf stratosphere-dist-{{site.FLINK_VERSION_STABLE }}-yarn.tar.gz
-cd stratosphere-yarn-{{site.FLINK_VERSION_STABLE }}/
+tar xvzf flink-dist-{{site.FLINK_VERSION_STABLE }}-yarn.tar.gz
+cd flink-yarn-{{site.FLINK_VERSION_STABLE }}/
 ```
 
 ### Start a Session
@@ -75,25 +75,25 @@ Please note that the Client requires the `HADOOP_HOME` (or `YARN_CONF_DIR` or `H
 ./bin/yarn-session.sh -n 10 -tm 8192
 ```
 
-The system will use the configuration in `conf/stratosphere-config.yaml`. Please follow our [configuration guide](config.html) if you want to change something. Stratosphere on YARN will overwrite the following configuration parameters `jobmanager.rpc.address` (because the JobManager is always allocated at different machines) and `taskmanager.tmp.dirs` (we are using the tmp directories given by YARN).
+The system will use the configuration in `conf/flink-conf.yaml`. Please follow our [configuration guide](config.html) if you want to change something. Flink on YARN will overwrite the following configuration parameters: `jobmanager.rpc.address` (because the JobManager is always allocated on different machines) and `taskmanager.tmp.dirs` (we are using the tmp directories given by YARN).
 
 The example invocation starts 11 containers, since there is one additional container for the ApplicationMaster and JobManager.
 
-Once Stratosphere is deployed in your YARN cluster, it will show you the connection details of the JobTracker.
+Once Flink is deployed in your YARN cluster, it will show you the connection details of the JobManager.
 
 The client has to remain open to keep the deployment running. We suggest using `screen`, which will start a detachable shell:
 
 1. Open `screen`,
-2. Start Stratosphere on YARN,
+2. Start Flink on YARN,
 3. Use `CTRL+a`, then press `d` to detach the screen session,
 4. Use `screen -r` to resume again.
 
-# Submit Job to Stratosphere
+# Submit Job to Flink
 
-Use the following command to submit a Stratosphere program to the YARN cluster:
+Use the following command to submit a Flink program to the YARN cluster:
 
 ```bash
-./bin/stratosphere
+./bin/flink
 ```
 
 Please refer to the documentation of the [command-line client](cli.html).
@@ -102,11 +102,11 @@ The command will show you a help menu like this:
 
 ```bash
 [...]
-Action "run" compiles and submits a Stratosphere program.
+Action "run" compiles and submits a Flink program.
   "run" action arguments:
      -a,--arguments <programArgs>   Program arguments
      -c,--class <classname>         Program class
-     -j,--jarfile <jarfile>         Stratosphere program JAR file
+     -j,--jarfile <jarfile>         Flink program JAR file
      -m,--jobmanager <host:port>    Jobmanager to which the program is submitted
      -w,--wait                      Wait for program to finish
 [...]
@@ -119,14 +119,14 @@ Use the *run* action to submit a job to YARN. The client is able to determine th
 ```bash
 wget -O apache-license-v2.txt http://www.apache.org/licenses/LICENSE-2.0.txt
 
-./bin/stratosphere run -j ./examples/stratosphere-java-examples-{{site.FLINK_VERSION_STABLE }}-WordCount.jar \
+./bin/flink run -j ./examples/flink-java-examples-{{site.FLINK_VERSION_STABLE }}-WordCount.jar \
                        -a 1 file://`pwd`/apache-license-v2.txt file://`pwd`/wordcount-result.txt 
 ```
 
 If you see the following error, make sure that all TaskManagers have started:
 
 ```bash
-Exception in thread "main" eu.stratosphere.compiler.CompilerException:
+Exception in thread "main" org.apache.flink.compiler.CompilerException:
     Available instances could not be determined from job manager: Connection timed out.
 ```
 
@@ -134,14 +134,14 @@ You can check the number of TaskManagers in the JobManager web interface. The ad
 
 If the TaskManagers do not show up after a minute, you should investigate the issue using the log files.
 
-# Build Stratosphere for a specific Hadoop Version
+# Build Flink for a specific Hadoop Version
 
-This section covers building Stratosphere for a specific Hadoop version. Most users do not need to do this manually.
-The problem is that Stratosphere uses HDFS and YARN which are both from Apache Hadoop. There exist many different builds of Hadoop (from both the upstream project and the different Hadoop distributions). Typically errors arise with the RPC services. An error could look like this:
+This section covers building Flink for a specific Hadoop version. Most users do not need to do this manually.
+The problem is that Flink uses HDFS and YARN, which are both part of Apache Hadoop. There are many different builds of Hadoop (from both the upstream project and the different Hadoop distributions). Errors typically arise with the RPC services. An error could look like this:
 
 ```
 ERROR: The job was not successfully submitted to the nephele job manager:
-    eu.stratosphere.nephele.executiongraph.GraphConversionException: Cannot compute input splits for TSV:
+    org.apache.flink.nephele.executiongraph.GraphConversionException: Cannot compute input splits for TSV:
     java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException:
     Protocol message contained an invalid tag (zero).; Host Details :
 ```
@@ -154,7 +154,7 @@ mvn -Dhadoop.profile=2 -Pcdh-repo -Dhadoop.version=2.2.0-cdh5.0.0-beta-2 -DskipT
 
 The commands in detail:
 
-*  `-Dhadoop.profile=2` activates the Hadoop YARN profile of Stratosphere. This will enable all components of Stratosphere that are compatible with Hadoop 2.2
+*  `-Dhadoop.profile=2` activates the Hadoop YARN profile of Flink. This will enable all components of Flink that are compatible with Hadoop 2.2.
 *  `-Pcdh-repo` activates the Cloudera Hadoop dependencies. If you want other vendors' Hadoop dependencies (not in Maven Central), add the repository to your local Maven configuration in `~/.m2/`.
 * `-Dhadoop.version=2.2.0-cdh5.0.0-beta-2` sets a special version of the Hadoop dependencies. Make sure that the specified Hadoop version is compatible with the profile you activated.
 
@@ -166,23 +166,23 @@ If you want to build HDFS for Hadoop 2 without YARN, use the following parameter
 
 Some Cloudera versions (such as `2.0.0-cdh4.2.0`) require this, since they have a new HDFS version with the old YARN API.
 
-Please post to the _Stratosphere mailinglist_(dev@flink.incubator.apache.org) or create an issue on [Jira]({{site.FLINK_ISSUES_URL}}), if you have issues with your YARN setup and Stratosphere.
+Please post to the _Flink mailing list_ (dev@flink.incubator.apache.org) or create an issue on [Jira]({{site.FLINK_ISSUES_URL}}) if you have issues with your YARN setup and Flink.
 
 # Background
 
-This section briefly describes how Stratosphere and YARN interact. 
+This section briefly describes how Flink and YARN interact. 
 
-<img src="img/StratosphereOnYarn.svg" class="img-responsive">
+<img src="img/FlinkOnYarn.svg" class="img-responsive">
 
 The YARN client needs to access the Hadoop configuration to connect to the YARN resource manager and to HDFS. It determines the Hadoop configuration using the following strategy:
 
 * Test if `YARN_CONF_DIR`, `HADOOP_CONF_DIR` or `HADOOP_CONF_PATH` are set (in that order). If one of these variables is set, it is used to read the configuration.
 * If the above strategy fails (this should not be the case in a correct YARN setup), the client falls back to the `HADOOP_HOME` environment variable. If it is set, the client tries to access `$HADOOP_HOME/etc/hadoop` (Hadoop 2) and `$HADOOP_HOME/conf` (Hadoop 1).
 
-When starting a new Stratosphere YARN session, the client first checks if the requested resources (containers and memory) are available. After that, it uploads a jar that contains Stratosphere and the configuration to HDFS (step 1).
+When starting a new Flink YARN session, the client first checks if the requested resources (containers and memory) are available. After that, it uploads a jar that contains Flink and the configuration to HDFS (step 1).
 
 The next step of the client is to request (step 2) a YARN container to start the *ApplicationMaster* (step 3). Since the client registered the configuration and jar-file as a resource for the container, the NodeManager of YARN running on that particular machine will take care of preparing the container (e.g. downloading the files). Once that has finished, the *ApplicationMaster* (AM) is started.
 
-The *JobManager* and AM are running in the same container. Once they successfully started, the AM knows the address of the JobManager (its own host). It is generating a new Stratosphere configuration file for the TaskManagers (so that they can connect to the JobManager). The file is also uploaded to HDFS. Additionally, the *AM* container is also serving Stratosphere's web interface.
+The *JobManager* and AM run in the same container. Once they have started successfully, the AM knows the address of the JobManager (its own host). It generates a new Flink configuration file for the TaskManagers (so that they can connect to the JobManager). This file is also uploaded to HDFS. Additionally, the *AM* container serves Flink's web interface.
 
-After that, the AM starts allocating the containers for Stratosphere's TaskManagers, which will download the jar file and the modified configuration from the HDFS. Once these steps are completed, Stratosphere is set up and ready to accept Jobs.
\ No newline at end of file
+After that, the AM starts allocating the containers for Flink's TaskManagers, which will download the jar file and the modified configuration from HDFS. Once these steps are completed, Flink is set up and ready to accept jobs.
\ No newline at end of file