Posted to commits@flink.apache.org by se...@apache.org on 2016/08/25 18:48:04 UTC

[01/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs [Forced Update!]

Repository: flink
Updated Branches:
  refs/heads/flip-6 affa548ca -> 734297615 (forced update)


http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/program_dataflow.svg
----------------------------------------------------------------------
diff --git a/docs/fig/program_dataflow.svg b/docs/fig/program_dataflow.svg
new file mode 100644
index 0000000..7c1ec8d
--- /dev/null
+++ b/docs/fig/program_dataflow.svg
@@ -0,0 +1,546 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   version="1.1"
+   width="632.86151"
+   height="495.70895"
+   id="svg2">
+  <defs
+     id="defs4" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     transform="translate(-89.288343,-87.370121)"
+     id="layer1">
+    <g
+       transform="translate(65.132093,66.963871)"
+       id="g2989">
+      <text
+         x="571.35248"
+         y="45.804131"
+         id="text2991"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
+      <text
+         x="25.304533"
+         y="37.765511"
+         id="text2993"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">DataStream</text>
+      <text
+         x="107.97513"
+         y="37.765511"
+         id="text2995"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&lt;</text>
+      <text
+         x="116.22718"
+         y="37.765511"
+         id="text2997"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#5b9bd5;font-family:Courier New">String</text>
+      <text
+         x="166.0396"
+         y="37.765511"
+         id="text2999"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&gt; </text>
+      <text
+         x="182.69376"
+         y="37.765511"
+         id="text3001"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">lines = </text>
+      <text
+         x="248.86023"
+         y="37.765511"
+         id="text3003"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">env.</text>
+      <text
+         x="282.01849"
+         y="37.765511"
+         id="text3005"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">addSource</text>
+      <text
+         x="356.58704"
+         y="37.765511"
+         id="text3007"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(</text>
+      <text
+         x="282.01849"
+         y="54.269619"
+         id="text3009"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#2f5597;font-family:Courier New">new</text>
+      <text
+         x="315.17673"
+         y="54.269619"
+         id="text3011"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">FlinkKafkaConsumer</text>
+      <text
+         x="464.3139"
+         y="54.269619"
+         id="text3013"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&lt;&gt;(…));</text>
+      <text
+         x="25.304533"
+         y="87.277847"
+         id="text3015"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">DataStream</text>
+      <text
+         x="107.97513"
+         y="87.277847"
+         id="text3017"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&lt;</text>
+      <text
+         x="116.22718"
+         y="87.277847"
+         id="text3019"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#5b9bd5;font-family:Courier New">Event</text>
+      <text
+         x="157.78754"
+         y="87.277847"
+         id="text3021"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&gt; events = </text>
+      <text
+         x="248.86023"
+         y="87.277847"
+         id="text3023"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">lines.</text>
+      <text
+         x="298.52261"
+         y="87.277847"
+         id="text3025"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">map</text>
+      <text
+         x="323.57883"
+         y="87.277847"
+         id="text3027"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">((line) </text>
+      <text
+         x="389.7453"
+         y="87.277847"
+         id="text3029"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#2f5597;font-family:Courier New">-</text>
+      <text
+         x="397.99738"
+         y="87.277847"
+         id="text3031"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#2f5597;font-family:Courier New">&gt;</text>
+      <text
+         x="414.50146"
+         y="87.277847"
+         id="text3033"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">parse</text>
+      <text
+         x="456.06183"
+         y="87.277847"
+         id="text3035"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(line));</text>
+      <text
+         x="25.304533"
+         y="120.28607"
+         id="text3037"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">DataStream</text>
+      <text
+         x="107.97513"
+         y="120.28607"
+         id="text3039"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&lt;</text>
+      <text
+         x="116.22718"
+         y="120.28607"
+         id="text3041"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#5b9bd5;font-family:Courier New">Statistics</text>
+      <text
+         x="199.19786"
+         y="120.28607"
+         id="text3043"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&gt; stats = events</text>
+      <text
+         x="91.471016"
+         y="136.79018"
+         id="text3045"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">.</text>
+      <text
+         x="99.723068"
+         y="136.79018"
+         id="text3047"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">keyBy</text>
+      <text
+         x="141.13339"
+         y="136.79018"
+         id="text3049"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(</text>
+      <text
+         x="149.38544"
+         y="136.79018"
+         id="text3051"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#548235;font-family:Courier New">&quot;id&quot;</text>
+      <text
+         x="182.69376"
+         y="136.79018"
+         id="text3053"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">)</text>
+      <text
+         x="91.471016"
+         y="153.2943"
+         id="text3055"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">.</text>
+      <text
+         x="99.723068"
+         y="153.2943"
+         id="text3057"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">timeWindow</text>
+      <text
+         x="182.69376"
+         y="153.2943"
+         id="text3059"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(</text>
+      <text
+         x="190.94582"
+         y="153.2943"
+         id="text3061"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">Time.seconds</text>
+      <text
+         x="290.27054"
+         y="153.2943"
+         id="text3063"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(10))</text>
+      <text
+         x="91.471016"
+         y="169.79842"
+         id="text3065"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">.</text>
+      <text
+         x="99.723068"
+         y="169.79842"
+         id="text3067"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">apply</text>
+      <text
+         x="141.13339"
+         y="169.79842"
+         id="text3069"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(</text>
+      <text
+         x="149.38544"
+         y="169.79842"
+         id="text3071"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#2f5597;font-family:Courier New">new</text>
+      <text
+         x="182.69376"
+         y="169.79842"
+         id="text3073"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">MyWindowAggregationFunction</text>
+      <text
+         x="406.24942"
+         y="169.79842"
+         id="text3075"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">());</text>
+      <text
+         x="25.304533"
+         y="202.80663"
+         id="text3077"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">stats.</text>
+      <text
+         x="74.816864"
+         y="202.80663"
+         id="text3079"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">addSink</text>
+      <text
+         x="132.88133"
+         y="202.80663"
+         id="text3081"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(</text>
+      <text
+         x="141.13339"
+         y="202.80663"
+         id="text3083"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#2f5597;font-family:Courier New">new</text>
+      <text
+         x="174.4417"
+         y="202.80663"
+         id="text3085"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">RollingSink</text>
+      <text
+         x="265.36435"
+         y="202.80663"
+         id="text3087"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(path));</text>
+      <path
+         d="m 32.445583,379.40702 c 0,-17.70441 14.309815,-32.05174 31.957962,-32.05174 17.648146,0 31.957961,14.34733 31.957961,32.05174 0,17.68566 -14.309815,32.03298 -31.957961,32.03298 -17.648147,0 -31.957962,-14.34732 -31.957962,-32.03298"
+         id="path3089"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="46.974251"
+         y="383.34982"
+         id="text3091"
+         xml:space="preserve"
+         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
+      <path
+         d="m 170.44246,379.40702 c 0,-17.70441 14.34733,-32.05174 32.03298,-32.05174 17.70441,0 32.05174,14.34733 32.05174,32.05174 0,17.68566 -14.34733,32.03298 -32.05174,32.03298 -17.68565,0 -32.03298,-14.34732 -32.03298,-32.03298"
+         id="path3093"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="187.06161"
+         y="383.34982"
+         id="text3095"
+         xml:space="preserve"
+         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
+      <path
+         d="m 104.1822,375.18722 50.16875,0 0,-4.2198 8.43961,8.4396 -8.43961,8.4396 0,-4.2198 -50.16875,0 z"
+         id="path3097"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 532.55767,21.023988 c 4.93248,0 8.90847,0.675168 8.90847,1.500373 l 0,17.49811 c 0,0.825205 3.99475,1.481619 8.90847,1.481619 -4.91372,0 -8.90847,0.675168 -8.90847,1.481619 l 0,17.516864 c 0,0.806451 -3.97599,1.481619 -8.90847,1.481619"
+         id="path3099"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 532.55767,70.573832 c 4.93248,0 8.90847,0.675168 8.90847,1.481619 l 0,9.771184 c 0,0.825206 3.99475,1.481619 8.90847,1.481619 -4.91372,0 -8.90847,0.675168 -8.90847,1.481619 l 0,9.771185 c 0,0.825205 -3.97599,1.481619 -8.90847,1.481619"
+         id="path3101"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 532.08881,104.65107 c 4.93248,0 8.90847,0.65641 8.90847,1.48162 l 0,33.83343 c 0,0.8252 3.99474,1.48162 8.90847,1.48162 -4.91373,0 -8.90847,0.67517 -8.90847,1.48162 l 0,33.85218 c 0,0.80645 -3.97599,1.48162 -8.90847,1.48162"
+         id="path3103"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 532.08881,188.2594 c 4.93248,0 8.90847,0.67517 8.90847,1.48162 l 0,9.62115 c 0,0.8252 3.99474,1.48162 8.90847,1.48162 -4.91373,0 -8.90847,0.65641 -8.90847,1.48161 l 0,9.62115 c 0,0.80645 -3.97599,1.48162 -8.90847,1.48162"
+         id="path3105"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="571.35248"
+         y="86.226501"
+         id="text3107"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Transformation</text>
+      <text
+         x="571.35248"
+         y="145.72234"
+         id="text3109"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Transformation</text>
+      <text
+         x="94.209488"
+         y="313.34818"
+         id="text3111"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
+      <text
+         x="88.508072"
+         y="326.85153"
+         id="text3113"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Operator</text>
+      <path
+         d="m 242.17908,375.18722 50.16875,0 0,-4.2198 8.4396,8.4396 -8.4396,8.4396 0,-4.2198 -50.16875,0 z"
+         id="path3115"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 308.43934,379.40702 c 0,-17.70441 14.34732,-32.05174 32.05173,-32.05174 17.68566,0 32.03299,14.34733 32.03299,32.05174 0,17.68566 -14.34733,32.03298 -32.03299,32.03298 -17.70441,0 -32.05173,-14.34732 -32.05173,-32.03298"
+         id="path3117"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="318.24503"
+         y="371.34683"
+         id="text3119"
+         xml:space="preserve"
+         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
+      <text
+         x="349.15274"
+         y="371.34683"
+         id="text3121"
+         xml:space="preserve"
+         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
+      <text
+         x="314.79416"
+         y="383.34982"
+         id="text3123"
+         xml:space="preserve"
+         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
+      <text
+         x="322.29605"
+         y="395.35281"
+         id="text3125"
+         xml:space="preserve"
+         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
+      <path
+         d="m 380.17596,375.18722 50.16875,0 0,-4.2198 8.4396,8.4396 -8.4396,8.4396 0,-4.2198 -50.16875,0 z"
+         id="path3127"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 446.45497,379.40702 c 0,-17.70441 14.34733,-32.05174 32.03298,-32.05174 17.70441,0 32.03298,14.34733 32.03298,32.05174 0,17.68566 -14.32857,32.03298 -32.03298,32.03298 -17.68565,0 -32.03298,-14.34732 -32.03298,-32.03298"
+         id="path3129"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="467.4761"
+         y="383.34982"
+         id="text3131"
+         xml:space="preserve"
+         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
+      <path
+         d="m 240.45365,250.46865 17.01049,0 0,-16.2603 33.98347,0 0,16.2603 16.99173,0 -33.98347,16.24154 z"
+         id="path3133"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="227.00369"
+         y="313.34818"
+         id="text3135"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Transformation</text>
+      <text
+         x="242.00743"
+         y="326.85153"
+         id="text3137"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Operators</text>
+      <text
+         x="438.99158"
+         y="313.34818"
+         id="text3139"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
+      <text
+         x="426.08835"
+         y="326.85153"
+         id="text3141"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Operator</text>
+      <path
+         d="m 451.65002,333.72064 5.6264,8.21454 -1.03151,0.69393 -5.6264,-8.19579 1.03151,-0.71268 z m 6.4516,6.11402 0.76895,5.53263 -4.89497,-2.70067 4.12602,-2.83196 z"
+         id="path3143"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 89.591069,331.58261 -4.688668,8.04575 1.069017,0.63766 4.688668,-8.06451 -1.069017,-0.6189 z m -5.682665,6.02025 -0.356339,5.58889 4.669913,-3.07577 -4.313574,-2.51312 z"
+         id="path3145"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 234.03956,329.40706 -12.92197,12.80944 0.88147,0.88147 12.92196,-12.79068 -0.88146,-0.90023 z m -13.35333,10.59639 -1.80045,5.28882 5.32633,-1.74418 -3.52588,-3.54464 z"
+         id="path3147"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 301.21879,329.40706 15.51012,14.72242 -0.86272,0.90023 -15.51011,-14.72242 0.86271,-0.90023 z m 15.88521,12.49062 1.91298,5.27006 -5.34509,-1.63166 3.43211,-3.6384 z"
+         id="path3149"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 241.72897,447.91784 -87.19047,-54.5761 0.65641,-1.06902 87.20923,54.5761 -0.67517,1.06902 z m -87.13421,-52.32554 -2.90697,-4.78244 5.57014,0.54389 -2.66317,4.23855 z"
+         id="path3151"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="245.67148"
+         y="463.08081"
+         id="text3153"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Stream</text>
+      <path
+         d="m 291.37259,447.9741 102.90688,-49.34354 -0.54388,-1.12528 -102.90689,49.34354 0.54389,1.12528 z m 102.58806,-47.11174 3.41335,-4.4261 -5.5889,-0.0938 2.17555,4.51987 z"
+         id="path3155"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 270.77996,440.92234 0,-39.79741 -1.25656,0 0,39.79741 1.25656,0 z m 1.87547,-38.54085 -2.49438,-5.00749 -2.51312,5.00749 5.0075,0 z"
+         id="path3157"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="571.35248"
+         y="205.12807"
+         id="text3159"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
+      <path
+         d="m 516.93503,463.33418 c 0,8.15828 -1.10652,14.75993 -2.45686,14.75993 l -241.14758,0 c -1.36909,0 -2.47561,6.62039 -2.47561,14.77868 0,-8.15829 -1.08777,-14.77868 -2.45687,-14.77868 l -241.147571,0 c -1.369091,0 -2.475617,-6.60165 -2.475617,-14.75993"
+         id="path3161"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="214.03168"
+         y="513.79102"
+         id="text3163"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Streaming Dataflow</text>
+      <text
+         x="-125.25892"
+         y="179.02612"
+         transform="translate(24.15625,20.40625)"
+         id="text3257"
+         xml:space="preserve"
+         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"><tspan
+           x="-125.25892"
+           y="179.02612"
+           id="tspan3259"></tspan></text>
+    </g>
+  </g>
+</svg>
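
For reference, the streaming program that this figure's text elements spell out, assembled
into one listing. This is a sketch, not code from the commit: parse(), MyWindowAggregationFunction
and path are placeholders taken from the figure, the FlinkKafkaConsumer arguments are elided
there as "…", and the first line (environment setup) is assumed from the standard Flink 1.x
DataStream API.

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Source operator: ingest raw lines from Kafka (arguments elided in the figure)
    DataStream<String> lines = env.addSource(new FlinkKafkaConsumer<>(…));

    // Transformation operator: parse each line into an Event
    DataStream<Event> events = lines.map((line) -> parse(line));

    // Transformation operators: keyed 10-second time windows with a custom window function
    DataStream<Statistics> stats = events
        .keyBy("id")
        .timeWindow(Time.seconds(10))
        .apply(new MyWindowAggregationFunction());

    // Sink operator: write the windowed statistics to a rolling file sink
    stats.addSink(new RollingSink(path));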


[32/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/checkpoints.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/checkpoints.svg b/docs/concepts/fig/checkpoints.svg
deleted file mode 100644
index c824296..0000000
--- a/docs/concepts/fig/checkpoints.svg
+++ /dev/null
@@ -1,249 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
-   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
-   width="481.59604"
-   height="368.51669"
-   id="svg2"
-   version="1.1"
-   inkscape:version="0.48.5 r10040"
-   sodipodi:docname="state_partitioning.svg">
-  <defs
-     id="defs4" />
-  <sodipodi:namedview
-     id="base"
-     pagecolor="#ffffff"
-     bordercolor="#666666"
-     borderopacity="1.0"
-     inkscape:pageopacity="0.0"
-     inkscape:pageshadow="2"
-     inkscape:zoom="2.8"
-     inkscape:cx="354.96251"
-     inkscape:cy="137.95685"
-     inkscape:document-units="px"
-     inkscape:current-layer="layer1"
-     showgrid="false"
-     fit-margin-right="0.5"
-     fit-margin-bottom="0.3"
-     fit-margin-top="0.3"
-     fit-margin-left="0"
-     inkscape:window-width="2560"
-     inkscape:window-height="1418"
-     inkscape:window-x="1592"
-     inkscape:window-y="-8"
-     inkscape:window-maximized="1" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     inkscape:label="Layer 1"
-     inkscape:groupmode="layer"
-     id="layer1"
-     transform="translate(-130.78007,-350.87488)">
-    <g
-       id="g3138"
-       transform="translate(116.16121,190.10975)">
-      <path
-         id="path3140"
-         d="m 78.39453,344.07322 0,74.86865 95.01117,0 0,-74.86865 -95.01117,0 z"
-         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
-         inkscape:connector-curvature="0" />
-      <text
-         id="text3142"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="377.96512"
-         x="106.64163"
-         xml:space="preserve">Task</text>
-      <text
-         id="text3144"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="398.97034"
-         x="88.036995"
-         xml:space="preserve">Manager</text>
-      <path
-         id="path3146"
-         d="m 207.48294,344.07322 0,74.86865 95.02992,0 0,-74.86865 -95.02992,0 z"
-         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
-         inkscape:connector-curvature="0" />
-      <text
-         id="text3148"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="377.96512"
-         x="235.75273"
-         xml:space="preserve">Task</text>
-      <text
-         id="text3150"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="398.97034"
-         x="217.1481"
-         xml:space="preserve">Manager</text>
-      <path
-         id="path3152"
-         d="m 336.57135,344.07322 0,74.86865 95.17996,0 0,-74.86865 -95.17996,0 z"
-         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
-         inkscape:connector-curvature="0" />
-      <text
-         id="text3154"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="377.96512"
-         x="364.86383"
-         xml:space="preserve">Task</text>
-      <text
-         id="text3156"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="398.97034"
-         x="346.25919"
-         xml:space="preserve">Manager</text>
-      <path
-         id="path3158"
-         d="m 93.079438,161.06513 0,74.85927 95.179962,0 0,-74.85927 -95.179962,0 z"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none"
-         inkscape:connector-curvature="0" />
-      <text
-         id="text3160"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="194.95898"
-         x="125.94909"
-         xml:space="preserve">Job</text>
-      <text
-         id="text3162"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="215.96423"
-         x="102.84333"
-         xml:space="preserve">Manager</text>
-      <text
-         id="text3164"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="202.80112"
-         x="33.991787"
-         xml:space="preserve">(master)</text>
-      <text
-         id="text3166"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="385.80722"
-         x="13.838635"
-         xml:space="preserve">(workers)</text>
-      <path
-         id="path3168"
-         d="m 106.5828,243.0418 -2.25056,5.53263 0,0 -2.15679,5.53262 0,0 -1.0315,2.7757 0,0 -0.994,2.79444 0.0187,-0.0188 -0.918974,2.79444 0,0 -0.843961,2.8132 0,-0.0188 -0.768941,2.83196 0,-0.0188 -0.675168,2.85071 0,-0.0188 -0.581395,2.86946 0,-0.0188 -0.468867,2.88822 0.01875,-0.0188 -0.356339,2.90698 0,-0.0188 -0.225056,2.94448 0,-0.0375 -0.07502,2.96324 0,-0.0187 0.07502,3.00075 0,-0.0375 0.225056,3.0195 0,-0.0188 0.375093,3.05702 -0.01875,-0.0375 0.52513,3.09452 0,-0.0188 0.637659,3.11328 0,-0.0188 0.768942,3.13203 0,-0.0188 0.881469,3.15078 -0.01875,-0.0188 0.975243,3.16954 0,-0.0188 1.069009,3.20705 -0.0187,-0.0187 1.16279,3.20705 0,0 1.21905,3.20705 -0.0187,0 1.27532,3.2258 0,0 1.33158,3.24456 0,-0.0188 2.75693,6.50788 0,0 2.34434,5.40134 -1.14404,0.48762 -2.34433,-5.38259 -2.77569,-6.52662 -1.31283,-3.24456 -1.29407,-3.24456 -1.21906,-3.2258 -1.162782,-3.22581 -1.050262,-3.20705 -0.993997,-3.18829 -0.88147,-3.16954 -0.768941,-3.15079 -0.656414,-3.13203 -0.525131,-3.11327
  -0.375093,-3.09452 -0.225056,-3.05701 -0.05626,-3.03826 0.07502,-2.98199 0.225056,-2.982 0.337585,-2.94448 0.487621,-2.92573 0.581395,-2.88822 0.675168,-2.86946 0.787696,-2.85071 0.843961,-2.83196 0.918979,-2.8132 0.993997,-2.8132 1.031508,-2.79445 2.17554,-5.55138 2.25056,-5.53263 z m 5.30757,86.5153 -1.14403,9.69617 -7.87696,-5.77644 c -0.28132,-0.2063 -0.33759,-0.60015 -0.13129,-0.86272 0.20631,-0.28132 0.60015,-0.33758 0.88147,-0.15003 l 6.9955,5.13878 -0.97525,0.43135 1.01276,-8.62715 c 0.0375,-0.33758 0.35634,-0.58139 0.69392,-0.54388 0.33758,0.0375 0.58139,0.35634 0.54388,0.69392 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3170"
-         d="m 115.3975,337.34029 1.70667,-6.07651 0,0 1.65041,-6.07652 0,0.0188 1.53789,-6.07652 0,0.0188 1.36909,-6.02025 0,0 0.60015,-3.00075 0,0.0188 0.54388,-3.00075 0,0.0188 0.46887,-3.00075 0,0.0188 0.39385,-2.96324 0,0 0.30007,-2.94448 0,0.0188 0.22506,-2.96324 0,0.0375 0.13128,-2.92573 0,0.0188 0.0188,-2.92573 0,0.0375 -0.0938,-2.90697 0,0.0188 -0.2063,-2.86946 0,0.0188 -0.28132,-2.86946 0,0.0188 -0.39385,-2.85071 0.0188,0.0187 -0.48762,-2.85071 0,0.0188 -0.54389,-2.8132 0,0.0188 -0.6189,-2.8132 0.0187,0 -0.67517,-2.8132 0,0.0188 -1.53788,-5.57014 0.0188,0.0188 -1.68792,-5.57014 0,0.0188 -1.80045,-5.55138 0,0 -1.46287,-4.35109 1.18155,-0.4126 1.46286,4.36984 1.80045,5.55138 1.68792,5.58889 1.53788,5.5889 0.67517,2.83195 0.63766,2.83196 0.54389,2.83195 0.46886,2.85071 0.39385,2.88822 0.30008,2.88822 0.18754,2.88822 0.0938,2.92573 -0.0188,2.94448 -0.11253,2.94449 -0.22505,2.96324 -0.31883,2.98199 -0.39385,2.98199 -0.46887,3.0195 -0.54388,3.00075 -0.61891,3.0195 -1.36909,6.0390
 1 -1.53788,6.07651 -1.65041,6.09527 -1.70668,6.07651 z m -2.56939,-83.25199 1.96924,-9.56488 7.35183,6.43285 c 0.26257,0.22506 0.28132,0.6189 0.0563,0.88147 -0.22505,0.26256 -0.6189,0.28132 -0.88147,0.0563 l -6.52662,-5.72017 1.01275,-0.33759 -1.76294,8.49587 c -0.075,0.33758 -0.39385,0.56264 -0.73143,0.48762 -0.33758,-0.075 -0.56264,-0.39385 -0.48762,-0.73143 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3172"
-         d="m 127.0629,243.41689 10.12753,7.78319 5.02625,3.88222 4.98874,3.84471 4.91373,3.82595 4.85746,3.76969 4.76368,3.73218 4.65116,3.67591 4.53863,3.6009 4.40735,3.54463 4.27607,3.46962 4.10727,3.35708 3.91973,3.28207 3.73218,3.16954 3.52587,3.05701 1.68792,1.50038 1.61291,1.44411 1.57539,1.42535 1.50037,1.38785 2.86947,2.70067 2.62565,2.58814 2.47562,2.47562 2.26931,2.38184 2.11928,2.28807 1.96924,2.23181 1.83796,2.11928 1.72543,2.08177 1.6129,2.0255 1.51913,1.96924 1.44411,1.95049 1.38785,1.87547 1.36909,1.89422 1.89422,2.68192 -1.03151,0.71267 -1.89422,-2.68191 0,0 -1.35034,-1.87547 0.0188,0.0188 -1.38785,-1.89422 0,0 -1.44411,-1.93173 0,0.0188 -1.51913,-1.96924 0,0.0188 -1.59414,-2.02551 0,0.0188 -1.70668,-2.08177 0,0.0187 -1.8192,-2.13803 0,0.0188 -1.95049,-2.21305 0,0 -2.10052,-2.26932 0,0 -2.26932,-2.38184 0,0 -2.45686,-2.45687 0.0188,0 -2.64441,-2.56939 0.0188,0 -2.85071,-2.68191 0,0 -1.50037,-1.38785 0,0 -1.55664,-1.4066 0,0 -1.63166,-1.46286 0,0 -1.66916,-1.48162 0,
 0.0188 -3.52588,-3.05701 0,0 -3.71342,-3.16954 0,0 -3.91973,-3.26331 0,0 -4.08852,-3.37584 0,0 -4.27607,-3.45086 0.0188,0 -4.40735,-3.54464 0,0.0188 -4.53863,-3.61965 -4.65116,-3.67592 0,0 -4.76368,-3.73218 0,0 -4.83871,-3.76969 -4.93248,-3.82595 -4.96999,-3.84471 -5.02625,-3.86346 -10.12752,-7.80195 z m 100.69384,82.6706 0.84396,9.71492 -8.88971,-4.05101 c -0.30008,-0.15004 -0.45012,-0.50638 -0.30008,-0.82521 0.13128,-0.31883 0.50638,-0.45011 0.82521,-0.31883 l 7.89571,3.61965 -0.88147,0.61891 -0.75018,-8.64591 c -0.0188,-0.35633 0.22505,-0.65641 0.58139,-0.69392 0.33759,-0.0187 0.63766,0.22506 0.67517,0.5814 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3174"
-         d="m 242.72297,335.48358 -4.57614,-8.25206 0,0 -4.5949,-8.15828 0,0 -2.32558,-4.03226 0,0 -2.36308,-3.97599 0,0.0188 -2.38185,-3.91973 0.0188,0 -2.41936,-3.84471 0.0188,0 -2.45686,-3.76968 0,0.0188 -2.47562,-3.67592 0,0 -2.53188,-3.58214 0,0.0188 -2.56939,-3.46962 0,0.0188 -2.64441,-3.35709 0.0188,0.0188 -2.70068,-3.22581 0.0188,0.0188 -2.75694,-3.07577 0.0188,0.0188 -2.83195,-2.92572 0,0.0188 -2.90697,-2.77569 0.0188,0 -2.96324,-2.62566 0.0188,0.0188 -3.03826,-2.47562 0.0188,0.0188 -3.09452,-2.36309 0.0188,0.0188 -3.15079,-2.2318 0.0188,0 -3.1883,-2.11928 0,0.0188 -3.24456,-2.0255 0.0188,0 -3.28207,-1.93173 0.0188,0 -3.33834,-1.85671 0.0188,0 -3.35709,-1.7817 0.0188,0.0188 -3.3946,-1.72543 0.0188,0 -3.41335,-1.68792 0.0187,0.0187 -6.86421,-3.24456 0.0188,0 -5.77644,-2.64441 0.52513,-1.14403 5.77644,2.64441 6.84545,3.24456 3.41335,1.68792 3.3946,1.72543 3.35708,1.80044 3.35709,1.85672 3.28207,1.95048 3.26331,2.02551 3.20705,2.13803 3.16954,2.25056 3.11328,2.36309 3.05701,2.
 49437 2.98199,2.64441 2.92573,2.79445 2.85071,2.94448 2.77569,3.09452 2.70067,3.2258 2.66317,3.37585 2.58814,3.46961 2.55064,3.6009 2.49437,3.69467 2.45686,3.76969 2.4006,3.86346 2.4006,3.91973 2.34433,3.99474 2.34433,4.03225 4.61365,8.17704 4.57614,8.2333 z m -85.57757,-83.60833 -5.60765,-7.97074 9.71492,-0.95649 c 0.33759,-0.0375 0.63766,0.20631 0.67517,0.56264 0.0375,0.33759 -0.22506,0.63766 -0.56264,0.67517 l -8.6459,0.86272 0.45011,-0.994 5.0075,7.10802 c 0.18754,0.28132 0.13128,0.67517 -0.1688,0.88147 -0.28132,0.18755 -0.65641,0.13128 -0.86271,-0.16879 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3176"
-         d="m 341.37254,343.34179 -123.2557,-87.0967 -2.25056,-1.63166 -2.08177,-1.51913 -1.93173,-1.42535 -1.80045,-1.33158 -1.66917,-1.21906 -1.53788,-1.14403 -1.4066,-1.05027 -1.29407,-0.97524 -1.18155,-0.91898 -1.06901,-0.8252 -0.97525,-0.75019 -0.88147,-0.69392 -0.80645,-0.65642 -0.71267,-0.58139 -0.63766,-0.52513 -0.56264,-0.48762 -0.50638,-0.45012 -0.45011,-0.4126 -0.75019,-0.75018 -0.58139,-0.65642 -0.45012,-0.58139 -0.39384,-0.56264 -0.33759,-0.54389 -0.13128,-0.18755 1.06902,-0.67516 0.11252,0.2063 0.33759,0.52513 -0.0188,-0.0188 0.3751,0.54389 -0.0188,-0.0375 0.43136,0.54389 -0.0187,-0.0188 0.54388,0.6189 -0.0188,-0.0375 0.73143,0.73143 -0.0375,-0.0188 0.45011,0.39385 0,0 0.48762,0.45011 0,-0.0188 0.56264,0.48762 -0.0187,0 0.63766,0.52513 0,0 0.71267,0.5814 -0.0188,0 0.7877,0.63765 0,-0.0188 0.88147,0.69392 0,0 0.97524,0.76894 0,0 1.06901,0.82521 0,-0.0188 1.18155,0.90022 0,0 1.29407,0.97524 -0.0187,0 1.4066,1.05026 1.53788,1.14404 0,0 1.65041,1.23781 0,0 1.80045,1.31282 
 1.93173,1.42536 0,0 2.10052,1.51913 2.23181,1.63165 -0.0188,0 123.27446,87.0967 z m -147.31795,-98.61207 -0.48762,-9.73368 8.72092,4.36984 c 0.31883,0.15004 0.45012,0.52513 0.28133,0.84396 -0.15004,0.30008 -0.52514,0.43136 -0.82521,0.28132 l -7.76443,-3.90097 0.90022,-0.58139 0.43136,8.66465 c 0.0188,0.33759 -0.24381,0.63766 -0.60015,0.65642 -0.33759,0.0188 -0.63766,-0.24381 -0.65642,-0.60015 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3178"
-         d="m 200.99382,225.67497 15.21004,6.07652 7.55813,3.05701 7.50187,3.05701 7.42685,3.07576 7.35184,3.09453 7.2393,3.11327 7.10802,3.15079 6.95798,3.15078 6.80795,3.1883 6.63915,3.2258 6.4141,3.26331 3.15079,1.65041 3.07576,1.65041 3.0195,1.66917 2.96324,1.68792 2.88822,1.68792 2.85071,1.68792 2.75694,1.72543 2.70067,1.72543 2.62566,1.74419 2.56939,1.74418 2.47561,1.76294 2.43811,1.78169 2.36309,1.80045 2.28807,1.7817 2.25056,1.8192 2.1943,1.8192 4.20104,3.67592 4.0135,3.69467 3.82596,3.75093 3.67591,3.76969 3.50713,3.78845 3.39459,3.80719 3.28207,3.82596 3.20705,3.86346 3.11327,3.86346 3.07577,3.86346 5.27006,6.77044 -0.97524,0.76894 -5.28882,-6.77043 0,0 -3.05701,-3.86347 0.0188,0 -3.13204,-3.8447 0,0 -3.18829,-3.84471 0.0188,0.0187 -3.28207,-3.82595 0,0 -3.37584,-3.78844 0,0 -3.50713,-3.78845 0.0188,0.0188 -3.65716,-3.75094 0,0 -3.8072,-3.73218 0,0.0188 -3.99475,-3.69467 0.0188,0.0187 -4.20105,-3.65716 0.0188,0 -2.1943,-1.80045 0,0 -2.23181,-1.8192 0.0188,0.0187 -2.28807,-
 1.80045 0,0.0188 -2.34434,-1.78169 0,0 -2.41935,-1.7817 0.0188,0.0188 -2.49438,-1.76294 0.0188,0 -2.55064,-1.74419 0,0 -2.6069,-1.72543 0,0 -2.70067,-1.72543 0.0188,0 -2.75694,-1.70667 0,0.0188 -2.83196,-1.70667 0.0188,0 -2.88822,-1.68792 0,0 -2.96324,-1.66917 0.0187,0 -3.0195,-1.65041 0,0 -3.07576,-1.65041 0,0 -3.13203,-1.65041 0.0188,0 -6.4141,-3.24456 0,0 -6.6204,-3.2258 0.0188,0.0187 -6.80795,-3.18829 0.0188,0 -6.97674,-3.16954 0,0 -7.08927,-3.13203 0,0 -7.2393,-3.09452 0.0188,0 -7.33307,-3.09453 0,0 -7.44561,-3.07576 0,0 -7.48311,-3.05701 -7.55814,-3.05702 0,0 -15.19128,-6.09526 z m 168.3607,101.55655 1.31282,9.67741 -9.0585,-3.6384 c -0.33759,-0.11253 -0.48763,-0.48763 -0.35634,-0.80645 0.13128,-0.31883 0.48762,-0.46887 0.80645,-0.35634 l 8.06451,3.2258 -0.84396,0.67517 -1.16279,-8.6084 c -0.0563,-0.33758 0.18754,-0.65641 0.52513,-0.71267 0.35634,-0.0375 0.67517,0.2063 0.71268,0.54388 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3180"
-         d="m 270.9675,241.59769 -2.58814,-1.33158 -4.8012,5.70142 -0.97524,-0.48762 4.8012,-5.72018 -2.6069,-1.31283 0.60015,-0.73143 6.18904,3.15079 z m 2.79445,3.90097 -0.0375,-0.0375 c -0.11253,-0.0938 -0.22505,-0.16879 -0.33758,-0.24381 -0.0938,-0.075 -0.22506,-0.15004 -0.39385,-0.24381 -0.30008,-0.15004 -0.63766,-0.22506 -1.01275,-0.24381 -0.3751,-0.0188 -0.75019,0 -1.12528,0.0563 l -2.86947,3.43211 -0.91898,-0.46887 4.03226,-4.8387 0.91898,0.46886 -0.5814,0.71268 c 0.5814,-0.075 1.05026,-0.0938 1.42536,-0.0563 0.35634,0.0375 0.67516,0.11252 0.95648,0.26256 0.15004,0.075 0.26257,0.15004 0.33759,0.18755 0.075,0.0375 0.16879,0.11253 0.28132,0.2063 z m 4.0135,-1.53788 -0.71268,0.84396 -1.05026,-0.54389 0.71268,-0.84396 z m -1.46286,1.6129 -4.05101,4.81995 -0.91898,-0.46887 4.05101,-4.81995 z m 6.30157,3.20705 -3.6009,4.29482 c -0.67517,0.80645 -1.36909,1.31282 -2.08177,1.50037 -0.71268,0.16879 -1.50037,0.0375 -2.38184,-0.4126 -0.28132,-0.13128 -0.54389,-0.30008 -0.80645,-0.48762 
 -0.26257,-0.18755 -0.50638,-0.3751 -0.71268,-0.5814 l 0.69392,-0.80645 0.0375,0.0375 c 0.15004,0.16879 0.35634,0.37509 0.61891,0.60015 0.28132,0.24381 0.54388,0.43136 0.8252,0.56264 0.30008,0.15004 0.5814,0.26257 0.84396,0.28132 0.26257,0.0375 0.48762,0.0188 0.71268,-0.0563 0.2063,-0.0563 0.39385,-0.1688 0.58139,-0.30008 0.1688,-0.13128 0.33759,-0.30007 0.50638,-0.50637 l 0.35634,-0.43136 c -0.45011,0.0563 -0.82521,0.075 -1.12528,0.0563 -0.30008,-0.0187 -0.65642,-0.13128 -1.05026,-0.31883 -0.52513,-0.28132 -0.86272,-0.65641 -0.994,-1.12528 -0.13128,-0.48762 -0.0563,-1.01275 0.24381,-1.59414 0.24381,-0.48763 0.56264,-0.90023 0.93773,-1.27532 0.3751,-0.35634 0.80645,-0.63766 1.25657,-0.84396 0.43135,-0.2063 0.90022,-0.30008 1.38784,-0.31883 0.48762,0 0.93774,0.0938 1.36909,0.31883 0.31883,0.16879 0.5814,0.33758 0.7877,0.54388 0.2063,0.18755 0.37509,0.39385 0.50638,0.60015 l 0.22505,-0.16879 z m -1.65041,0.39385 c -0.15004,-0.20631 -0.31883,-0.39385 -0.50638,-0.5814 -0.18755,-0.16879 -
 0.39385,-0.30007 -0.61891,-0.43136 -0.33758,-0.15003 -0.65641,-0.22505 -0.99399,-0.22505 -0.33759,0.0188 -0.67517,0.0938 -0.994,0.26256 -0.30008,0.15004 -0.5814,0.3751 -0.86272,0.65642 -0.26256,0.30007 -0.48762,0.60015 -0.65641,0.95648 -0.2063,0.3751 -0.26256,0.71268 -0.2063,1.01276 0.075,0.30007 0.30007,0.56264 0.69392,0.75018 0.26257,0.15004 0.5814,0.22506 0.91898,0.26257 0.33759,0.0188 0.67517,0.0188 1.03151,-0.0188 z m 7.93322,2.8132 -3.6009,4.29482 c -0.67516,0.80645 -1.36909,1.31282 -2.08176,1.48162 -0.71268,0.18754 -1.50038,0.0563 -2.38185,-0.39385 -0.28132,-0.15004 -0.54388,-0.30008 -0.80645,-0.48762 -0.26256,-0.18755 -0.50637,-0.39385 -0.71268,-0.5814 l 0.69393,-0.80645 0.0375,0.0188 c 0.15003,0.18755 0.35633,0.39385 0.6189,0.61891 0.28132,0.22506 0.54389,0.4126 0.82521,0.56264 0.30007,0.15004 0.58139,0.24381 0.84396,0.28132 0.26256,0.0375 0.48762,0.0188 0.71267,-0.0563 0.20631,-0.0563 0.39385,-0.1688 0.5814,-0.30008 0.16879,-0.15004 0.33758,-0.31883 0.50637,-0.50638 l 0.35
 634,-0.43135 c -0.45011,0.0563 -0.8252,0.075 -1.12528,0.0563 -0.30007,-0.0188 -0.65641,-0.13128 -1.05026,-0.31883 -0.52513,-0.28132 -0.86271,-0.65641 -0.994,-1.12528 -0.13128,-0.48762 -0.0563,-1.01275 0.24381,-1.6129 0.24382,-0.46887 0.56264,-0.88147 0.93774,-1.25656 0.37509,-0.35634 0.80645,-0.65642 1.25656,-0.86272 0.43136,-0.18755 0.90023,-0.30007 1.38785,-0.30007 0.48762,-0.0188 0.93773,0.0938 1.36909,0.31883 0.31883,0.15003 0.58139,0.33758 0.78769,0.54388 0.20631,0.18755 0.3751,0.39385 0.50638,0.60015 l 0.22506,-0.18755 z m -1.65041,0.39384 c -0.15004,-0.2063 -0.31883,-0.4126 -0.50638,-0.58139 -0.18754,-0.16879 -0.39384,-0.31883 -0.6189,-0.43136 -0.33758,-0.16879 -0.65641,-0.24381 -0.994,-0.22505 -0.33758,0.0187 -0.67517,0.0938 -0.994,0.26256 -0.30007,0.15004 -0.58139,0.3751 -0.86271,0.65642 -0.26257,0.28132 -0.48762,0.60014 -0.65641,0.93773 -0.20631,0.39385 -0.26257,0.73143 -0.20631,1.03151 0.075,0.30007 0.30008,0.54388 0.69393,0.75018 0.26256,0.13129 0.58139,0.22506 0.91898,0
 .24381 0.33758,0.0375 0.67516,0.0375 1.0315,0 z m 5.58889,4.29482 c 0.0563,-0.075 0.11253,-0.15003 0.15004,-0.2063 0.0375,-0.075 0.075,-0.13128 0.13128,-0.22505 0.16879,-0.33759 0.2063,-0.65642 0.13129,-0.95649 -0.0938,-0.30008 -0.35634,-0.54389 -0.76895,-0.76894 -0.46886,-0.22506 -0.95648,-0.30008 -1.46286,-0.18755 -0.50638,0.0938 -0.93773,0.35634 -1.31283,0.76894 z m -3.60089,2.53189 c -0.75019,-0.39385 -1.25657,-0.86272 -1.51913,-1.44411 -0.24381,-0.5814 -0.18755,-1.2003 0.16879,-1.89423 0.50638,-0.99399 1.25656,-1.70667 2.25056,-2.10052 0.994,-0.39385 1.96924,-0.33758 2.90697,0.13128 0.61891,0.31883 1.01276,0.71268 1.18155,1.18155 0.16879,0.48762 0.11253,0.994 -0.16879,1.55664 -0.0563,0.0938 -0.15004,0.24381 -0.28132,0.45011 -0.13129,0.18754 -0.30008,0.4126 -0.50638,0.65641 l -4.03225,-2.04426 c -0.075,0.075 -0.13129,0.16879 -0.18755,0.26257 -0.0563,0.075 -0.0938,0.16879 -0.13128,0.24381 -0.24381,0.45011 -0.28132,0.90022 -0.13129,1.29407 0.13129,0.4126 0.46887,0.75019 0.994,1.01
 275 0.35634,0.16879 0.75019,0.30008 1.2003,0.35634 0.45011,0.0375 0.86272,0.0563 1.2003,0 l 0.0563,0.0375 -0.71267,0.88147 c -0.18755,-0.0188 -0.35634,-0.0375 -0.50638,-0.0563 -0.16879,-0.0188 -0.35634,-0.0563 -0.5814,-0.11253 -0.22505,-0.0563 -0.43135,-0.11253 -0.60014,-0.15004 -0.1688,-0.0563 -0.3751,-0.15004 -0.60015,-0.26256 z m 10.20254,-0.63766 -0.0563,-0.0188 c -0.11252,-0.0938 -0.22505,-0.18754 -0.31883,-0.26256 -0.11252,-0.075 -0.24381,-0.15004 -0.39384,-0.22506 -0.31883,-0.15004 -0.65642,-0.24381 -1.03151,-0.26257 -0.35634,0 -0.73143,0.0188 -1.10653,0.0563 l -2.86946,3.4321 -0.93774,-0.46886 4.03226,-4.83871 0.93773,0.48762 -0.60015,0.71268 c 0.5814,-0.0938 1.06902,-0.11253 1.42536,-0.075 0.35634,0.0375 0.67517,0.13128 0.95649,0.26257 0.16879,0.0938 0.28132,0.15003 0.33758,0.18754 0.075,0.0563 0.16879,0.11253 0.30008,0.20631 z m 4.31357,8.04575 c -0.91898,-0.46887 -1.51913,-1.08777 -1.80045,-1.83796 -0.28132,-0.73143 -0.2063,-1.55663 0.24381,-2.41935 0.33759,-0.67517 0.768
 95,-1.25656 1.25657,-1.72543 0.50637,-0.48762 1.05026,-0.84396 1.65041,-1.08777 0.60015,-0.24381 1.23781,-0.35634 1.91297,-0.31883 0.67517,0.0375 1.35034,0.22506 2.02551,0.56264 0.4126,0.2063 0.7877,0.46887 1.10653,0.76894 0.31882,0.28132 0.63765,0.63766 0.91897,1.06902 l -0.78769,0.97524 -0.075,-0.0375 c -0.0563,-0.15004 -0.11253,-0.31883 -0.16879,-0.45011 -0.0563,-0.15004 -0.16879,-0.33758 -0.33759,-0.56264 -0.13128,-0.18755 -0.30007,-0.37509 -0.52513,-0.56264 -0.2063,-0.18755 -0.46886,-0.35634 -0.76894,-0.52513 -0.45011,-0.22506 -0.91898,-0.33759 -1.4066,-0.35634 -0.46887,0 -0.93773,0.11253 -1.38785,0.30007 -0.45011,0.20631 -0.86271,0.48763 -1.27531,0.88147 -0.39385,0.41261 -0.73144,0.88147 -1.01276,1.4066 -0.33758,0.67517 -0.43135,1.29408 -0.28132,1.83796 0.1688,0.56264 0.5814,0.994 1.23781,1.33158 0.30008,0.1688 0.61891,0.28132 0.93774,0.35634 0.31883,0.0563 0.60015,0.11253 0.90022,0.11253 0.26257,0 0.50638,0 0.73143,-0.0375 0.22506,-0.0188 0.41261,-0.0375 0.5814,-0.075 l 0.075
 ,0.0375 -0.80645,0.994 c -0.43136,0 -0.90023,-0.0375 -1.4066,-0.13128 -0.52514,-0.075 -1.03151,-0.26257 -1.53789,-0.50638 z m 11.30907,0.28132 c -0.0375,0.075 -0.0938,0.16879 -0.18755,0.30008 -0.075,0.13128 -0.16879,0.22505 -0.24381,0.31883 l -2.62565,3.13203 -0.91898,-0.46887 2.30682,-2.73818 c 0.13129,-0.1688 0.22506,-0.30008 0.31883,-0.41261 0.0938,-0.11252 0.16879,-0.24381 0.22506,-0.37509 0.15004,-0.26257 0.18755,-0.50638 0.11253,-0.71268 -0.0563,-0.2063 -0.26257,-0.4126 -0.61891,-0.58139 -0.26256,-0.13128 -0.56264,-0.2063 -0.93773,-0.22506 -0.35634,-0.0188 -0.73143,0 -1.12528,0.0563 l -3.0195,3.60089 -0.91898,-0.46886 5.64515,-6.73293 0.91898,0.46887 -2.04426,2.4381 c 0.46887,-0.075 0.90023,-0.0938 1.27532,-0.0563 0.37509,0.0188 0.71268,0.11253 1.03151,0.28132 0.48762,0.24381 0.78769,0.54388 0.93773,0.91898 0.15004,0.37509 0.0938,0.78769 -0.13128,1.25656 z m 4.78244,3.54463 c 0.0563,-0.075 0.11253,-0.15003 0.15004,-0.22505 0.0375,-0.0563 0.075,-0.13128 0.11253,-0.2063 0.18754,
 -0.33759 0.22505,-0.67517 0.13128,-0.95649 -0.0938,-0.30008 -0.33759,-0.54389 -0.76894,-0.76894 -0.45011,-0.22506 -0.93774,-0.30008 -1.44411,-0.18755 -0.50638,0.0938 -0.93774,0.35634 -1.31283,0.75019 z m -3.6009,2.53188 c -0.76894,-0.39384 -1.25656,-0.88146 -1.51912,-1.44411 -0.24382,-0.58139 -0.18755,-1.20029 0.15003,-1.89422 0.52513,-0.99399 1.27532,-1.70667 2.26932,-2.10052 0.994,-0.39385 1.96924,-0.35634 2.88822,0.13128 0.6189,0.31883 1.01275,0.71268 1.2003,1.18155 0.16879,0.48762 0.11252,0.99399 -0.1688,1.55663 -0.0563,0.0938 -0.15003,0.24382 -0.28132,0.45012 -0.13128,0.18754 -0.31883,0.4126 -0.52513,0.65641 l -4.0135,-2.04426 c -0.075,0.075 -0.13128,0.16879 -0.18754,0.26257 -0.0563,0.075 -0.11253,0.16879 -0.15004,0.24381 -0.22506,0.45011 -0.28132,0.88147 -0.13128,1.29407 0.15003,0.4126 0.48762,0.75019 0.99399,1.01275 0.35634,0.16879 0.76895,0.30008 1.21906,0.33759 0.45011,0.0563 0.84396,0.075 1.2003,0.0187 l 0.0563,0.0375 -0.71268,0.88147 c -0.18754,-0.0188 -0.35634,-0.0375 -0
 .52513,-0.0563 -0.15003,-0.0188 -0.33758,-0.0563 -0.56264,-0.11253 -0.22505,-0.0563 -0.43135,-0.11253 -0.60015,-0.16879 -0.16879,-0.0375 -0.37509,-0.13129 -0.60015,-0.24382 z m 5.7952,2.94449 c -0.35634,-0.16879 -0.63766,-0.3751 -0.86272,-0.60015 -0.22505,-0.2063 -0.39385,-0.45011 -0.48762,-0.73143 -0.11253,-0.26257 -0.15004,-0.56264 -0.13128,-0.86272 0.0375,-0.31883 0.13128,-0.63766 0.30007,-0.994 0.26257,-0.50637 0.60015,-0.93773 0.97525,-1.31282 0.39384,-0.35634 0.80645,-0.63766 1.27531,-0.82521 0.45012,-0.16879 0.93774,-0.26256 1.44411,-0.22505 0.52513,0.0187 1.03151,0.15003 1.50038,0.39384 0.31883,0.1688 0.60015,0.35634 0.84396,0.60015 0.24381,0.22506 0.43135,0.45012 0.58139,0.69393 l -0.71268,0.88147 -0.0375,-0.0188 c -0.0375,-0.0938 -0.0938,-0.2063 -0.15004,-0.33758 -0.0563,-0.13129 -0.13128,-0.26257 -0.24381,-0.39385 -0.0938,-0.13128 -0.2063,-0.26257 -0.33759,-0.39385 -0.15003,-0.13128 -0.31883,-0.26257 -0.52513,-0.35634 -0.63766,-0.33758 -1.29407,-0.31883 -1.96924,0 -0.6564
 1,0.33759 -1.2003,0.88147 -1.59415,1.65041 -0.22505,0.45011 -0.28132,0.86272 -0.16879,1.23781 0.0938,0.37509 0.3751,0.65641 0.80645,0.88147 0.2063,0.11253 0.43136,0.18755 0.67517,0.22506 0.22506,0.0563 0.43136,0.075 0.61891,0.075 0.18754,0.0188 0.37509,0.0188 0.56264,0 0.18754,-0.0188 0.31883,-0.0375 0.39384,-0.0375 l 0.0563,0.0187 -0.71268,0.91898 c -0.33758,-0.0188 -0.69392,-0.075 -1.06902,-0.13128 -0.35633,-0.075 -0.71267,-0.18755 -1.0315,-0.35634 z m 7.93322,3.88222 -1.12528,-0.56264 -0.52513,-3.30083 -1.10652,0.24381 -1.31283,1.55664 -0.91898,-0.46887 5.64516,-6.73292 0.91898,0.48762 -3.61966,4.31357 4.5949,-1.12528 1.21905,0.61891 -4.44485,0.99399 z m 8.30832,-0.46887 c -0.26256,0.54388 -0.6189,1.01275 -1.0315,1.4066 -0.39385,0.39385 -0.82521,0.67517 -1.27532,0.86271 -0.45011,0.20631 -0.91898,0.30008 -1.38785,0.31883 -0.48762,0 -0.91898,-0.0938 -1.35033,-0.31883 -0.28132,-0.15003 -0.54389,-0.31883 -0.75019,-0.50637 -0.22506,-0.18755 -0.39385,-0.4126 -0.52513,-0.63766 l -1.7066
 8,2.0255 -0.91898,-0.46886 5.55139,-6.60165 0.91898,0.46887 -0.43136,0.50638 c 0.4126,-0.0563 0.7877,-0.075 1.16279,-0.0563 0.35634,0.0188 0.71268,0.11253 1.06902,0.28132 0.54388,0.28132 0.86271,0.65641 0.97524,1.12528 0.11253,0.46887 0.0188,0.994 -0.30008,1.59415 z m -1.05026,-0.31883 c 0.2063,-0.37509 0.26257,-0.73143 0.2063,-1.03151 -0.0563,-0.28132 -0.28132,-0.52513 -0.65641,-0.71268 -0.26257,-0.15003 -0.5814,-0.22505 -0.93773,-0.24381 -0.35634,0 -0.71268,0 -1.05027,0.0563 l -2.28807,2.71943 c 0.15004,0.22505 0.31883,0.4126 0.46887,0.56264 0.16879,0.16879 0.39385,0.30007 0.65641,0.45011 0.33759,0.16879 0.69393,0.24381 1.05027,0.22505 0.33758,-0.0375 0.67516,-0.13128 0.97524,-0.30007 0.33758,-0.18755 0.6189,-0.4126 0.88147,-0.71268 0.26256,-0.28132 0.50638,-0.6189 0.69392,-1.01275 z m 7.05176,3.63841 c -0.24381,0.48762 -0.56264,0.91897 -0.93773,1.29407 -0.3751,0.37509 -0.7877,0.65641 -1.23781,0.84396 -0.46887,0.2063 -0.91898,0.31883 -1.4066,0.31883 -0.46887,0 -0.93774,-0.13129 -1
 .44411,-0.3751 -0.63766,-0.33758 -1.05027,-0.76894 -1.23781,-1.33158 -0.16879,-0.56264 -0.0938,-1.16279 0.24381,-1.8192 0.24381,-0.48762 0.56264,-0.91898 0.93773,-1.27532 0.3751,-0.37509 0.7877,-0.65641 1.23781,-0.86271 0.45011,-0.18755 0.91898,-0.30008 1.4066,-0.28132 0.48762,0 0.97524,0.11252 1.44411,0.35634 0.63766,0.31882 1.03151,0.75018 1.21905,1.29407 0.20631,0.56264 0.11253,1.16279 -0.22505,1.83796 z m -2.55064,1.29407 c 0.31883,-0.16879 0.60015,-0.39385 0.86272,-0.69392 0.28132,-0.30008 0.50637,-0.63766 0.69392,-1.01276 0.24381,-0.45011 0.30008,-0.86271 0.2063,-1.2003 -0.0938,-0.35633 -0.35634,-0.6189 -0.75018,-0.8252 -0.31883,-0.16879 -0.63766,-0.24381 -0.95649,-0.2063 -0.33759,0.0188 -0.65642,0.11252 -0.97525,0.28132 -0.30007,0.16879 -0.60015,0.4126 -0.86271,0.71267 -0.28132,0.28132 -0.50638,0.61891 -0.69392,0.994 -0.22506,0.45011 -0.30008,0.86272 -0.2063,1.2003 0.0938,0.35634 0.33758,0.63766 0.75018,0.84396 0.31883,0.15004 0.63766,0.22506 0.95649,0.2063 0.33758,-0.0188 0.
 65641,-0.11253 0.97524,-0.30007 z m 7.95198,-3.33833 -0.69392,0.84396 -1.05026,-0.52513 0.71268,-0.84396 z m -1.44411,1.6129 -4.051,4.8387 -0.91898,-0.46886 4.05101,-4.83871 z m 5.45761,4.36984 c -0.0375,0.075 -0.11252,0.16879 -0.18754,0.30007 -0.075,0.13128 -0.16879,0.22506 -0.24381,0.31883 l -2.62566,3.13203 -0.91898,-0.46887 2.30683,-2.73818 c 0.13128,-0.16879 0.22505,-0.30007 0.31883,-0.4126 0.0938,-0.11253 0.16879,-0.24381 0.22505,-0.37509 0.15004,-0.26257 0.18755,-0.50638 0.11253,-0.71268 -0.0563,-0.2063 -0.28132,-0.4126 -0.6189,-0.5814 -0.26257,-0.13128 -0.56264,-0.2063 -0.93774,-0.22505 -0.35633,-0.0188 -0.73143,0 -1.12528,0.0563 l -3.0195,3.6009 -0.91898,-0.46887 4.05101,-4.8387 0.91898,0.46886 -0.45011,0.54389 c 0.46887,-0.075 0.90022,-0.0938 1.27532,-0.0563 0.35634,0.0188 0.71267,0.11253 1.0315,0.28132 0.48762,0.24382 0.7877,0.54389 0.93774,0.91898 0.13128,0.3751 0.0938,0.7877 -0.13129,1.25657 z m 5.53263,1.23781 -0.54388,0.67516 -1.89422,-0.97524 -1.87547,2.23181 c -0.09
 38,0.0938 -0.2063,0.22505 -0.30008,0.39384 -0.11252,0.15004 -0.2063,0.26257 -0.24381,0.3751 -0.13128,0.22505 -0.15004,0.43136 -0.0938,0.6189 0.0563,0.16879 0.26256,0.33759 0.58139,0.50638 0.13129,0.075 0.30008,0.13128 0.50638,0.18755 0.2063,0.0375 0.35634,0.075 0.43136,0.075 l 0.0563,0.0188 -0.58139,0.71268 c -0.2063,-0.0563 -0.41261,-0.11253 -0.63766,-0.18755 -0.22506,-0.075 -0.41261,-0.15004 -0.5814,-0.24381 -0.45011,-0.22506 -0.75018,-0.50638 -0.88147,-0.82521 -0.15003,-0.31883 -0.11252,-0.69392 0.0938,-1.12528 0.0563,-0.11253 0.11252,-0.2063 0.18754,-0.30007 0.075,-0.0938 0.15004,-0.2063 0.24381,-0.31883 l 2.17555,-2.58815 -0.61891,-0.31883 0.54389,-0.65641 0.6189,0.31883 1.16279,-1.38785 0.93773,0.46887 -1.18154,1.38785 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3182"
-         d="m 229.5947,278.93824 -0.90023,-0.6189 1.06902,-2.04426 -2.88822,-2.00675 -2.4006,1.10652 -0.95649,-0.63766 8.66466,-3.95723 1.18155,0.80645 z m 0.60015,-3.46961 1.85671,-3.6009 -4.23856,1.95048 z m 2.36309,5.68266 c -0.31883,-0.22505 -0.5814,-0.45011 -0.76895,-0.69392 -0.2063,-0.24381 -0.33758,-0.50638 -0.39384,-0.80645 -0.075,-0.28132 -0.075,-0.56264 0,-0.88147 0.0563,-0.30008 0.2063,-0.60015 0.43135,-0.91898 0.31883,-0.48762 0.69393,-0.86272 1.12528,-1.18154 0.43136,-0.30008 0.90023,-0.52514 1.38785,-0.63766 0.46887,-0.11253 0.95649,-0.13129 1.46286,-0.0375 0.50638,0.0938 0.97525,0.28132 1.42536,0.60015 0.30007,0.2063 0.54388,0.43135 0.75019,0.69392 0.2063,0.26256 0.37509,0.50638 0.48762,0.76894 l -0.82521,0.7877 -0.0375,-0.0375 c -0.0375,-0.0938 -0.0563,-0.2063 -0.0938,-0.33759 -0.0563,-0.15003 -0.11253,-0.28132 -0.18755,-0.43135 -0.075,-0.15004 -0.18754,-0.30008 -0.30007,-0.45012 -0.11253,-0.15003 -0.28132,-0.28132 -0.46887,-0.4126 -0.58139,-0.4126 -1.23781,-0.48762 
 -1.95048,-0.26256 -0.69393,0.24381 -1.29408,0.71267 -1.7817,1.42535 -0.30007,0.4126 -0.4126,0.82521 -0.35634,1.2003 0.0563,0.37509 0.28132,0.71268 0.69393,0.994 0.18754,0.13128 0.39384,0.22505 0.6189,0.30007 0.22506,0.075 0.43136,0.13129 0.6189,0.16879 0.18755,0.0375 0.3751,0.0563 0.56264,0.0563 0.1688,0.0188 0.30008,0.0188 0.39385,0.0188 l 0.0375,0.0375 -0.80645,0.80645 c -0.35634,-0.075 -0.69392,-0.16879 -1.05026,-0.28132 -0.35634,-0.11252 -0.67517,-0.28132 -0.97524,-0.48762 z m 7.33307,4.91373 -1.0315,-0.71268 -0.075,-3.33833 -1.14404,0.0938 -1.50037,1.36909 -0.84396,-0.60015 6.48911,-5.88897 0.84397,0.5814 -4.16354,3.78844 4.70742,-0.46886 1.12528,0.76894 -4.53863,0.39385 z m 2.17554,1.51912 -1.08777,-0.75018 1.2003,-1.08777 1.08777,0.75018 z m 8.15829,5.81395 c -0.86272,-0.58139 -1.36909,-1.25656 -1.55664,-2.04426 -0.16879,-0.76894 0.0187,-1.57539 0.58139,-2.38184 0.43136,-0.6189 0.91898,-1.12528 1.48162,-1.53788 0.54389,-0.41261 1.14404,-0.69393 1.7817,-0.84396 0.6189,-0.1688 
 1.25656,-0.18755 1.93173,-0.0563 0.65641,0.11253 1.29407,0.39385 1.93173,0.82521 0.37509,0.26256 0.71268,0.56264 0.994,0.90022 0.28132,0.33759 0.52513,0.73143 0.75018,1.18155 l -0.90022,0.88147 -0.075,-0.0563 c -0.0188,-0.16879 -0.0563,-0.31883 -0.0938,-0.48762 -0.0563,-0.15004 -0.13128,-0.35634 -0.26257,-0.58139 -0.11253,-0.20631 -0.26256,-0.41261 -0.45011,-0.63766 -0.18755,-0.22506 -0.4126,-0.41261 -0.69392,-0.61891 -0.41261,-0.28132 -0.86272,-0.46886 -1.33158,-0.52513 -0.48763,-0.075 -0.95649,-0.0375 -1.42536,0.0938 -0.46887,0.13128 -0.93773,0.37509 -1.38785,0.71267 -0.45011,0.33759 -0.84396,0.75019 -1.18154,1.25657 -0.43136,0.6189 -0.60015,1.21905 -0.52513,1.78169 0.075,0.56264 0.43136,1.06902 1.03151,1.48162 0.30007,0.2063 0.58139,0.35634 0.88147,0.46887 0.30007,0.11252 0.60015,0.18754 0.88147,0.24381 0.26256,0.0375 0.50637,0.0563 0.73143,0.0563 0.22505,0.0188 0.43136,0.0188 0.60015,0 l 0.0563,0.0375 -0.91898,0.90022 c -0.43135,-0.0563 -0.90022,-0.16879 -1.38784,-0.31882 -0.506
 38,-0.1688 -0.994,-0.39385 -1.44411,-0.73144 z m 11.15903,1.80045 c -0.0563,0.075 -0.13129,0.16879 -0.22506,0.28132 -0.0938,0.11253 -0.18755,0.2063 -0.28132,0.28132 l -3.0195,2.75694 -0.86272,-0.5814 2.66317,-2.41935 c 0.15003,-0.13128 0.26256,-0.26256 0.37509,-0.35634 0.0938,-0.11253 0.18755,-0.22505 0.26257,-0.33758 0.18754,-0.26257 0.24381,-0.48762 0.2063,-0.71268 -0.0188,-0.2063 -0.2063,-0.43136 -0.54389,-0.65641 -0.22505,-0.15004 -0.52513,-0.26257 -0.88147,-0.33759 -0.35634,-0.0563 -0.73143,-0.0938 -1.12528,-0.0938 l -3.46961,3.15078 -0.84396,-0.58139 6.48911,-5.90772 0.84396,0.58139 -2.34433,2.13804 c 0.46887,0 0.90022,0.0375 1.25656,0.11252 0.3751,0.075 0.69393,0.22506 0.994,0.43136 0.45011,0.30008 0.71268,0.63766 0.80645,1.03151 0.0938,0.39385 -0.0188,0.80645 -0.30007,1.21905 z m 4.25731,4.16354 c 0.075,-0.075 0.13128,-0.13128 0.18754,-0.18755 0.0375,-0.0563 0.0938,-0.13128 0.15004,-0.2063 0.2063,-0.31883 0.30008,-0.63766 0.24381,-0.93773 -0.0375,-0.30008 -0.26256,-0.5814 -0
 .65641,-0.84396 -0.43136,-0.30008 -0.90023,-0.43136 -1.4066,-0.39385 -0.52513,0.0375 -0.994,0.22505 -1.4066,0.58139 z m -3.90097,2.0255 c -0.71268,-0.48762 -1.14404,-1.0315 -1.31283,-1.63165 -0.16879,-0.61891 -0.0375,-1.21906 0.4126,-1.85672 0.63766,-0.91897 1.48162,-1.51912 2.51313,-1.78169 1.05026,-0.24381 1.98799,-0.075 2.85071,0.52513 0.58139,0.39385 0.91898,0.82521 1.03151,1.33158 0.0938,0.48762 -0.0188,0.994 -0.3751,1.51913 -0.075,0.075 -0.18755,0.22506 -0.33758,0.39385 -0.16879,0.18755 -0.3751,0.37509 -0.60015,0.60015 l -3.71343,-2.58815 c -0.0938,0.075 -0.15003,0.1688 -0.22505,0.24381 -0.0563,0.075 -0.11253,0.15004 -0.1688,0.22506 -0.30007,0.43136 -0.39384,0.84396 -0.30007,1.27532 0.075,0.4126 0.37509,0.78769 0.84396,1.12528 0.33758,0.22506 0.71268,0.39385 1.16279,0.50638 0.43136,0.11252 0.82521,0.16879 1.18154,0.16879 l 0.0563,0.0375 -0.82521,0.78769 c -0.18754,-0.0375 -0.35634,-0.0938 -0.50637,-0.13128 -0.15004,-0.0375 -0.33759,-0.0938 -0.56264,-0.18755 -0.20631,-0.075 -0.
 41261,-0.15003 -0.56264,-0.22505 -0.1688,-0.075 -0.35634,-0.18755 -0.56264,-0.33759 z m 5.34508,3.69467 c -0.31883,-0.22505 -0.5814,-0.45011 -0.76894,-0.69392 -0.18755,-0.24381 -0.31883,-0.52513 -0.39385,-0.80645 -0.075,-0.28132 -0.075,-0.58139 0,-0.88147 0.0563,-0.30007 0.2063,-0.6189 0.43136,-0.93773 0.31882,-0.46887 0.71267,-0.86272 1.12528,-1.16279 0.43135,-0.30008 0.90022,-0.52513 1.38784,-0.63766 0.46887,-0.13128 0.95649,-0.13128 1.46287,-0.0563 0.50637,0.0938 0.99399,0.30008 1.42535,0.60015 0.30008,0.20631 0.54389,0.45012 0.75019,0.71268 0.2063,0.26257 0.37509,0.50638 0.48762,0.76894 l -0.80645,0.7877 -0.0563,-0.0375 c -0.0188,-0.0938 -0.0563,-0.22506 -0.0938,-0.35634 -0.0375,-0.13128 -0.11253,-0.26256 -0.18755,-0.4126 -0.075,-0.15004 -0.16879,-0.30008 -0.30007,-0.45011 -0.11253,-0.15004 -0.26257,-0.30008 -0.46887,-0.43136 -0.58139,-0.39385 -1.21905,-0.48762 -1.93173,-0.24381 -0.71268,0.24381 -1.31283,0.71268 -1.80045,1.4066 -0.30007,0.43136 -0.4126,0.8252 -0.35634,1.21905 0.
 0563,0.3751 0.30008,0.69392 0.69393,0.97524 0.18754,0.13129 0.39385,0.24382 0.63766,0.31883 0.22505,0.075 0.4126,0.13129 0.60015,0.1688 0.18754,0.0188 0.37509,0.0375 0.56264,0.0563 0.16879,0.0188 0.30007,0.0188 0.39384,0.0188 l 0.0563,0.0375 -0.82521,0.7877 c -0.35634,-0.0563 -0.69392,-0.15004 -1.05026,-0.26256 -0.35634,-0.11253 -0.67517,-0.28132 -0.97524,-0.48763 z m 7.33307,4.91373 -1.0315,-0.71268 -0.075,-3.33833 -1.14404,0.0938 -1.50037,1.36909 -0.84396,-0.60015 6.48912,-5.90772 0.86271,0.60015 -4.18229,3.78845 4.70742,-0.48763 1.12528,0.7877 -4.53863,0.37509 z m 8.30832,0.65641 c -0.33758,0.48762 -0.75018,0.91898 -1.20029,1.23781 -0.46887,0.33758 -0.93774,0.56264 -1.40661,0.69392 -0.46886,0.15004 -0.93773,0.18755 -1.4066,0.13129 -0.46886,-0.0563 -0.90022,-0.22506 -1.29407,-0.48763 -0.26256,-0.18754 -0.48762,-0.39384 -0.67517,-0.6189 -0.18754,-0.2063 -0.33758,-0.45011 -0.43135,-0.69392 l -1.96924,1.78169 -0.84396,-0.58139 6.37658,-5.81395 0.84396,0.60015 -0.48762,0.43135 c 0.412
 61,0 0.80645,0.0375 1.16279,0.0938 0.35634,0.075 0.69393,0.2063 1.03151,0.45011 0.48762,0.33758 0.75019,0.75019 0.80645,1.21905 0.0375,0.48762 -0.13128,0.994 -0.50638,1.55664 z m -0.99399,-0.45011 c 0.24381,-0.37509 0.35634,-0.69392 0.33758,-0.994 -0.0188,-0.30007 -0.2063,-0.58139 -0.54388,-0.8252 -0.26257,-0.1688 -0.56264,-0.28132 -0.91898,-0.33759 -0.33759,-0.075 -0.69393,-0.0938 -1.03151,-0.11253 l -2.64441,2.4006 c 0.13128,0.24381 0.26257,0.45011 0.39385,0.63766 0.15004,0.16879 0.33758,0.33758 0.60015,0.52513 0.30007,0.2063 0.63766,0.31883 0.994,0.33759 0.35633,0.0375 0.69392,-0.0188 1.01275,-0.15004 0.35634,-0.13128 0.67517,-0.33759 0.97524,-0.5814 0.30008,-0.24381 0.5814,-0.56264 0.82521,-0.90022 z m 6.48911,4.53863 c -0.30007,0.45011 -0.67516,0.82521 -1.10652,1.14403 -0.43136,0.33759 -0.86272,0.56265 -1.33158,0.69393 -0.48763,0.13128 -0.95649,0.16879 -1.42536,0.11253 -0.46887,-0.0563 -0.93773,-0.24381 -1.38784,-0.56264 -0.60015,-0.41261 -0.95649,-0.91898 -1.05027,-1.48162 -0.
 0938,-0.5814 0.0563,-1.16279 0.46887,-1.76294 0.31883,-0.45011 0.69392,-0.84396 1.10653,-1.16279 0.43135,-0.30008 0.88147,-0.54389 1.35033,-0.67517 0.46887,-0.13128 0.93774,-0.16879 1.42536,-0.0938 0.50637,0.0563 0.95649,0.24381 1.38784,0.54388 0.5814,0.39385 0.93774,0.88147 1.05026,1.46287 0.11253,0.56264 -0.0563,1.16279 -0.48762,1.78169 z m -2.70067,0.93773 c 0.33758,-0.11252 0.65641,-0.31883 0.95649,-0.56264 0.30007,-0.26256 0.58139,-0.56264 0.8252,-0.91898 0.28132,-0.43135 0.41261,-0.80645 0.35634,-1.16279 -0.0563,-0.35633 -0.26256,-0.67516 -0.6189,-0.91897 -0.30008,-0.20631 -0.61891,-0.31883 -0.93773,-0.33759 -0.33759,-0.0188 -0.65642,0.0375 -0.994,0.15004 -0.33759,0.13128 -0.65642,0.31883 -0.95649,0.58139 -0.31883,0.26257 -0.5814,0.56264 -0.82521,0.90023 -0.28132,0.4126 -0.4126,0.80645 -0.35633,1.18154 0.0375,0.35634 0.26256,0.65642 0.6189,0.91898 0.30007,0.2063 0.60015,0.31883 0.93773,0.33759 0.31883,0.0188 0.65642,-0.0375 0.994,-0.1688 z m 8.34583,-2.2318 -0.82521,0.73143 -0
 .95648,-0.65641 0.80645,-0.75019 z m -1.66917,1.4066 -4.65115,4.23855 -0.86272,-0.60015 4.66991,-4.23855 z m 4.81995,5.045 c -0.0375,0.075 -0.11252,0.1688 -0.22505,0.28132 -0.0938,0.11253 -0.18755,0.20631 -0.28132,0.28132 l -3.0195,2.75694 -0.84396,-0.60015 2.6444,-2.4006 c 0.15004,-0.15003 0.28132,-0.26256 0.3751,-0.37509 0.11253,-0.0938 0.18754,-0.2063 0.28132,-0.31883 0.16879,-0.26256 0.24381,-0.48762 0.2063,-0.71268 -0.0375,-0.2063 -0.22506,-0.43135 -0.54389,-0.65641 -0.22505,-0.15004 -0.52513,-0.26257 -0.88147,-0.33758 -0.37509,-0.0563 -0.75018,-0.0938 -1.12528,-0.11253 l -3.48836,3.16954 -0.84397,-0.5814 4.66992,-4.23855 0.84396,0.58139 -0.52513,0.46887 c 0.48762,0 0.90022,0.0375 1.27532,0.11253 0.35633,0.075 0.69392,0.2063 0.99399,0.4126 0.43136,0.31883 0.69393,0.65641 0.7877,1.05026 0.0938,0.39385 0,0.80645 -0.30008,1.21905 z m 5.32633,1.988 -0.6189,0.58139 -1.76294,-1.21905 -2.15679,1.95049 c -0.11253,0.0938 -0.22506,0.2063 -0.35634,0.35634 -0.13128,0.13128 -0.22505,0.24381
  -0.28132,0.33758 -0.15004,0.2063 -0.2063,0.4126 -0.16879,0.60015 0.0375,0.18755 0.2063,0.37509 0.50638,0.58139 0.11252,0.0938 0.28132,0.1688 0.46886,0.24382 0.2063,0.075 0.33759,0.13128 0.41261,0.15003 l 0.0563,0.0188 -0.65641,0.6189 c -0.20631,-0.075 -0.41261,-0.15004 -0.61891,-0.26256 -0.2063,-0.11253 -0.39385,-0.20631 -0.54388,-0.31883 -0.41261,-0.28132 -0.67517,-0.60015 -0.76895,-0.93774 -0.0938,-0.33758 -0.0187,-0.69392 0.26257,-1.10652 0.0563,-0.0938 0.13128,-0.16879 0.2063,-0.26257 0.0938,-0.0938 0.18755,-0.18755 0.28132,-0.28132 l 2.51313,-2.26931 -0.5814,-0.41261 0.63766,-0.58139 0.5814,0.4126 1.33158,-1.21905 0.86271,0.58139 -1.35033,1.21906 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3184"
-         d="m 410.35223,475.95607 c 0,-3.86346 14.96623,-6.99549 33.43958,-6.99549 18.47335,0 33.43958,3.13203 33.43958,6.99549 l 0,28.03824 c 0,3.86346 -14.96623,6.99549 -33.43958,6.99549 -18.47335,0 -33.43958,-3.13203 -33.43958,-6.99549 z"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3186"
-         d="m 477.23139,475.95607 c 0,3.88222 -14.96623,7.01425 -33.43958,7.01425 -18.47335,0 -33.43958,-3.13203 -33.43958,-7.01425"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3188"
-         d="m 410.35223,475.95607 c 0,-3.86346 14.96623,-6.99549 33.43958,-6.99549 18.47335,0 33.43958,3.13203 33.43958,6.99549 l 0,28.03824 c 0,3.86346 -14.96623,6.99549 -33.43958,6.99549 -18.47335,0 -33.43958,-3.13203 -33.43958,-6.99549 z"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <text
-         id="text3190"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="526.71808"
-         x="402.59354"
-         xml:space="preserve">(snapshot store)</text>
-      <path
-         id="path3192"
-         d="m 358.25175,418.11666 -1.4066,8.73968 0,0 -0.67517,4.33233 0,-0.0188 -0.60015,4.27606 0,0 -0.54388,4.20105 0,0 -0.45012,4.10727 0,-0.0188 -0.33758,4.0135 0,-0.0188 -0.22506,3.88222 0,-0.0188 -0.075,3.73218 0,-0.0188 0.075,3.58214 0,-0.0375 0.26257,3.3946 -0.0188,-0.0375 0.46887,3.2258 0,-0.0375 0.30008,1.51913 -0.0188,-0.0375 0.37509,1.48162 -0.0187,-0.0375 0.43136,1.42536 -0.0188,-0.0375 0.48762,1.35033 -0.0188,-0.0188 0.54389,1.27532 -0.0188,-0.0375 0.60015,1.23781 -0.0187,-0.0375 0.67516,1.16279 -0.0187,-0.0375 0.75019,1.08777 -0.0375,-0.0375 0.8252,1.01275 -0.0375,-0.0375 0.88147,0.95649 -0.0375,-0.0375 0.95649,0.88147 -0.0375,-0.0188 1.01275,0.82521 -0.0375,-0.0375 1.06902,0.76894 -0.0375,-0.0187 1.14404,0.69392 -0.0375,-0.0188 1.2003,0.63765 -0.0375,-0.0188 1.25656,0.5814 -0.0375,-0.0188 2.64441,1.01275 -0.0375,-0.0188 2.83196,0.80645 -0.0375,0 3.0195,0.63766 -0.0375,-0.0188 3.18829,0.48762 -0.0188,0 3.31958,0.31883 -0.0188,0 3.46962,0.22506 -0.0375,0 3.56338,0.093
 8 -0.0188,0 3.65716,0.0188 -0.0187,0 3.71342,-0.0563 0,0 3.78844,-0.11252 -0.0188,0 3.88222,-0.15004 0.0563,1.23781 -3.88222,0.16879 -3.78844,0.0938 -3.73218,0.0563 -3.67592,0 -3.56338,-0.11253 -3.48837,-0.2063 -3.35709,-0.33759 -3.20705,-0.48762 -3.07576,-0.63766 -2.88822,-0.8252 -2.70068,-1.03151 -1.29407,-0.60015 -1.21905,-0.65641 -1.16279,-0.71268 -1.10653,-0.7877 -1.05026,-0.84396 -0.97524,-0.91898 -0.91898,-0.97524 -0.84396,-1.06902 -0.76894,-1.12528 -0.69393,-1.20029 -0.6189,-1.25657 -0.56264,-1.33158 -0.48762,-1.36909 -0.43136,-1.44411 -0.37509,-1.50037 -0.30008,-1.55664 -0.46886,-3.24456 -0.26257,-3.45086 -0.075,-3.61965 0.075,-3.75094 0.22506,-3.90097 0.35634,-4.03225 0.45011,-4.12603 0.54388,-4.2198 0.60015,-4.27607 0.67517,-4.33233 1.42536,-8.75843 1.21905,0.2063 z m 44.03597,65.11623 5.08252,2.28807 -4.89497,2.70067 -0.18755,-4.98874 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3194"
-         d="m 247.59918,420.55477 -0.37509,4.87622 0,-0.0188 -0.26257,4.85746 0,-0.0188 -0.0938,4.8387 0,-0.0187 0.0938,4.80119 0,-0.0187 0.13129,2.38184 0,-0.0187 0.18754,2.36309 0,-0.0188 0.24381,2.34434 0,-0.0188 0.30008,2.30683 0,-0.0188 0.37509,2.26932 0,-0.0188 0.45011,2.25056 0,-0.0188 0.52514,2.21305 0,-0.0188 0.6189,2.1943 -0.0187,-0.0375 0.71267,2.13803 -0.0188,-0.0187 0.78769,2.08177 -0.0187,-0.0188 0.88147,2.04426 -0.0188,-0.0188 0.97525,1.98799 -0.0188,-0.0188 1.08777,1.95048 -0.0188,-0.0375 1.16279,1.89423 -0.0188,-0.0188 1.29408,1.81921 -0.0188,-0.0188 1.38785,1.76294 -0.0188,-0.0188 1.50038,1.70667 -0.0188,-0.0188 1.63166,1.65041 -0.0375,-0.0375 1.76294,1.57539 -0.0375,-0.0188 1.87547,1.50037 -0.0188,-0.0188 1.988,1.42536 -0.0188,-0.0188 2.13803,1.33159 -0.0188,-0.0188 2.26932,1.27532 -0.0375,-0.0188 2.41935,1.18155 -0.0188,-0.0188 2.53188,1.10653 -0.0187,-0.0188 2.64441,1.01275 -0.0188,-0.0188 2.7757,0.93774 -0.0375,0 2.86946,0.84396 -0.0188,0 2.98199,0.76894 -0.018
 8,-0.0188 3.09453,0.71268 -0.0188,0 3.1883,0.61891 0,0 3.30082,0.56264 -0.0188,0 3.3946,0.48762 0,0 3.50712,0.45011 -0.0188,0 3.61965,0.37509 -0.0188,0 3.73218,0.31883 -0.0187,0 3.82595,0.28132 0,0 3.91973,0.22506 0,0 4.03225,0.16879 0,0 4.12603,0.15004 0,0 4.23856,0.0938 -0.0188,0 4.35109,0.0563 0,0 4.4261,0.0188 0,0 4.53863,-0.0188 0,0 4.6324,-0.0375 0,0 4.72618,-0.0563 0,0 4.83871,-0.0938 0,0 4.95123,-0.0938 0,0 5.02625,-0.13129 5.13878,-0.13128 5.2138,-0.15004 5.32633,-0.15003 5.4201,-0.15004 5.51387,-0.16879 5.60765,-0.1688 1.96924,-0.0563 0.0188,1.25656 -1.95049,0.0563 -5.60765,0.15003 -5.51387,0.1688 -5.4201,0.16879 -5.32633,0.15004 -5.23255,0.15003 -5.12003,0.13129 -5.045,0.11252 -4.93248,0.11253 -4.85746,0.075 -4.72618,0.075 -4.65116,0.0375 -4.53863,0 -4.44486,-0.0188 -4.35108,-0.0563 -4.23856,-0.0938 -4.14478,-0.13129 -4.03225,-0.18754 -3.93849,-0.22506 -3.8447,-0.26256 -3.73218,-0.31883 -3.61965,-0.39385 -3.52588,-0.45011 -3.41335,-0.48763 -3.31958,-0.56264 -3.20705,-0.63
 765 -3.11328,-0.71268 -3.00074,-0.76894 -2.88822,-0.86272 -2.79445,-0.93773 -2.68192,-1.01275 -2.55063,-1.10653 -2.45686,-1.18154 -2.28807,-1.29408 -2.1943,-1.35033 -2.02551,-1.44411 -1.89422,-1.53789 -1.78169,-1.59414 -1.65041,-1.66917 -1.53789,-1.74418 -1.4066,-1.80045 -1.31282,-1.85671 -1.18155,-1.91298 -1.10652,-1.988 -0.97525,-2.0255 -0.88147,-2.08177 -0.78769,-2.11928 -0.71268,-2.15679 -0.6189,-2.21305 -0.54389,-2.2318 -0.45011,-2.26932 -0.39385,-2.30682 -0.31883,-2.32558 -0.24381,-2.34434 -0.18755,-2.38184 -0.11252,-2.4006 -0.0938,-4.8387 0.0938,-4.85746 0.24381,-4.87622 0.39384,-4.87621 1.23781,0.0938 z m 153.61952,70.33002 5.06377,2.36309 -4.93248,2.64441 -0.13129,-5.0075 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path3196"
-         d="m 104.81986,419.2607 0.0188,3.0195 0,-0.0375 0.15004,2.98199 0,-0.0375 0.30007,2.94449 -0.0187,-0.0375 0.46886,2.90697 0,-0.0188 0.60015,2.85071 -0.0187,-0.0375 0.75018,2.83195 0,-0.0375 0.90023,2.77569 0,-0.0188 1.05026,2.73818 -0.0187,-0.0375 1.20029,2.68191 -0.0187,-0.0188 1.35034,2.64441 -0.0188,-0.0375 1.50038,2.6069 -0.0188,-0.0375 1.65041,2.55063 -0.0188,-0.0187 1.8192,2.49437 -0.0187,-0.0187 1.95048,2.4381 -0.0187,-0.0187 2.10052,2.38184 -0.0187,-0.0187 2.25056,2.34433 -0.0188,-0.0188 2.4006,2.28807 -0.0188,-0.0188 2.55064,2.21305 -0.0187,0 2.70067,2.17555 -0.0188,-0.0188 2.86947,2.10052 -0.0188,-0.0188 3.00075,2.04426 -0.0188,0 3.15078,1.98799 -0.0188,-0.0188 3.31957,1.93173 -0.0188,-0.0188 3.45086,1.85671 -0.0188,0 3.61966,1.78169 -0.0188,0 3.75094,1.72543 -0.0188,-0.0188 3.91973,1.66916 -0.0188,-0.0188 4.06977,1.59415 -0.0188,0 4.2198,1.51912 -0.0187,-0.0188 4.36984,1.44411 -0.0188,0 4.51988,1.36909 -0.0188,0 4.66992,1.29407 -0.0188,0 9.62115,2.28807 -0.0375,0
  9.92122,1.89422 -0.0375,-0.0187 10.27756,1.51913 -0.0187,0 10.70892,1.2003 -0.0188,0 11.19654,0.88147 -0.0188,0 5.8327,0.33758 0,0 5.96398,0.28132 0,0 6.13278,0.22506 -0.0187,0 6.32032,0.18754 0,0 6.48912,0.11253 -0.0188,0 6.67667,0.0938 0,0 6.88296,0.0375 7.08927,0.0188 -0.0188,0 7.31433,-0.0188 7.52062,-0.0375 0,0 7.76443,-0.0563 8.00825,-0.075 8.27081,-0.0938 8.51462,-0.0938 8.79594,-0.0938 9.07726,-0.0938 9.37734,-0.075 9.65866,-0.075 9.97748,-0.0563 10.27756,-0.0375 10.61515,0 7.20179,0 0,1.25656 -7.20179,-0.0188 -10.59639,0.0188 -10.29632,0.0375 -9.97748,0.0563 -9.65866,0.0563 -9.35858,0.0938 -9.07726,0.0938 -8.79594,0.0938 -8.53338,0.0938 -8.25206,0.075 -8.00824,0.075 -7.76444,0.075 -7.53937,0.0375 -7.31433,0.0188 -7.08926,-0.0188 -6.86421,-0.0375 -6.69542,-0.0938 -6.48912,-0.13128 -6.32032,-0.16879 -6.13278,-0.22506 -5.98274,-0.28132 -5.8327,-0.35633 -11.2153,-0.88147 -10.72767,-1.18155 -10.29631,-1.53788 -9.93998,-1.89422 -9.6399,-2.30683 -4.68867,-1.29407 -4.53863,-1.3690
 9 -4.36984,-1.46287 -4.23856,-1.51912 -4.08851,-1.59415 -3.91973,-1.66917 -3.76969,-1.72543 -3.63841,-1.80045 -3.46961,-1.85671 -3.33833,-1.95048 -3.16954,-1.988 -3.0195,-2.06301 -2.88822,-2.11928 -2.71943,-2.1943 -2.56939,-2.2318 -2.43811,-2.30683 -2.26931,-2.36309 -2.11928,-2.41935 -1.96924,-2.45686 -1.81921,-2.53188 -1.66916,-2.56939 -1.51913,-2.62566 -1.36909,-2.68192 -1.2003,-2.71942 -1.06902,-2.75694 -0.91898,-2.8132 -0.75018,-2.85071 -0.61891,-2.90698 -0.46886,-2.92572 -0.30008,-2.982 -0.15004,-3.00074 0,-3.03826 1.23781,0 z m 298.19929,77.96317 4.98874,2.51313 -5.00749,2.49437 0.0188,-5.0075 z"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
-         inkscape:connector-curvature="0" />
-      <text
-         id="text3198"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="468.82831"
-         x="182.75459"
-         xml:space="preserve">store state</text>
-      <text
-         id="text3200"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
-         y="482.33167"
-         x="184.85512"
-         xml:space="preserve">snapshots</text>
-    </g>
-  </g>
-</svg>


[54/89] [abbrv] flink git commit: [FLINK-4253] [config] Rename 'recovery.mode' key to 'high-availability'

Posted by se...@apache.org.
[FLINK-4253] [config] Rename 'recovery.mode' key to 'high-availability'


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/01ffe34c
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/01ffe34c
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/01ffe34c

Branch: refs/heads/flip-6
Commit: 01ffe34c682c5f22928fbb476896600d0177d84e
Parents: 7206455
Author: Ramkrishna <ra...@intel.com>
Authored: Tue Aug 9 14:48:12 2016 +0530
Committer: Ufuk Celebi <uc...@apache.org>
Committed: Wed Aug 24 12:09:24 2016 +0200

----------------------------------------------------------------------
 docs/setup/config.md                            |  26 ++--
 docs/setup/jobmanager_high_availability.md      |  39 +++---
 .../org/apache/flink/client/cli/DefaultCLI.java |   2 +-
 .../flink/configuration/ConfigConstants.java    |  83 ++++++++++++-
 .../apache/flink/util/ConfigurationUtil.java    | 101 ++++++++++++++++
 .../flink/util/ConfigurationUtilTest.java       | 115 ++++++++++++++++++
 flink-dist/src/main/flink-bin/bin/config.sh     |  22 +++-
 flink-dist/src/main/resources/flink-conf.yaml   |   2 +-
 .../webmonitor/WebRuntimeMonitorITCase.java     |   4 +-
 .../apache/flink/runtime/blob/BlobServer.java   |  14 +--
 .../flink/runtime/blob/FileSystemBlobStore.java |   7 +-
 .../StandaloneCheckpointIDCounter.java          |   4 +-
 .../StandaloneCheckpointRecoveryFactory.java    |   4 +-
 .../StandaloneCompletedCheckpointStore.java     |   4 +-
 .../ZooKeeperCheckpointIDCounter.java           |   4 +-
 .../ZooKeeperCheckpointRecoveryFactory.java     |   4 +-
 .../ZooKeeperCompletedCheckpointStore.java      |   4 +-
 .../jobmanager/HighAvailabilityMode.java        |  86 +++++++++++++
 .../flink/runtime/jobmanager/RecoveryMode.java  |  72 -----------
 .../StandaloneSubmittedJobGraphStore.java       |   2 +-
 .../ZooKeeperSubmittedJobGraphStore.java        |   2 +-
 .../runtime/util/LeaderRetrievalUtils.java      |  36 ++----
 .../flink/runtime/util/ZooKeeperUtils.java      | 120 +++++++++++++------
 .../flink/runtime/jobmanager/JobManager.scala   |  19 +--
 .../runtime/minicluster/FlinkMiniCluster.scala  |  24 ++--
 .../flink/runtime/blob/BlobRecoveryITCase.java  |   8 +-
 .../BlobLibraryCacheRecoveryITCase.java         |   8 +-
 .../jobmanager/JobManagerHARecoveryTest.java    |   4 +-
 .../JobManagerLeaderElectionTest.java           |   4 +-
 .../ZooKeeperLeaderElectionTest.java            |  28 ++---
 .../ZooKeeperLeaderRetrievalTest.java           |  50 +++++++-
 .../runtime/testutils/JobManagerProcess.java    |   2 +-
 .../runtime/testutils/TaskManagerProcess.java   |   2 +-
 .../runtime/testutils/ZooKeeperTestUtils.java   |  22 ++--
 .../flink/runtime/util/ZooKeeperUtilTest.java   |   2 +-
 .../zookeeper/ZooKeeperTestEnvironment.java     |   6 +-
 .../runtime/testingUtils/TestingUtils.scala     |   6 +-
 .../apache/flink/test/util/TestBaseUtils.java   |   2 +-
 .../test/util/ForkableFlinkMiniCluster.scala    |  10 +-
 .../flink/test/recovery/ChaosMonkeyITCase.java  |   2 +-
 .../JobManagerHAJobGraphRecoveryITCase.java     |   4 +-
 ...agerHAProcessFailureBatchRecoveryITCase.java |   4 +-
 .../ZooKeeperLeaderElectionITCase.java          |   8 +-
 ...CliFrontendYarnAddressConfigurationTest.java |   4 +-
 .../flink/yarn/YARNHighAvailabilityITCase.java  |   2 +-
 .../yarn/AbstractYarnClusterDescriptor.java     |   8 +-
 .../flink/yarn/YarnApplicationMasterRunner.java |   2 +-
 .../flink/yarn/cli/FlinkYarnSessionCli.java     |   6 +-
 48 files changed, 702 insertions(+), 292 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/docs/setup/config.md
----------------------------------------------------------------------
diff --git a/docs/setup/config.md b/docs/setup/config.md
index 6bc9655..e6a335b 100644
--- a/docs/setup/config.md
+++ b/docs/setup/config.md
@@ -134,7 +134,7 @@ will be used under the directory specified by jobmanager.web.tmpdir.
 
 - `state.backend.fs.checkpointdir`: Directory for storing checkpoints in a Flink supported filesystem. Note: State backend must be accessible from the JobManager, use `file://` only for local setups.
 
-- `recovery.zookeeper.storageDir`: Required for HA. Directory for storing JobManager metadata; this is persisted in the state backend and only a pointer to this state is stored in ZooKeeper. Exactly like the checkpoint directory it must be accessible from the JobManager and a local filesystem should only be used for local deployments.
+- `high-availability.zookeeper.storageDir`: Required for HA. Directory for storing JobManager metadata; this is persisted in the state backend and only a pointer to this state is stored in ZooKeeper. Exactly like the checkpoint directory, it must be accessible from the JobManager, and a local filesystem should only be used for local deployments. Previously, this key was named `recovery.zookeeper.storageDir`.
 
 - `blob.storage.directory`: Directory for storing blobs (such as user jar's) on the TaskManagers.
 
@@ -285,29 +285,29 @@ of the JobManager, because the same ActorSystem is used. Its not possible to use
 
 ## High Availability Mode
 
-- `recovery.mode`: (Default 'standalone') Defines the recovery mode used for the cluster execution. Currently, Flink supports the 'standalone' mode where only a single JobManager runs and no JobManager state is checkpointed. The high availability mode 'zookeeper' supports the execution of multiple JobManagers and JobManager state checkpointing. Among the group of JobManagers, ZooKeeper elects one of them as the leader which is responsible for the cluster execution. In case of a JobManager failure, a standby JobManager will be elected as the new leader and is given the last checkpointed JobManager state. In order to use the 'zookeeper' mode, it is mandatory to also define the `recovery.zookeeper.quorum` configuration value.
+- `high-availability`: (Default 'none') Defines the high availability mode used for the cluster execution. Currently, Flink supports the 'none' mode, where only a single JobManager runs and no JobManager state is checkpointed. The high availability mode 'zookeeper' supports the execution of multiple JobManagers and JobManager state checkpointing. Among the group of JobManagers, ZooKeeper elects one of them as the leader, which is responsible for the cluster execution. In case of a JobManager failure, a standby JobManager is elected as the new leader and is given the last checkpointed JobManager state. In order to use the 'zookeeper' mode, it is mandatory to also define the `high-availability.zookeeper.quorum` configuration value. Previously, this key was named 'recovery.mode' and its default value was 'standalone'.
 
-- `recovery.zookeeper.quorum`: Defines the ZooKeeper quorum URL which is used to connet to the ZooKeeper cluster when the 'zookeeper' recovery mode is selected
+- `high-availability.zookeeper.quorum`: Defines the ZooKeeper quorum URL which is used to connect to the ZooKeeper cluster when the 'zookeeper' high availability mode is selected. Previously, this key was named `recovery.zookeeper.quorum`.
 
-- `recovery.zookeeper.path.root`: (Default '/flink') Defines the root dir under which the ZooKeeper recovery mode will create namespace directories.
+- `high-availability.zookeeper.path.root`: (Default '/flink') Defines the root directory under which the ZooKeeper high availability mode creates namespace directories. Previously, this key was named `recovery.zookeeper.path.root`.
 
-- `recovery.zookeeper.path.namespace`: (Default '/default_ns' in standalone mode, or the <yarn-application-id> under Yarn) Defines the subdirectory under the root dir where the ZooKeeper recovery mode will create znodes. This allows to isolate multiple applications on the same ZooKeeper.
+- `high-availability.zookeeper.path.namespace`: (Default '/default_ns' in standalone mode, or the <yarn-application-id> under Yarn) Defines the subdirectory under the root directory where the ZooKeeper high availability mode creates znodes. This makes it possible to isolate multiple applications on the same ZooKeeper installation. Previously, this key was named `recovery.zookeeper.path.namespace`.
 
-- `recovery.zookeeper.path.latch`: (Default '/leaderlatch') Defines the znode of the leader latch which is used to elect the leader.
+- `high-availability.zookeeper.path.latch`: (Default '/leaderlatch') Defines the znode of the leader latch which is used to elect the leader. Previously, this key was named `recovery.zookeeper.path.latch`.
 
-- `recovery.zookeeper.path.leader`: (Default '/leader') Defines the znode of the leader which contains the URL to the leader and the current leader session ID
+- `high-availability.zookeeper.path.leader`: (Default '/leader') Defines the znode of the leader which contains the URL to the leader and the current leader session ID. Previously, this key was named `recovery.zookeeper.path.leader`.
 
-- `recovery.zookeeper.storageDir`: Defines the directory in the state backend where the JobManager metadata will be stored (ZooKeeper only keeps pointers to it). Required for HA.
+- `high-availability.zookeeper.storageDir`: Defines the directory in the state backend where the JobManager metadata will be stored (ZooKeeper only keeps pointers to it). Required for HA. Previously, this key was named `recovery.zookeeper.storageDir`.
 
-- `recovery.zookeeper.client.session-timeout`: (Default '60000') Defines the session timeout for the ZooKeeper session in ms.
+- `high-availability.zookeeper.client.session-timeout`: (Default '60000') Defines the session timeout for the ZooKeeper session in ms. Previously, this key was named `recovery.zookeeper.client.session-timeout`.
 
-- `recovery.zookeeper.client.connection-timeout`: (Default '15000') Defines the connection timeout for ZooKeeper in ms.
+- `high-availability.zookeeper.client.connection-timeout`: (Default '15000') Defines the connection timeout for ZooKeeper in ms. Previously, this key was named `recovery.zookeeper.client.connection-timeout`.
 
-- `recovery.zookeeper.client.retry-wait`: (Default '5000') Defines the pause between consecutive retries in ms.
+- `high-availability.zookeeper.client.retry-wait`: (Default '5000') Defines the pause between consecutive retries in ms. Previously, this key was named `recovery.zookeeper.client.retry-wait`.
 
-- `recovery.zookeeper.client.max-retry-attempts`: (Default '3') Defines the number of connection retries before the client gives up.
+- `high-availability.zookeeper.client.max-retry-attempts`: (Default '3') Defines the number of connection retries before the client gives up. Previously, this key was named `recovery.zookeeper.client.max-retry-attempts`.
 
-- `recovery.job.delay`: (Default 'akka.ask.timeout') Defines the delay before persisted jobs are recovered in case of a recovery situation.
+- `high-availability.job.delay`: (Default 'akka.ask.timeout') Defines the delay before persisted jobs are recovered after a JobManager failure. Previously, this key was named `recovery.job.delay`.
 
 ## Environment
 

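For quick reference, the renaming in this commit maps the old keys to the new ones as follows (derived from the hunks above and below; the old keys keep working as deprecated aliases):

    recovery.mode                                -> high-availability
    recovery.jobmanager.port                     -> high-availability.jobmanager.port
    recovery.job.delay                           -> high-availability.job.delay
    recovery.zookeeper.quorum                    -> high-availability.zookeeper.quorum
    recovery.zookeeper.path.root                 -> high-availability.zookeeper.path.root
    recovery.zookeeper.path.namespace            -> high-availability.zookeeper.path.namespace
    recovery.zookeeper.path.latch                -> high-availability.zookeeper.path.latch
    recovery.zookeeper.path.leader               -> high-availability.zookeeper.path.leader
    recovery.zookeeper.storageDir                -> high-availability.zookeeper.storageDir
    recovery.zookeeper.client.session-timeout    -> high-availability.zookeeper.client.session-timeout
    recovery.zookeeper.client.connection-timeout -> high-availability.zookeeper.client.connection-timeout
    recovery.zookeeper.client.retry-wait         -> high-availability.zookeeper.client.retry-wait
    recovery.zookeeper.client.max-retry-attempts -> high-availability.zookeeper.client.max-retry-attempts
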
http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/docs/setup/jobmanager_high_availability.md
----------------------------------------------------------------------
diff --git a/docs/setup/jobmanager_high_availability.md b/docs/setup/jobmanager_high_availability.md
index d4f329a..dd6782d 100644
--- a/docs/setup/jobmanager_high_availability.md
+++ b/docs/setup/jobmanager_high_availability.md
@@ -42,7 +42,7 @@ As an example, consider the following setup with three JobManager instances:
 
 ### Configuration
 
-To enable JobManager High Availability you have to set the **recovery mode** to *zookeeper*, configure a **ZooKeeper quorum** and set up a **masters file** with all JobManagers hosts and their web UI ports.
+To enable JobManager High Availability, you have to set the **high-availability mode** to *zookeeper*, configure a **ZooKeeper quorum**, and set up a **masters file** with all JobManager hosts and their web UI ports.
 
 Flink leverages **[ZooKeeper](http://zookeeper.apache.org)** for  *distributed coordination* between all running JobManager instances. ZooKeeper is a separate service from Flink, which provides highly reliable distributed coordination via leader election and light-weight consistent state storage. Check out [ZooKeeper's Getting Started Guide](http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html) for more information about ZooKeeper. Flink includes scripts to [bootstrap a simple ZooKeeper](#bootstrap-zookeeper) installation.
 
@@ -58,38 +58,39 @@ jobManagerAddress1:webUIPort1
 jobManagerAddressX:webUIPortX
   </pre>
 
-By default, the job manager will pick a *random port* for inter process communication. You can change this via the **`recovery.jobmanager.port`** key. This key accepts single ports (e.g. `50010`), ranges (`50000-50025`), or a combination of both (`50010,50011,50020-50025,50050-50075`).
+By default, the job manager will pick a *random port* for inter-process communication. You can change this via the **`high-availability.jobmanager.port`** key. This key accepts single ports (e.g. `50010`), ranges (`50000-50025`), or a combination of both (`50010,50011,50020-50025,50050-50075`).
 
 #### Config File (flink-conf.yaml)
 
 In order to start an HA-cluster add the following configuration keys to `conf/flink-conf.yaml`:
 
-- **Recovery mode** (required): The *recovery mode* has to be set in `conf/flink-conf.yaml` to *zookeeper* in order to enable high availability mode.
+- **High availability mode** (required): The *high-availability* key has to be set to *zookeeper* in `conf/flink-conf.yaml` in order to enable high availability mode.
 
-  <pre>recovery.mode: zookeeper</pre>
+  <pre>high-availability: zookeeper</pre>
+  Previously, this key was named 'recovery.mode' and its default value was 'standalone'.
 
 - **ZooKeeper quorum** (required): A *ZooKeeper quorum* is a replicated group of ZooKeeper servers, which provide the distributed coordination service.
 
-  <pre>recovery.zookeeper.quorum: address1:2181[,...],addressX:2181</pre>
+  <pre>high-availability.zookeeper.quorum: address1:2181[,...],addressX:2181</pre>
 
   Each *addressX:port* refers to a ZooKeeper server, which is reachable by Flink at the given address and port.
 
 - **ZooKeeper root** (recommended): The *root ZooKeeper node*, under which all cluster namespace nodes are placed.
 
-  <pre>recovery.zookeeper.path.root: /flink
+  <pre>high-availability.zookeeper.path.root: /flink</pre>
 
 - **ZooKeeper namespace** (recommended): The *namespace ZooKeeper node*, under which all required coordination data for a cluster is placed.
 
-  <pre>recovery.zookeeper.path.namespace: /default_ns # important: customize per cluster</pre>
+  <pre>high-availability.zookeeper.path.namespace: /default_ns # important: customize per cluster</pre>
 
-  **Important**: if you are running multiple Flink HA clusters, you have to manually configure separate namespaces for each cluster. By default, the Yarn cluster and the Yarn session automatically generate namespaces based on Yarn application id. A manual configuration overrides this behaviour in Yarn. Specifying a namespace with the -z CLI option, in turn, overrides manual configuration.
+  **Important**: if you are running multiple Flink HA clusters, you have to manually configure separate namespaces for each cluster. By default, the Yarn cluster and the Yarn session automatically generate namespaces based on Yarn application id. A manual configuration overrides this behaviour in Yarn. Specifying a namespace with the -z CLI option, in turn, overrides manual configuration. 
 
 - **State backend and storage directory** (required): JobManager meta data is persisted in the *state backend* and only a pointer to this state is stored in ZooKeeper. Currently, only the file system state backend is supported in HA mode.
 
     <pre>
 state.backend: filesystem
 state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
-recovery.zookeeper.storageDir: hdfs:///flink/recovery</pre>
+high-availability.zookeeper.storageDir: hdfs:///flink/recovery</pre>
 
     The `storageDir` stores all metadata needed to recover from a JobManager failure.
 
@@ -100,13 +101,13 @@ After configuring the masters and the ZooKeeper quorum, you can use the provided
 1. **Configure recovery mode and ZooKeeper quorum** in `conf/flink-conf.yaml`:
 
    <pre>
-recovery.mode: zookeeper
-recovery.zookeeper.quorum: localhost:2181
-recovery.zookeeper.path.root: /flink
-recovery.zookeeper.path.namespace: /cluster_one # important: customize per cluster
+high-availability: zookeeper
+high-availability.zookeeper.quorum: localhost:2181
+high-availability.zookeeper.path.root: /flink
+high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
 state.backend: filesystem
 state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
-recovery.zookeeper.storageDir: hdfs:///flink/recovery</pre>
+high-availability.zookeeper.storageDir: hdfs:///flink/recovery</pre>
 
 2. **Configure masters** in `conf/masters`:
 
@@ -186,13 +187,13 @@ This means that the application can be restarted 10 times before YARN fails the
 1. **Configure recovery mode and ZooKeeper quorum** in `conf/flink-conf.yaml`:
 
    <pre>
-recovery.mode: zookeeper
-recovery.zookeeper.quorum: localhost:2181
-recovery.zookeeper.path.root: /flink
-recovery.zookeeper.path.namespace: /cluster_one # important: customize per cluster
+high-availability: zookeeper
+high-availability.zookeeper.quorum: localhost:2181
+high-availability.zookeeper.path.root: /flink
+high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
 state.backend: filesystem
 state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
-recovery.zookeeper.storageDir: hdfs:///flink/recovery
+high-availability.zookeeper.storageDir: hdfs:///flink/recovery
 yarn.application-attempts: 10</pre>
 
 3. **Configure ZooKeeper server** in `conf/zoo.cfg` (currently it's only possible to run a single ZooKeeper server per machine):

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-clients/src/main/java/org/apache/flink/client/cli/DefaultCLI.java
----------------------------------------------------------------------
diff --git a/flink-clients/src/main/java/org/apache/flink/client/cli/DefaultCLI.java b/flink-clients/src/main/java/org/apache/flink/client/cli/DefaultCLI.java
index 5f83c3d..18fa323 100644
--- a/flink-clients/src/main/java/org/apache/flink/client/cli/DefaultCLI.java
+++ b/flink-clients/src/main/java/org/apache/flink/client/cli/DefaultCLI.java
@@ -64,7 +64,7 @@ public class DefaultCLI implements CustomCommandLine<StandaloneClusterClient> {
 
 		if (commandLine.hasOption(CliFrontendParser.ZOOKEEPER_NAMESPACE_OPTION.getOpt())) {
 			String zkNamespace = commandLine.getOptionValue(CliFrontendParser.ZOOKEEPER_NAMESPACE_OPTION.getOpt());
-			config.setString(ConfigConstants.ZOOKEEPER_NAMESPACE_KEY, zkNamespace);
+			config.setString(ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY, zkNamespace);
 		}
 
 		StandaloneClusterDescriptor descriptor = new StandaloneClusterDescriptor(config);

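The hunk above is the CLI side of the rename: the namespace given with the -z option (see the HA documentation above) is now stored under the new constant. As a minimal, hypothetical sketch of the same effect done programmatically (class name and namespace value are illustrative; the constant is the one introduced in this commit):

    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;

    public class ZkNamespaceExample {
        public static void main(String[] args) {
            Configuration config = new Configuration();
            // Roughly what DefaultCLI does when "-z /cluster_one" is passed:
            config.setString(ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY, "/cluster_one");
            System.out.println(config.getString(ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY, "/default_ns"));
        }
    }
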
http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java b/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java
index 98a843d..5cc1161 100644
--- a/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java
+++ b/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java
@@ -626,52 +626,127 @@ public final class ConfigConstants {
 
 	// --------------------------- Recovery -----------------------------------
 
-	/** Defines recovery mode used for the cluster execution ("standalone", "zookeeper") */
+	/** Defines the recovery mode used for the cluster execution ("standalone", "zookeeper").
+	 *  Use {@link #HIGH_AVAILABILITY} instead.
+	 */
+	@Deprecated
 	public static final String RECOVERY_MODE = "recovery.mode";
 
+	/** Defines the high availability mode used for the cluster execution ("NONE", "ZOOKEEPER"). */
+	public static final String HIGH_AVAILABILITY = "high-availability";
+
 	/** Ports used by the job manager if not in standalone recovery mode */
+	@Deprecated
 	public static final String RECOVERY_JOB_MANAGER_PORT = "recovery.jobmanager.port";
 
+	/** Ports used by the job manager if not in 'none' high-availability mode */
+	public static final String HA_JOB_MANAGER_PORT =
+		"high-availability.jobmanager.port";
+
 	/** The time before the JobManager recovers persisted jobs */
+	@Deprecated
 	public static final String RECOVERY_JOB_DELAY = "recovery.job.delay";
 
+	/** The time before the JobManager recovers persisted jobs */
+	public static final String HA_JOB_DELAY = "high-availability.job.delay";
+
 	// --------------------------- ZooKeeper ----------------------------------
 
 	/** ZooKeeper servers. */
+	@Deprecated
 	public static final String ZOOKEEPER_QUORUM_KEY = "recovery.zookeeper.quorum";
 
+	/** ZooKeeper servers. */
+	public static final String HA_ZOOKEEPER_QUORUM_KEY =
+		"high-availability.zookeeper.quorum";
+
 	/**
 	 * File system state backend base path for recoverable state handles. Recovery state is written
 	 * to this path and the file state handles are persisted for recovery.
 	 */
+	@Deprecated
 	public static final String ZOOKEEPER_RECOVERY_PATH = "recovery.zookeeper.storageDir";
 
+	/**
+	 * File system state backend base path for recoverable state handles. Recovery state is written
+	 * to this path and the file state handles are persisted for recovery.
+	 */
+	public static final String ZOOKEEPER_HA_PATH =
+		"high-availability.zookeeper.storageDir";
+
 	/** ZooKeeper root path. */
+	@Deprecated
 	public static final String ZOOKEEPER_DIR_KEY = "recovery.zookeeper.path.root";
 
+	/** ZooKeeper root path. */
+	public static final String HA_ZOOKEEPER_DIR_KEY =
+		"high-availability.zookeeper.path.root";
+
+	@Deprecated
 	public static final String ZOOKEEPER_NAMESPACE_KEY = "recovery.zookeeper.path.namespace";
 
+	public static final String HA_ZOOKEEPER_NAMESPACE_KEY =
+		"high-availability.zookeeper.path.namespace";
+
+	@Deprecated
 	public static final String ZOOKEEPER_LATCH_PATH = "recovery.zookeeper.path.latch";
 
+	public static final String HA_ZOOKEEPER_LATCH_PATH =
+		"high-availability.zookeeper.path.latch";
+
+	@Deprecated
 	public static final String ZOOKEEPER_LEADER_PATH = "recovery.zookeeper.path.leader";
 
+	public static final String HA_ZOOKEEPER_LEADER_PATH = "high-availability.zookeeper.path.leader";
+
 	/** ZooKeeper root path (ZNode) for job graphs. */
+	@Deprecated
 	public static final String ZOOKEEPER_JOBGRAPHS_PATH = "recovery.zookeeper.path.jobgraphs";
 
+	/** ZooKeeper root path (ZNode) for job graphs. */
+	public static final String HA_ZOOKEEPER_JOBGRAPHS_PATH =
+		"high-availability.zookeeper.path.jobgraphs";
+
 	/** ZooKeeper root path (ZNode) for completed checkpoints. */
+	@Deprecated
 	public static final String ZOOKEEPER_CHECKPOINTS_PATH = "recovery.zookeeper.path.checkpoints";
 
+	/** ZooKeeper root path (ZNode) for completed checkpoints. */
+	public static final String HA_ZOOKEEPER_CHECKPOINTS_PATH =
+		"high-availability.zookeeper.path.checkpoints";
+
 	/** ZooKeeper root path (ZNode) for checkpoint counters. */
+	@Deprecated
 	public static final String ZOOKEEPER_CHECKPOINT_COUNTER_PATH = "recovery.zookeeper.path.checkpoint-counter";
 
+	/** ZooKeeper root path (ZNode) for checkpoint counters. */
+	public static final String HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATH =
+		"high-availability.zookeeper.path.checkpoint-counter";
+
+	@Deprecated
 	public static final String ZOOKEEPER_SESSION_TIMEOUT = "recovery.zookeeper.client.session-timeout";
 
+	public static final String HA_ZOOKEEPER_SESSION_TIMEOUT =
+		"high-availability.zookeeper.client.session-timeout";
+
+	@Deprecated
 	public static final String ZOOKEEPER_CONNECTION_TIMEOUT = "recovery.zookeeper.client.connection-timeout";
 
+	public static final String HA_ZOOKEEPER_CONNECTION_TIMEOUT =
+		"high-availability.zookeeper.client.connection-timeout";
+
+	@Deprecated
 	public static final String ZOOKEEPER_RETRY_WAIT = "recovery.zookeeper.client.retry-wait";
 
+	public static final String HA_ZOOKEEPER_RETRY_WAIT =
+		"high-availability.zookeeper.client.retry-wait";
+
+	@Deprecated
 	public static final String ZOOKEEPER_MAX_RETRY_ATTEMPTS = "recovery.zookeeper.client.max-retry-attempts";
 
+	public static final String HA_ZOOKEEPER_MAX_RETRY_ATTEMPTS =
+		"high-availability.zookeeper.client.max-retry-attempts";
+
 	// ---------------------------- Metrics -----------------------------------
 
 	/**
@@ -1015,10 +1090,12 @@ public final class ConfigConstants {
 
 	public static final String LOCAL_START_WEBSERVER = "local.start-webserver";
 
-  	// --------------------------- Recovery ---------------------------------
-
+	// --------------------------- Recovery ---------------------------------
+	@Deprecated
 	public static String DEFAULT_RECOVERY_MODE = "standalone";
 
+	public static String DEFAULT_HIGH_AVAILABILTY = "none";
+
 	/**
 	 * Default port used by the job manager if not in standalone recovery mode. If <code>0</code>
 	 * the OS picks a random port.

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-core/src/main/java/org/apache/flink/util/ConfigurationUtil.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/util/ConfigurationUtil.java b/flink-core/src/main/java/org/apache/flink/util/ConfigurationUtil.java
new file mode 100644
index 0000000..44f098b
--- /dev/null
+++ b/flink-core/src/main/java/org/apache/flink/util/ConfigurationUtil.java
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Utility for accessing deprecated {@link Configuration} values.
+ */
+@Internal
+public class ConfigurationUtil {
+
+	private static final Logger LOG = LoggerFactory.getLogger(ConfigurationUtil.class);
+
+	/**
+	 * Returns the value associated with the given key as an Integer in the
+	 * given Configuration.
+	 *
+	 * <p>The regular key has precedence over any deprecated keys. The
+	 * precedence of deprecated keys depends on the argument order, first
+	 * deprecated keys have higher precedence than later ones.
+	 *
+	 * @param config Configuration to access
+	 * @param key Configuration key (highest precedence)
+	 * @param defaultValue Default value (if no key is set)
+	 * @param deprecatedKeys Optional deprecated keys in precedence order
+	 * @return Integer value associated with first found key or the default value
+	 */
+	public static int getIntegerWithDeprecatedKeys(
+			Configuration config,
+			String key,
+			int defaultValue,
+			String... deprecatedKeys) {
+
+		if (config.containsKey(key)) {
+			return config.getInteger(key, defaultValue);
+		} else {
+			// Check deprecated keys
+			for (String deprecatedKey : deprecatedKeys) {
+				if (config.containsKey(deprecatedKey)) {
+					LOG.warn("Configuration key '{}' has been deprecated. Please use '{}' instead.", deprecatedKey, key);
+					return config.getInteger(deprecatedKey, defaultValue);
+				}
+			}
+			return defaultValue;
+		}
+	}
+
+	/**
+	 * Returns the value associated with the given key as a String in the
+	 * given Configuration.
+	 *
+	 * <p>The regular key has precedence over any deprecated keys. The
+	 * precedence of deprecated keys depends on the argument order, first
+	 * deprecated keys have higher precedence than later ones.
+	 *
+	 * @param config Configuration to access
+	 * @param key Configuration key (highest precedence)
+	 * @param defaultValue Default value (if no key is set)
+	 * @param deprecatedKeys Optional deprecated keys in precedence order
+	 * @return String associated with first found key or the default value
+	 */
+	public static String getStringWithDeprecatedKeys(
+			Configuration config,
+			String key,
+			String defaultValue,
+			String... deprecatedKeys) {
+
+		if (config.containsKey(key)) {
+			return config.getString(key, defaultValue);
+		} else {
+			// Check deprecated keys
+			for (String deprecatedKey : deprecatedKeys) {
+				if (config.containsKey(deprecatedKey)) {
+					LOG.warn("Configuration key '{}' has been deprecated. Please use '{}' instead.", deprecatedKey, key);
+					return config.getString(deprecatedKey, defaultValue);
+				}
+			}
+			return defaultValue;
+		}
+	}
+}

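To show how this utility ties the renamed keys to their deprecated predecessors, here is a minimal, hypothetical sketch (class name and the simulated legacy entry are illustrative; the constants and the method signature are the ones added in this commit):

    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.ConfigurationUtil;

    public class HaKeyFallbackExample {
        public static void main(String[] args) {
            Configuration config = new Configuration();
            // Simulate an old flink-conf.yaml that still uses the deprecated key.
            config.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");

            // "high-availability" is not set, so the deprecated "recovery.mode"
            // key is consulted and a deprecation warning is logged.
            String haMode = ConfigurationUtil.getStringWithDeprecatedKeys(
                config,
                ConfigConstants.HIGH_AVAILABILITY,
                ConfigConstants.DEFAULT_HIGH_AVAILABILTY, // "none" (constant spelled as defined above)
                ConfigConstants.RECOVERY_MODE);

            System.out.println(haMode); // prints "zookeeper"
        }
    }
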
http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-core/src/test/java/org/apache/flink/util/ConfigurationUtilTest.java
----------------------------------------------------------------------
diff --git a/flink-core/src/test/java/org/apache/flink/util/ConfigurationUtilTest.java b/flink-core/src/test/java/org/apache/flink/util/ConfigurationUtilTest.java
new file mode 100644
index 0000000..7ecbd3f
--- /dev/null
+++ b/flink-core/src/test/java/org/apache/flink/util/ConfigurationUtilTest.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import org.apache.flink.configuration.Configuration;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+public class ConfigurationUtilTest {
+
+	/**
+	 * Tests getInteger without any deprecated keys.
+	 */
+	@Test
+	public void testGetIntegerNoDeprecatedKeys() throws Exception {
+		Configuration config = new Configuration();
+		String key = "asdasd";
+		int value = 1223239;
+		int defaultValue = 272770;
+
+		assertEquals(defaultValue, ConfigurationUtil.getIntegerWithDeprecatedKeys(config, key, defaultValue));
+
+		config.setInteger(key, value);
+		assertEquals(value, ConfigurationUtil.getIntegerWithDeprecatedKeys(config, key, defaultValue));
+	}
+
+	/**
+	 * Tests getInteger with deprecated keys and checks precedence.
+	 */
+	@Test
+	public void testGetIntegerWithDeprecatedKeys() throws Exception {
+		Configuration config = new Configuration();
+		String key = "asdasd";
+		int value = 1223239;
+		int defaultValue = 272770;
+
+		String[] deprecatedKey = new String[] { "deprecated-0", "deprecated-1" };
+		int[] deprecatedValue = new int[] { 99192, 7727 };
+
+		assertEquals(defaultValue, ConfigurationUtil.getIntegerWithDeprecatedKeys(config, key, defaultValue));
+
+		// Set 2nd deprecated key
+		config.setInteger(deprecatedKey[1], deprecatedValue[1]);
+		assertEquals(deprecatedValue[1], ConfigurationUtil.getIntegerWithDeprecatedKeys(config, key, defaultValue, deprecatedKey[1]));
+
+		// Set 1st deprecated key (precedence)
+		config.setInteger(deprecatedKey[0], deprecatedValue[0]);
+		assertEquals(deprecatedValue[0], ConfigurationUtil.getIntegerWithDeprecatedKeys(config, key, defaultValue, deprecatedKey[0]));
+
+		// Set current key
+		config.setInteger(key, value);
+		assertEquals(value, ConfigurationUtil.getIntegerWithDeprecatedKeys(config, key, defaultValue));
+	}
+
+	/**
+	 * Tests getString without any deprecated keys.
+	 */
+	@Test
+	public void testGetStringNoDeprecatedKeys() throws Exception {
+		Configuration config = new Configuration();
+		String key = "asdasd";
+		String value = "1223239";
+		String defaultValue = "272770";
+
+		assertEquals(defaultValue, ConfigurationUtil.getStringWithDeprecatedKeys(config, key, defaultValue));
+
+		config.setString(key, value);
+		assertEquals(value, ConfigurationUtil.getStringWithDeprecatedKeys(config, key, defaultValue));
+	}
+
+	/**
+	 * Tests getString with deprecated keys and checks precedence.
+	 */
+	@Test
+	public void testGetStringWithDeprecatedKeys() throws Exception {
+		Configuration config = new Configuration();
+		String key = "asdasd";
+		String value = "1223239";
+		String defaultValue = "272770";
+
+		String[] deprecatedKey = new String[] { "deprecated-0", "deprecated-1" };
+		String[] deprecatedValue = new String[] { "99192", "7727" };
+
+		assertEquals(defaultValue, ConfigurationUtil.getStringWithDeprecatedKeys(config, key, defaultValue));
+
+		// Set 2nd deprecated key
+		config.setString(deprecatedKey[1], deprecatedValue[1]);
+		assertEquals(deprecatedValue[1], ConfigurationUtil.getStringWithDeprecatedKeys(config, key, defaultValue, deprecatedKey[1]));
+
+		// Set 1st deprecated key (precedence)
+		config.setString(deprecatedKey[0], deprecatedValue[0]);
+		assertEquals(deprecatedValue[0], ConfigurationUtil.getStringWithDeprecatedKeys(config, key, defaultValue, deprecatedKey[0]));
+
+		// Set current key
+		config.setString(key, value);
+		assertEquals(value, ConfigurationUtil.getStringWithDeprecatedKeys(config, key, defaultValue));
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-dist/src/main/flink-bin/bin/config.sh
----------------------------------------------------------------------
diff --git a/flink-dist/src/main/flink-bin/bin/config.sh b/flink-dist/src/main/flink-bin/bin/config.sh
index 9ffa713..687a39c 100755
--- a/flink-dist/src/main/flink-bin/bin/config.sh
+++ b/flink-dist/src/main/flink-bin/bin/config.sh
@@ -104,7 +104,9 @@ KEY_ENV_JAVA_OPTS="env.java.opts"
 KEY_ENV_JAVA_OPTS_JM="env.java.opts.jobmanager"
 KEY_ENV_JAVA_OPTS_TM="env.java.opts.taskmanager"
 KEY_ENV_SSH_OPTS="env.ssh.opts"
+# deprecated key, kept for backwards compatibility
 KEY_RECOVERY_MODE="recovery.mode"
+KEY_HIGH_AVAILABILITY="high-availability"
 KEY_ZK_HEAP_MB="zookeeper.heap.mb"
 
 ########################################################################################################################
@@ -257,8 +259,26 @@ if [ -z "${ZK_HEAP}" ]; then
     ZK_HEAP=$(readFromConfig ${KEY_ZK_HEAP_MB} 0 "${YAML_CONF}")
 fi
 
+# for backward compatibility
+if [ -z "${OLD_RECOVERY_MODE}" ]; then
+    OLD_RECOVERY_MODE=$(readFromConfig ${KEY_RECOVERY_MODE} "standalone" "${YAML_CONF}")
+fi
+
 if [ -z "${RECOVERY_MODE}" ]; then
-    RECOVERY_MODE=$(readFromConfig ${KEY_RECOVERY_MODE} "standalone" "${YAML_CONF}")
+     # Read the new config
+     RECOVERY_MODE=$(readFromConfig ${KEY_HIGH_AVAILABILITY} "" "${YAML_CONF}")
+     if [ -z "${RECOVERY_MODE}" ]; then
+        # No new config found, so the old config should be used
+        if [ -z "${OLD_RECOVERY_MODE}" ]; then
+            # If the old config is also not found, use 'none' as the default
+            RECOVERY_MODE="none"
+        elif [ "${OLD_RECOVERY_MODE}" = "standalone" ]; then
+            # If the old config is 'standalone', map it to 'none'
+            RECOVERY_MODE="none"
+        else
+            RECOVERY_MODE=${OLD_RECOVERY_MODE}
+        fi
+     fi
 fi
 
 # Arguments for the JVM. Used for job and task manager JVMs.
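
To summarize the fallback above: given a flink-conf.yaml that still uses the
deprecated key (illustrative snippet), config.sh resolves the mode as follows:

    # flink-conf.yaml (old style)
    recovery.mode: zookeeper

    # Resolution in config.sh:
    #   high-availability -> not set in the YAML
    #   recovery.mode     -> "zookeeper"
    #   RECOVERY_MODE     -> "zookeeper"
    # With 'recovery.mode: standalone', or with neither key set,
    # RECOVERY_MODE resolves to "none".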

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-dist/src/main/resources/flink-conf.yaml
----------------------------------------------------------------------
diff --git a/flink-dist/src/main/resources/flink-conf.yaml b/flink-dist/src/main/resources/flink-conf.yaml
index 8bd4747..a2586ce 100644
--- a/flink-dist/src/main/resources/flink-conf.yaml
+++ b/flink-dist/src/main/resources/flink-conf.yaml
@@ -138,7 +138,7 @@ jobmanager.web.port: 8081
 # setup. This must be a list of the form:
 # "host1:clientPort,host2[:clientPort],..." (default clientPort: 2181)
 #
-# recovery.mode: zookeeper
+# high-availability: zookeeper
 #
 # recovery.zookeeper.quorum: localhost:2181,...
 #
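
Uncommented, a minimal ZooKeeper HA setup under the renamed key would look
like this (quorum address illustrative):

    high-availability: zookeeper
    recovery.zookeeper.quorum: localhost:2181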

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java b/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java
index 677ff54..d9edafe 100644
--- a/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java
+++ b/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java
@@ -296,8 +296,8 @@ public class WebRuntimeMonitorITCase extends TestLogger {
 			final Configuration config = new Configuration();
 			config.setInteger(ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, 0);
 			config.setString(ConfigConstants.JOB_MANAGER_WEB_LOG_PATH_KEY, logFile.toString());
-			config.setString(ConfigConstants.RECOVERY_MODE, "ZOOKEEPER");
-			config.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, zooKeeper.getConnectString());
+			config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
+			config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, zooKeeper.getConnectString());
 
 			actorSystem = AkkaUtils.createDefaultActorSystem();
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java
index b4b6812..d1b78a5 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java
@@ -23,7 +23,7 @@ import org.apache.flink.api.common.JobID;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.IllegalConfigurationException;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.util.NetUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -77,7 +77,7 @@ public class BlobServer extends Thread implements BlobService {
 
 	/**
 	 * Shutdown hook thread to ensure deletion of the storage directory (or <code>null</code> if
-	 * the configured recovery mode does not equal{@link RecoveryMode#STANDALONE})
+	 * the configured recovery mode does not equal {@link HighAvailabilityMode#NONE})
 	 */
 	private final Thread shutdownHook;
 
@@ -90,7 +90,7 @@ public class BlobServer extends Thread implements BlobService {
 	public BlobServer(Configuration config) throws IOException {
 		checkNotNull(config, "Configuration");
 
-		RecoveryMode recoveryMode = RecoveryMode.fromConfig(config);
+		HighAvailabilityMode highAvailabilityMode = HighAvailabilityMode.fromConfig(config);
 
 		// configure and create the storage directory
 		String storageDirectory = config.getString(ConfigConstants.BLOB_STORAGE_DIRECTORY_KEY, null);
@@ -98,14 +98,14 @@ public class BlobServer extends Thread implements BlobService {
 		LOG.info("Created BLOB server storage directory {}", storageDir);
 
 		// No recovery.
-		if (recoveryMode == RecoveryMode.STANDALONE) {
+		if (highAvailabilityMode == HighAvailabilityMode.NONE) {
 			this.blobStore = new VoidBlobStore();
 		}
 		// Recovery.
-		else if (recoveryMode == RecoveryMode.ZOOKEEPER) {
+		else if (highAvailabilityMode == HighAvailabilityMode.ZOOKEEPER) {
 			this.blobStore = new FileSystemBlobStore(config);
 		} else {
-			throw new IllegalConfigurationException("Unexpected recovery mode '" + recoveryMode + ".");
+			throw new IllegalConfigurationException("Unexpected recovery mode '" + highAvailabilityMode + "'.");
 		}
 
 		// configure the maximum number of concurrent connections
@@ -128,7 +128,7 @@ public class BlobServer extends Thread implements BlobService {
 			backlog = ConfigConstants.DEFAULT_BLOB_FETCH_BACKLOG;
 		}
 
-		if (recoveryMode == RecoveryMode.STANDALONE) {
+		if (highAvailabilityMode == HighAvailabilityMode.NONE) {
 			// Add shutdown hook to delete storage directory
 			this.shutdownHook = BlobUtils.addShutdownHook(this, LOG);
 		}

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
index 5641d87..f535c35 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
@@ -51,12 +51,15 @@ class FileSystemBlobStore implements BlobStore {
 	private final String basePath;
 
 	FileSystemBlobStore(Configuration config) throws IOException {
-		String recoveryPath = config.getString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, null);
+		String recoveryPath = config.getString(ConfigConstants.ZOOKEEPER_HA_PATH, null);
+		if (recoveryPath == null) {
+			// Fall back to the deprecated recovery path key if the new HA key is not set
+			recoveryPath = config.getString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, null);
+		}
 
 		if (recoveryPath == null) {
 			throw new IllegalConfigurationException(String.format("Missing configuration for " +
 					"file system state backend recovery path. Please specify via " +
-					"'%s' key.", ConfigConstants.ZOOKEEPER_RECOVERY_PATH));
+					"'%s' key.", ConfigConstants.ZOOKEEPER_HA_PATH));
 		}
 
 		this.basePath = recoveryPath + "/blob";

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java
index 0a235bc..c2f67f1 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java
@@ -18,12 +18,12 @@
 
 package org.apache.flink.runtime.checkpoint;
 
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 
 import java.util.concurrent.atomic.AtomicLong;
 
 /**
- * {@link CheckpointIDCounter} instances for JobManagers running in {@link RecoveryMode#STANDALONE}.
+ * {@link CheckpointIDCounter} instances for JobManagers running in {@link HighAvailabilityMode#NONE}.
  *
  * <p>Simple wrapper of an {@link AtomicLong}. This is sufficient, because job managers are not
  * recoverable in this recovery mode.

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointRecoveryFactory.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointRecoveryFactory.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointRecoveryFactory.java
index 05f9e77..aecb51e 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointRecoveryFactory.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointRecoveryFactory.java
@@ -19,10 +19,10 @@
 package org.apache.flink.runtime.checkpoint;
 
 import org.apache.flink.api.common.JobID;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 
 /**
- * {@link CheckpointCoordinator} components in {@link RecoveryMode#STANDALONE}.
+ * {@link CheckpointCoordinator} components in {@link HighAvailabilityMode#NONE}.
  */
 public class StandaloneCheckpointRecoveryFactory implements CheckpointRecoveryFactory {
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStore.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStore.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStore.java
index bc111cd..a35ca77 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStore.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStore.java
@@ -18,7 +18,7 @@
 
 package org.apache.flink.runtime.checkpoint;
 
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 
 import java.util.ArrayDeque;
 import java.util.ArrayList;
@@ -28,7 +28,7 @@ import static org.apache.flink.util.Preconditions.checkArgument;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * {@link CompletedCheckpointStore} for JobManagers running in {@link RecoveryMode#STANDALONE}.
+ * {@link CompletedCheckpointStore} for JobManagers running in {@link HighAvailabilityMode#NONE}.
  */
 public class StandaloneCompletedCheckpointStore implements CompletedCheckpointStore {
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointIDCounter.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointIDCounter.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointIDCounter.java
index 12839c1..0bceb8b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointIDCounter.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointIDCounter.java
@@ -23,14 +23,14 @@ import org.apache.curator.framework.recipes.shared.SharedCount;
 import org.apache.curator.framework.recipes.shared.VersionedValue;
 import org.apache.curator.framework.state.ConnectionState;
 import org.apache.curator.framework.state.ConnectionStateListener;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * {@link CheckpointIDCounter} instances for JobManagers running in {@link RecoveryMode#ZOOKEEPER}.
+ * {@link CheckpointIDCounter} instances for JobManagers running in {@link HighAvailabilityMode#ZOOKEEPER}.
  *
  * <p>Each counter creates a ZNode:
  * <pre>

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointRecoveryFactory.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointRecoveryFactory.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointRecoveryFactory.java
index dcd6260..55a0bbb 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointRecoveryFactory.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointRecoveryFactory.java
@@ -21,13 +21,13 @@ package org.apache.flink.runtime.checkpoint;
 import org.apache.curator.framework.CuratorFramework;
 import org.apache.flink.api.common.JobID;
 import org.apache.flink.configuration.Configuration;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.runtime.util.ZooKeeperUtils;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * {@link CheckpointCoordinator} components in {@link RecoveryMode#ZOOKEEPER}.
+ * {@link CheckpointCoordinator} components in {@link HighAvailabilityMode#ZOOKEEPER}.
  */
 public class ZooKeeperCheckpointRecoveryFactory implements CheckpointRecoveryFactory {
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java
index 541629d..376ef70 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java
@@ -24,7 +24,7 @@ import org.apache.curator.framework.api.CuratorEvent;
 import org.apache.curator.framework.api.CuratorEventType;
 import org.apache.curator.utils.ZKPaths;
 import org.apache.flink.api.java.tuple.Tuple2;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.runtime.state.StateHandle;
 import org.apache.flink.runtime.zookeeper.StateStorageHelper;
 import org.apache.flink.runtime.zookeeper.ZooKeeperStateHandleStore;
@@ -40,7 +40,7 @@ import static org.apache.flink.util.Preconditions.checkArgument;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * {@link CompletedCheckpointStore} for JobManagers running in {@link RecoveryMode#ZOOKEEPER}.
+ * {@link CompletedCheckpointStore} for JobManagers running in {@link HighAvailabilityMode#ZOOKEEPER}.
  *
  * <p>Checkpoints are added under a ZNode per job:
  * <pre>

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java
new file mode 100644
index 0000000..8e2efa8
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobmanager;
+
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+
+/**
+ * High availability mode for Flink's cluster execution. Currently supported modes are:
+ *
+ * - NONE: No recovery from JobManager failures (formerly called 'standalone')
+ * - ZOOKEEPER: JobManager high availability via ZooKeeper.
+ *   ZooKeeper is used to select a leader among a group of JobManagers. This JobManager
+ *   is responsible for the job execution. Upon failure of the leader, a new leader is
+ *   elected which takes over the responsibilities of the old leader.
+ */
+public enum HighAvailabilityMode {
+	// STANDALONE mode renamed to NONE
+	NONE,
+	ZOOKEEPER;
+
+	/**
+	 * Return the configured {@link HighAvailabilityMode}.
+	 *
+	 * @param config The config to parse
+	 * @return Configured recovery mode or {@link ConfigConstants#DEFAULT_HIGH_AVAILABILTY} if not
+	 * configured.
+	 */
+	public static HighAvailabilityMode fromConfig(Configuration config) {
+		// Not passing a default value here, so that we can determine
+		// whether the older config key is set instead
+		String recoveryMode = config.getString(
+			ConfigConstants.HIGH_AVAILABILITY, "");
+		if (recoveryMode.isEmpty()) {
+			// The new config is not set.
+			// Check for the older 'recovery.mode' config.
+			recoveryMode = config.getString(
+				ConfigConstants.RECOVERY_MODE,
+				ConfigConstants.DEFAULT_RECOVERY_MODE);
+			if (recoveryMode.equalsIgnoreCase(ConfigConstants.DEFAULT_RECOVERY_MODE)) {
+				// There is no HA configured.
+				return HighAvailabilityMode.valueOf(ConfigConstants.DEFAULT_HIGH_AVAILABILTY.toUpperCase());
+			}
+		} else if (recoveryMode.equalsIgnoreCase(ConfigConstants.DEFAULT_HIGH_AVAILABILTY)) {
+			// The new config key is set to its default value, so use it
+			return HighAvailabilityMode.valueOf(ConfigConstants.DEFAULT_HIGH_AVAILABILTY.toUpperCase());
+		}
+		return HighAvailabilityMode.valueOf(recoveryMode.toUpperCase());
+	}
+
+	/**
+	 * Returns true if the defined recovery mode supports high availability.
+	 *
+	 * @param configuration Configuration which contains the recovery mode
+	 * @return true if high availability is supported by the recovery mode, otherwise false
+	 */
+	public static boolean isHighAvailabilityModeActivated(Configuration configuration) {
+		HighAvailabilityMode mode = fromConfig(configuration);
+		switch (mode) {
+			case NONE:
+				return false;
+			case ZOOKEEPER:
+				return true;
+			default:
+				return false;
+		}
+	}
+}
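
A sketch of the backwards-compatible resolution performed by fromConfig above
(assuming, per the defaults used elsewhere in this change, that
DEFAULT_HIGH_AVAILABILTY is "none" and DEFAULT_RECOVERY_MODE is "standalone"):

    Configuration conf = new Configuration();

    // Old-style configuration: only 'recovery.mode' is set.
    conf.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
    // fromConfig falls back to the deprecated key:
    //   HighAvailabilityMode.fromConfig(conf) == HighAvailabilityMode.ZOOKEEPER

    // Once the new 'high-availability' key is set, it takes precedence.
    conf.setString(ConfigConstants.HIGH_AVAILABILITY, "none");
    //   HighAvailabilityMode.fromConfig(conf) == HighAvailabilityMode.NONE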

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/RecoveryMode.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/RecoveryMode.java b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/RecoveryMode.java
deleted file mode 100644
index 077e34d..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/RecoveryMode.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.jobmanager;
-
-import org.apache.flink.configuration.ConfigConstants;
-import org.apache.flink.configuration.Configuration;
-
-/**
- * Recovery mode for Flink's cluster execution. Currently supported modes are:
- *
- * - Standalone: No recovery from JobManager failures
- * - ZooKeeper: JobManager high availability via ZooKeeper
- * ZooKeeper is used to select a leader among a group of JobManager. This JobManager
- * is responsible for the job execution. Upon failure of the leader a new leader is elected
- * which will take over the responsibilities of the old leader
- */
-public enum RecoveryMode {
-	STANDALONE,
-	ZOOKEEPER;
-
-	/**
-	 * Return the configured {@link RecoveryMode}.
-	 *
-	 * @param config The config to parse
-	 * @return Configured recovery mode or {@link ConfigConstants#DEFAULT_RECOVERY_MODE} if not
-	 * configured.
-	 */
-	public static RecoveryMode fromConfig(Configuration config) {
-		return RecoveryMode.valueOf(config.getString(
-				ConfigConstants.RECOVERY_MODE,
-				ConfigConstants.DEFAULT_RECOVERY_MODE).toUpperCase());
-	}
-
-	/**
-	 * Returns true if the defined recovery mode supports high availability.
-	 *
-	 * @param configuration Configuration which contains the recovery mode
-	 * @return true if high availability is supported by the recovery mode, otherwise false
-	 */
-	public static boolean isHighAvailabilityModeActivated(Configuration configuration) {
-		String recoveryMode = configuration.getString(
-			ConfigConstants.RECOVERY_MODE,
-			ConfigConstants.DEFAULT_RECOVERY_MODE).toUpperCase();
-
-		RecoveryMode mode = RecoveryMode.valueOf(recoveryMode);
-
-		switch(mode) {
-			case STANDALONE:
-				return false;
-			case ZOOKEEPER:
-				return true;
-			default:
-				return false;
-		}
-	}
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/StandaloneSubmittedJobGraphStore.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/StandaloneSubmittedJobGraphStore.java b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/StandaloneSubmittedJobGraphStore.java
index db36f92..3041cde 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/StandaloneSubmittedJobGraphStore.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/StandaloneSubmittedJobGraphStore.java
@@ -26,7 +26,7 @@ import java.util.Collections;
 import java.util.List;
 
 /**
- * {@link SubmittedJobGraph} instances for JobManagers running in {@link RecoveryMode#STANDALONE}.
+ * {@link SubmittedJobGraph} instances for JobManagers running in {@link HighAvailabilityMode#NONE}.
  *
  * <p>All operations are NoOps, because {@link JobGraph} instances cannot be recovered in this
  * recovery mode.

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphStore.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphStore.java b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphStore.java
index 128db83..7f7c5fe 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphStore.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphStore.java
@@ -44,7 +44,7 @@ import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.apache.flink.util.Preconditions.checkState;
 
 /**
- * {@link SubmittedJobGraph} instances for JobManagers running in {@link RecoveryMode#ZOOKEEPER}.
+ * {@link SubmittedJobGraph} instances for JobManagers running in {@link HighAvailabilityMode#ZOOKEEPER}.
  *
  * <p>Each job graph creates ZNode:
  * <pre>

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java b/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java
index 8f88daa..7a656cf 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java
@@ -28,7 +28,7 @@ import org.apache.flink.configuration.IllegalConfigurationException;
 import org.apache.flink.runtime.akka.AkkaUtils;
 import org.apache.flink.runtime.instance.ActorGateway;
 import org.apache.flink.runtime.instance.AkkaActorGateway;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalException;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalListener;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
@@ -61,15 +61,15 @@ public class LeaderRetrievalUtils {
 	public static LeaderRetrievalService createLeaderRetrievalService(Configuration configuration)
 		throws Exception {
 
-		RecoveryMode recoveryMode = getRecoveryMode(configuration);
+		HighAvailabilityMode highAvailabilityMode = getRecoveryMode(configuration);
 
-		switch (recoveryMode) {
-			case STANDALONE:
+		switch (highAvailabilityMode) {
+			case NONE:
 				return StandaloneUtils.createLeaderRetrievalService(configuration);
 			case ZOOKEEPER:
 				return ZooKeeperUtils.createLeaderRetrievalService(configuration);
 			default:
-				throw new Exception("Recovery mode " + recoveryMode + " is not supported.");
+				throw new Exception("Recovery mode " + highAvailabilityMode + " is not supported.");
 		}
 	}
 
@@ -86,16 +86,16 @@ public class LeaderRetrievalUtils {
 	public static LeaderRetrievalService createLeaderRetrievalService(
 				Configuration configuration, ActorRef standaloneRef) throws Exception {
 
-		RecoveryMode recoveryMode = getRecoveryMode(configuration);
+		HighAvailabilityMode highAvailabilityMode = getRecoveryMode(configuration);
 
-		switch (recoveryMode) {
-			case STANDALONE:
+		switch (highAvailabilityMode) {
+			case NONE:
 				String akkaUrl = standaloneRef.path().toSerializationFormat();
 				return new StandaloneLeaderRetrievalService(akkaUrl);
 			case ZOOKEEPER:
 				return ZooKeeperUtils.createLeaderRetrievalService(configuration);
 			default:
-				throw new Exception("Recovery mode " + recoveryMode + " is not supported.");
+				throw new Exception("Recovery mode " + highAvailabilityMode + " is not supported.");
 		}
 	}
 	
@@ -282,7 +282,7 @@ public class LeaderRetrievalUtils {
 	}
 
 	/**
-	 * Gets the recovery mode as configured, based on the {@link ConfigConstants#RECOVERY_MODE}
+	 * Gets the recovery mode as configured, based on the {@link ConfigConstants#HIGH_AVAILABILITY}
 	 * config key.
 	 * 
 	 * @param config The configuration to read the recovery mode from.
@@ -291,20 +291,8 @@ public class LeaderRetrievalUtils {
 	 * @throws IllegalConfigurationException Thrown, if the recovery mode does not correspond
 	 *                                       to a known value.
 	 */
-	public static RecoveryMode getRecoveryMode(Configuration config) {
-		String mode = config.getString(
-			ConfigConstants.RECOVERY_MODE,
-			ConfigConstants.DEFAULT_RECOVERY_MODE).toUpperCase();
-		
-		switch (mode) {
-			case "STANDALONE":
-				return RecoveryMode.STANDALONE;
-			case "ZOOKEEPER":
-				return RecoveryMode.ZOOKEEPER;
-			default:
-				throw new IllegalConfigurationException(
-					"The value for '" + ConfigConstants.RECOVERY_MODE + "' is unknown: " + mode);
-		}
+	public static HighAvailabilityMode getRecoveryMode(Configuration config) {
+		return HighAvailabilityMode.fromConfig(config);
 	}
 	
 	// ------------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java b/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java
index 3986fed..5fd6f8c 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java
@@ -29,7 +29,7 @@ import org.apache.flink.runtime.checkpoint.CompletedCheckpointStore;
 import org.apache.flink.runtime.checkpoint.CompletedCheckpoint;
 import org.apache.flink.runtime.checkpoint.ZooKeeperCheckpointIDCounter;
 import org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.runtime.jobmanager.SubmittedJobGraph;
 import org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore;
 import org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService;
@@ -56,35 +56,44 @@ public class ZooKeeperUtils {
 	 * @return {@link CuratorFramework} instance
 	 */
 	public static CuratorFramework startCuratorFramework(Configuration configuration) {
-		String zkQuorum = configuration.getString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, "");
-
+		String zkQuorum = configuration.getString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, "");
+		if (zkQuorum.isEmpty()) {
+			zkQuorum = configuration.getString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, "");
+		}
 		if (zkQuorum == null || zkQuorum.equals("")) {
 			throw new RuntimeException("No valid ZooKeeper quorum has been specified. " +
 					"You can specify the quorum via the configuration key '" +
-					ConfigConstants.ZOOKEEPER_QUORUM_KEY + "'.");
+					ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY + "'.");
 		}
 
-		int sessionTimeout = configuration.getInteger(
-				ConfigConstants.ZOOKEEPER_SESSION_TIMEOUT,
-				ConfigConstants.DEFAULT_ZOOKEEPER_SESSION_TIMEOUT);
+		int sessionTimeout = getConfiguredIntValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_SESSION_TIMEOUT,
+			ConfigConstants.ZOOKEEPER_SESSION_TIMEOUT,
+			ConfigConstants.DEFAULT_ZOOKEEPER_SESSION_TIMEOUT);
 
-		int connectionTimeout = configuration.getInteger(
-				ConfigConstants.ZOOKEEPER_CONNECTION_TIMEOUT,
-				ConfigConstants.DEFAULT_ZOOKEEPER_CONNECTION_TIMEOUT);
+		int connectionTimeout = getConfiguredIntValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_CONNECTION_TIMEOUT,
+			ConfigConstants.ZOOKEEPER_CONNECTION_TIMEOUT,
+			ConfigConstants.DEFAULT_ZOOKEEPER_CONNECTION_TIMEOUT);
 
-		int retryWait = configuration.getInteger(
-				ConfigConstants.ZOOKEEPER_RETRY_WAIT,
-				ConfigConstants.DEFAULT_ZOOKEEPER_RETRY_WAIT);
+		int retryWait = getConfiguredIntValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_RETRY_WAIT,
+			ConfigConstants.ZOOKEEPER_RETRY_WAIT,
+			ConfigConstants.DEFAULT_ZOOKEEPER_RETRY_WAIT);
 
-		int maxRetryAttempts = configuration.getInteger(
-				ConfigConstants.ZOOKEEPER_MAX_RETRY_ATTEMPTS,
-				ConfigConstants.DEFAULT_ZOOKEEPER_MAX_RETRY_ATTEMPTS);
+		int maxRetryAttempts = getConfiguredIntValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_MAX_RETRY_ATTEMPTS,
+			ConfigConstants.ZOOKEEPER_MAX_RETRY_ATTEMPTS,
+			ConfigConstants.DEFAULT_ZOOKEEPER_MAX_RETRY_ATTEMPTS);
 
-		String root = configuration.getString(ConfigConstants.ZOOKEEPER_DIR_KEY,
-				ConfigConstants.DEFAULT_ZOOKEEPER_DIR_KEY);
+		String root = getConfiguredStringValue(configuration, ConfigConstants.HA_ZOOKEEPER_DIR_KEY,
+			ConfigConstants.ZOOKEEPER_DIR_KEY,
+			ConfigConstants.DEFAULT_ZOOKEEPER_DIR_KEY);
 
-		String namespace = configuration.getString(ConfigConstants.ZOOKEEPER_NAMESPACE_KEY,
-				ConfigConstants.DEFAULT_ZOOKEEPER_NAMESPACE_KEY);
+		String namespace = getConfiguredStringValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY,
+			ConfigConstants.ZOOKEEPER_NAMESPACE_KEY,
+			ConfigConstants.DEFAULT_ZOOKEEPER_NAMESPACE_KEY);
 
 		String rootWithNamespace = generateZookeeperPath(root, namespace);
 
@@ -105,11 +114,36 @@ public class ZooKeeperUtils {
 		return cf;
 	}
 
+	private static int getConfiguredIntValue(Configuration configuration, String newConfigName, String oldConfigName, int defaultValue) {
+		int val = configuration.getInteger(newConfigName, -1);
+		if (val == -1) {
+			// The new key is not set, fall back to the old (deprecated) key
+			val = configuration.getInteger(oldConfigName, -1);
+		}
+		// If the value is still not set, use the default value
+		if (val == -1) {
+			return defaultValue;
+		}
+		return val;
+	}
+
+	private static String getConfiguredStringValue(Configuration configuration, String newConfigName, String oldConfigName, String defaultValue) {
+		String val = configuration.getString(newConfigName, "");
+		if (val.isEmpty()) {
+			// The new key is not set, fall back to the old (deprecated) key
+			val = configuration.getString(oldConfigName, "");
+		}
+		// If still no value is found, use the default value
+		if (val.isEmpty()) {
+			return defaultValue;
+		}
+		return val;
+	}
+
 	/**
-	 * Returns whether {@link RecoveryMode#ZOOKEEPER} is configured.
+	 * Returns whether {@link HighAvailabilityMode#ZOOKEEPER} is configured.
 	 */
 	public static boolean isZooKeeperRecoveryMode(Configuration flinkConf) {
-		return RecoveryMode.fromConfig(flinkConf).equals(RecoveryMode.ZOOKEEPER);
+		return HighAvailabilityMode.fromConfig(flinkConf).equals(HighAvailabilityMode.ZOOKEEPER);
 	}
 
 	/**
@@ -119,7 +153,10 @@ public class ZooKeeperUtils {
 	public static String getZooKeeperEnsemble(Configuration flinkConf)
 			throws IllegalConfigurationException {
 
-		String zkQuorum = flinkConf.getString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, "");
+		String zkQuorum = flinkConf.getString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, "");
+		if (zkQuorum.isEmpty()) {
+			zkQuorum = flinkConf.getString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, "");
+		}
 
 		if (zkQuorum == null || zkQuorum.equals("")) {
 			throw new IllegalConfigurationException("No ZooKeeper quorum specified in config.");
@@ -141,8 +178,9 @@ public class ZooKeeperUtils {
 	public static ZooKeeperLeaderRetrievalService createLeaderRetrievalService(
 			Configuration configuration) throws Exception {
 		CuratorFramework client = startCuratorFramework(configuration);
-		String leaderPath = configuration.getString(ConfigConstants.ZOOKEEPER_LEADER_PATH,
-				ConfigConstants.DEFAULT_ZOOKEEPER_LEADER_PATH);
+		String leaderPath = getConfiguredStringValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_LEADER_PATH, ConfigConstants.ZOOKEEPER_LEADER_PATH,
+			ConfigConstants.DEFAULT_ZOOKEEPER_LEADER_PATH);
 
 		return new ZooKeeperLeaderRetrievalService(client, leaderPath);
 	}
@@ -175,9 +213,11 @@ public class ZooKeeperUtils {
 			CuratorFramework client,
 			Configuration configuration) throws Exception {
 
-		String latchPath = configuration.getString(ConfigConstants.ZOOKEEPER_LATCH_PATH,
-				ConfigConstants.DEFAULT_ZOOKEEPER_LATCH_PATH);
-		String leaderPath = configuration.getString(ConfigConstants.ZOOKEEPER_LEADER_PATH,
+		String latchPath = getConfiguredStringValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_LATCH_PATH, ConfigConstants.ZOOKEEPER_LATCH_PATH,
+			ConfigConstants.DEFAULT_ZOOKEEPER_LATCH_PATH);
+		String leaderPath = getConfiguredStringValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_LEADER_PATH, ConfigConstants.ZOOKEEPER_LEADER_PATH,
 			ConfigConstants.DEFAULT_ZOOKEEPER_LEADER_PATH);
 
 		return new ZooKeeperLeaderElectionService(client, latchPath, leaderPath);
@@ -199,9 +239,9 @@ public class ZooKeeperUtils {
 		StateStorageHelper<SubmittedJobGraph> stateStorage = createFileSystemStateStorage(configuration, "submittedJobGraph");
 
 		// ZooKeeper submitted jobs root dir
-		String zooKeeperSubmittedJobsPath = configuration.getString(
-				ConfigConstants.ZOOKEEPER_JOBGRAPHS_PATH,
-				ConfigConstants.DEFAULT_ZOOKEEPER_JOBGRAPHS_PATH);
+		String zooKeeperSubmittedJobsPath = getConfiguredStringValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_JOBGRAPHS_PATH, ConfigConstants.ZOOKEEPER_JOBGRAPHS_PATH,
+			ConfigConstants.DEFAULT_ZOOKEEPER_JOBGRAPHS_PATH);
 
 		return new ZooKeeperSubmittedJobGraphStore(
 				client, zooKeeperSubmittedJobsPath, stateStorage);
@@ -226,7 +266,8 @@ public class ZooKeeperUtils {
 
 		checkNotNull(configuration, "Configuration");
 
-		String checkpointsPath = configuration.getString(
+		String checkpointsPath = getConfiguredStringValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_CHECKPOINTS_PATH,
 			ConfigConstants.ZOOKEEPER_CHECKPOINTS_PATH,
 			ConfigConstants.DEFAULT_ZOOKEEPER_CHECKPOINTS_PATH);
 
@@ -257,9 +298,10 @@ public class ZooKeeperUtils {
 			Configuration configuration,
 			JobID jobId) throws Exception {
 
-		String checkpointIdCounterPath = configuration.getString(
-				ConfigConstants.ZOOKEEPER_CHECKPOINT_COUNTER_PATH,
-				ConfigConstants.DEFAULT_ZOOKEEPER_CHECKPOINT_COUNTER_PATH);
+		String checkpointIdCounterPath = getConfiguredStringValue(configuration,
+			ConfigConstants.HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATH,
+			ConfigConstants.ZOOKEEPER_CHECKPOINT_COUNTER_PATH,
+			ConfigConstants.DEFAULT_ZOOKEEPER_CHECKPOINT_COUNTER_PATH);
 
 		checkpointIdCounterPath += ZooKeeperSubmittedJobGraphStore.getPathForJob(jobId);
 
@@ -280,11 +322,15 @@ public class ZooKeeperUtils {
 			String prefix) throws IOException {
 
 		String rootPath = configuration.getString(
-			ConfigConstants.ZOOKEEPER_RECOVERY_PATH, "");
+			ConfigConstants.ZOOKEEPER_HA_PATH, "");
+		if (rootPath.isEmpty()) {
+			rootPath = configuration.getString(
+				ConfigConstants.ZOOKEEPER_RECOVERY_PATH, "");
+		}
 
 		if (rootPath.equals("")) {
 			throw new IllegalConfigurationException("Missing recovery path. Specify via " +
-				"configuration key '" + ConfigConstants.ZOOKEEPER_RECOVERY_PATH + "'.");
+				"configuration key '" + ConfigConstants.ZOOKEEPER_HA_PATH + "'.");
 		} else {
 			return new FileSystemStateStorageHelper<T>(rootPath, prefix);
 		}
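
For illustration, the key fallback implemented by getConfiguredIntValue and
getConfiguredStringValue above behaves as follows (values hypothetical; note
that -1 serves as the 'unset' sentinel, so a legitimately configured value of
-1 would be treated as unset):

    Configuration conf = new Configuration();
    // Only the deprecated key is set.
    conf.setInteger(ConfigConstants.ZOOKEEPER_SESSION_TIMEOUT, 30000);

    // The new HA key is unset, so the deprecated key wins over the default.
    int sessionTimeout = getConfiguredIntValue(conf,
        ConfigConstants.HA_ZOOKEEPER_SESSION_TIMEOUT,
        ConfigConstants.ZOOKEEPER_SESSION_TIMEOUT,
        ConfigConstants.DEFAULT_ZOOKEEPER_SESSION_TIMEOUT);
    // sessionTimeout == 30000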

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
index a82e89a..5962afc 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
@@ -155,7 +155,7 @@ class JobManager(
   /** Either running or not yet archived jobs (session hasn't been ended). */
   protected val currentJobs = scala.collection.mutable.HashMap[JobID, (ExecutionGraph, JobInfo)]()
 
-  protected val recoveryMode = RecoveryMode.fromConfig(flinkConfiguration)
+  protected val recoveryMode = HighAvailabilityMode.fromConfig(flinkConfiguration)
 
   var leaderSessionID: Option[UUID] = None
 
@@ -317,7 +317,7 @@ class JobManager(
 
         // TODO (critical next step) This needs to be more flexible and robust (e.g. wait for task
         // managers etc.)
-        if (recoveryMode != RecoveryMode.STANDALONE) {
+        if (recoveryMode != HighAvailabilityMode.NONE) {
           log.info(s"Delaying recovery of all jobs by $jobRecoveryTimeout.")
 
           context.system.scheduler.scheduleOnce(
@@ -2026,7 +2026,7 @@ object JobManager {
 
     if (!listeningPortRange.hasNext) {
       if (ZooKeeperUtils.isZooKeeperRecoveryMode(configuration)) {
-        val message = "Config parameter '" + ConfigConstants.RECOVERY_JOB_MANAGER_PORT +
+        val message = "Config parameter '" + ConfigConstants.HA_JOB_MANAGER_PORT +
           "' does not specify a valid port range."
         LOG.error(message)
         System.exit(STARTUP_FAILURE_RETURN_CODE)
@@ -2568,8 +2568,8 @@ object JobManager {
 
     // Create recovery related components
     val (leaderElectionService, submittedJobGraphs, checkpointRecoveryFactory) =
-      RecoveryMode.fromConfig(configuration) match {
-        case RecoveryMode.STANDALONE =>
+      HighAvailabilityMode.fromConfig(configuration) match {
+        case HighAvailabilityMode.NONE =>
           val leaderElectionService = leaderElectionServiceOption match {
             case Some(les) => les
             case None => new StandaloneLeaderElectionService()
@@ -2579,7 +2579,7 @@ object JobManager {
             new StandaloneSubmittedJobGraphStore(),
             new StandaloneCheckpointRecoveryFactory())
 
-        case RecoveryMode.ZOOKEEPER =>
+        case HighAvailabilityMode.ZOOKEEPER =>
           val client = ZooKeeperUtils.startCuratorFramework(configuration)
 
           val leaderElectionService = leaderElectionServiceOption match {
@@ -2594,7 +2594,10 @@ object JobManager {
 
     val savepointStore = SavepointStoreFactory.createFromConfig(configuration)
 
-    val jobRecoveryTimeoutStr = configuration.getString(ConfigConstants.RECOVERY_JOB_DELAY, "");
+    var jobRecoveryTimeoutStr = configuration.getString(ConfigConstants.HA_JOB_DELAY, "");
+    if (jobRecoveryTimeoutStr.isEmpty) {
+      jobRecoveryTimeoutStr = configuration.getString(ConfigConstants.RECOVERY_JOB_DELAY, "");
+    }
 
     val jobRecoveryTimeout = if (jobRecoveryTimeoutStr == null || jobRecoveryTimeoutStr.isEmpty) {
       timeout
@@ -2604,7 +2607,7 @@ object JobManager {
       } catch {
         case n: NumberFormatException =>
           throw new Exception(
-            s"Invalid config value for ${ConfigConstants.RECOVERY_JOB_DELAY}: " +
+            s"Invalid config value for ${ConfigConstants.HA_JOB_DELAY}: " +
               s"$jobRecoveryTimeoutStr. Value must be a valid duration (such as '10 s' or '1 min')")
       }
     }

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
index f6e9360..0778aae 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
@@ -24,22 +24,18 @@ import java.util.UUID
 import akka.pattern.Patterns.gracefulStop
 import akka.pattern.ask
 import akka.actor.{ActorRef, ActorSystem}
-
 import com.typesafe.config.Config
-
-import org.apache.flink.api.common.{JobID, JobExecutionResult, JobSubmissionResult}
+import org.apache.flink.api.common.{JobExecutionResult, JobID, JobSubmissionResult}
 import org.apache.flink.configuration.{ConfigConstants, Configuration}
 import org.apache.flink.runtime.akka.AkkaUtils
-import org.apache.flink.runtime.client.{JobExecutionException, JobClient}
-import org.apache.flink.runtime.instance.{AkkaActorGateway, ActorGateway}
+import org.apache.flink.runtime.client.{JobClient, JobExecutionException}
+import org.apache.flink.runtime.instance.{ActorGateway, AkkaActorGateway}
 import org.apache.flink.runtime.jobgraph.JobGraph
-import org.apache.flink.runtime.jobmanager.RecoveryMode
-import org.apache.flink.runtime.leaderretrieval.{LeaderRetrievalService, LeaderRetrievalListener,
-StandaloneLeaderRetrievalService}
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode
+import org.apache.flink.runtime.leaderretrieval.{LeaderRetrievalListener, LeaderRetrievalService, StandaloneLeaderRetrievalService}
 import org.apache.flink.runtime.messages.TaskManagerMessages.NotifyWhenRegisteredAtJobManager
 import org.apache.flink.runtime.util.ZooKeeperUtils
-import org.apache.flink.runtime.webmonitor.{WebMonitorUtils, WebMonitor}
-
+import org.apache.flink.runtime.webmonitor.{WebMonitor, WebMonitorUtils}
 import org.slf4j.LoggerFactory
 
 import scala.concurrent.duration.{Duration, FiniteDuration}
@@ -88,7 +84,7 @@ abstract class FlinkMiniCluster(
 
   implicit val timeout = AkkaUtils.getTimeout(configuration)
 
-  val recoveryMode = RecoveryMode.fromConfig(configuration)
+  val recoveryMode = HighAvailabilityMode.fromConfig(configuration)
 
   val numJobManagers = getNumberOfJobManagers
 
@@ -126,7 +122,7 @@ abstract class FlinkMiniCluster(
   // --------------------------------------------------------------------------
 
   def getNumberOfJobManagers: Int = {
-    if(recoveryMode == RecoveryMode.STANDALONE) {
+    if(recoveryMode == HighAvailabilityMode.NONE) {
       1
     } else {
       configuration.getInteger(
@@ -137,7 +133,7 @@ abstract class FlinkMiniCluster(
   }
 
   def getNumberOfResourceManagers: Int = {
-    if(recoveryMode == RecoveryMode.STANDALONE) {
+    if(recoveryMode == HighAvailabilityMode.NONE) {
       1
     } else {
       configuration.getInteger(
@@ -528,7 +524,7 @@ abstract class FlinkMiniCluster(
   protected def createLeaderRetrievalService(): LeaderRetrievalService = {
     (jobManagerActorSystems, jobManagerActors) match {
       case (Some(jmActorSystems), Some(jmActors)) =>
-        if (recoveryMode == RecoveryMode.STANDALONE) {
+        if (recoveryMode == HighAvailabilityMode.NONE) {
           new StandaloneLeaderRetrievalService(
             AkkaUtils.getAkkaURL(jmActorSystems(0), jmActors(0)))
         } else {

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java
index 528a20c..bd4723f 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java
@@ -22,7 +22,7 @@ import org.apache.commons.io.FileUtils;
 import org.apache.flink.api.common.JobID;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -55,7 +55,7 @@ public class BlobRecoveryITCase {
 	}
 
 	/**
-	 * Tests that with {@link RecoveryMode#ZOOKEEPER} distributed JARs are recoverable from any
+	 * Tests that with {@link HighAvailabilityMode#ZOOKEEPER} distributed JARs are recoverable from any
 	 * participating BlobServer.
 	 */
 	@Test
@@ -68,9 +68,9 @@ public class BlobRecoveryITCase {
 
 		try {
 			Configuration config = new Configuration();
-			config.setString(ConfigConstants.RECOVERY_MODE, "ZOOKEEPER");
+			config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
 			config.setString(ConfigConstants.STATE_BACKEND, "FILESYSTEM");
-			config.setString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, recoveryDir.getPath());
+			config.setString(ConfigConstants.ZOOKEEPER_HA_PATH, recoveryDir.getPath());
 
 			for (int i = 0; i < server.length; i++) {
 				server[i] = new BlobServer(config);

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java
index f1021e6..a3fe0d4 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java
@@ -26,7 +26,7 @@ import org.apache.flink.runtime.blob.BlobClient;
 import org.apache.flink.runtime.blob.BlobKey;
 import org.apache.flink.runtime.blob.BlobServer;
 import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.TemporaryFolder;
@@ -48,7 +48,7 @@ public class BlobLibraryCacheRecoveryITCase {
 	@Rule
 	public TemporaryFolder temporaryFolder = new TemporaryFolder();
 	/**
-	 * Tests that with {@link RecoveryMode#ZOOKEEPER} distributed JARs are recoverable from any
+	 * Tests that with {@link HighAvailabilityMode#ZOOKEEPER} distributed JARs are recoverable from any
 	 * participating BlobLibraryCacheManager.
 	 */
 	@Test
@@ -63,9 +63,9 @@ public class BlobLibraryCacheRecoveryITCase {
 
 		try {
 			Configuration config = new Configuration();
-			config.setString(ConfigConstants.RECOVERY_MODE, "ZOOKEEPER");
+			config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
 			config.setString(ConfigConstants.STATE_BACKEND, "FILESYSTEM");
-			config.setString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, temporaryFolder.getRoot().getAbsolutePath());
+			config.setString(ConfigConstants.ZOOKEEPER_HA_PATH, temporaryFolder.getRoot().getAbsolutePath());
 
 			for (int i = 0; i < server.length; i++) {
 				server[i] = new BlobServer(config);

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java
index f050e29..d980517 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java
@@ -123,8 +123,8 @@ public class JobManagerHARecoveryTest {
 		ActorRef jobManager = null;
 		ActorRef taskManager = null;
 
-		flinkConfiguration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
-		flinkConfiguration.setString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, temporaryFolder.newFolder().toString());
+		flinkConfiguration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		flinkConfiguration.setString(ConfigConstants.ZOOKEEPER_HA_PATH, temporaryFolder.newFolder().toString());
 		flinkConfiguration.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, slots);
 
 		try {

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java
index bdd019d..5c696ce 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java
@@ -35,7 +35,7 @@ import org.apache.flink.runtime.checkpoint.StandaloneCheckpointRecoveryFactory;
 import org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager;
 import org.apache.flink.runtime.executiongraph.restart.NoRestartStrategy;
 import org.apache.flink.runtime.instance.InstanceManager;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.runtime.jobmanager.StandaloneSubmittedJobGraphStore;
 import org.apache.flink.runtime.jobmanager.SubmittedJobGraphStore;
 import org.apache.flink.runtime.jobmanager.scheduler.Scheduler;
@@ -171,7 +171,7 @@ public class JobManagerLeaderElectionTest extends TestLogger {
 
 	private Props createJobManagerProps(Configuration configuration) throws Exception {
 		LeaderElectionService leaderElectionService;
-		if (RecoveryMode.fromConfig(configuration) == RecoveryMode.STANDALONE) {
+		if (HighAvailabilityMode.fromConfig(configuration) == HighAvailabilityMode.NONE) {
 			leaderElectionService = new StandaloneLeaderElectionService();
 		}
 		else {
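
A sketch of the resulting mode selection, assuming fromConfig falls back to
HighAvailabilityMode.NONE when the high-availability key is unset (the former
RecoveryMode.STANDALONE behavior); the tests above pass both "zookeeper" and
"ZOOKEEPER", so the parse is evidently case-insensitive:

    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;

    Configuration conf = new Configuration();
    // no key set: standalone leader election is chosen
    assert HighAvailabilityMode.fromConfig(conf) == HighAvailabilityMode.NONE;
    conf.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
    // ZooKeeper mode: a ZooKeeper-backed leader election service is created
    assert HighAvailabilityMode.fromConfig(conf) == HighAvailabilityMode.ZOOKEEPER;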


[27/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/slot_sharing.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/slot_sharing.svg b/docs/concepts/fig/slot_sharing.svg
deleted file mode 100644
index a3be133..0000000
--- a/docs/concepts/fig/slot_sharing.svg
+++ /dev/null
@@ -1,721 +0,0 @@
[721 lines of deleted SVG markup omitted. The removed figure,
docs/concepts/fig/slot_sharing.svg, showed two TaskManagers, each with three
Task Slots, holding the subtasks Source, map(), keyBy()/window()/apply(), and
Sink in shared slots (parallel instances labeled [1] and [2]).]
-         id="path3109"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 513.03406,309.00198 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.5131
 2 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-0.7877 0.0375,-0.48762 1.23781,0.0563 -0.0188,0.46886 0,-0.0375 0,0.7877 -1.25656,0 z m 0.15004,-2.58815 0.11253,-0.73143 0.13128,-0.54388 1.21905,0.30007 -0.13128,0.52513 0,-0.0563 -0.0938,0.69392 -1.23781,-0.18755 z m 0.63766,-2.53188 0.2063,-0.56264 0.30007,-0.63766 1.14404,0.54389 -0.31883,0.6189 0.0375,-0.0563 -0.2063,0.52513 -1.16279,-0.43136 z m 1.16279,-2.34433 0.2063,-0.35634 0.54388,-0.71268 0.994,0.73144 -0.52513,0.71267 0.0375,-0.0563 -0.2063,0.31883 -1.05026,-0.63766 z m 1.59414,-2.06301 0.13129,-0.15004 0.8252,-0.75019 0.84396,0.93774 -0.80645,0.71267 0.0563,-0.0375 -0.11252,0.13129 -0.93774,-0.84396 z m 2.06302,-1.70668 0.97524,-0.60015 0.15004,-0.075 0.54388,1.12528 -0.13128,0.0563 0.0563,-0.0375 -0.95648,0.58139 -0.63766,-1.05026 z m 2.30682,-1.21905 0.93773,-0.33759 0.31883,-0.0938 0.30008,1.
 21905 -0.28132,0.075 0.0563,-0.0187 -0.90022,0.31883 -0.43136,-1.16279 z m 2.53188,-0.71268 0.86272,-0.13128 0.43135,-0.0188 0.075,1.25656 -0.4126,0.0188 0.0563,-0.0188 -0.8252,0.13129 -0.18755,-1.23781 z m 2.58815,-0.18755 1.23781,0 0,1.23781 -1.23781,0 0,-1.23781 z m 2.49437,0 1.25656,0 0,1.23781 -1.25656,0 0,-1.23781 z m 2.49437,0 1.25656,0 0,1.23781 -1.25656,0 0,-1.23781 z m 2.51313,0 1.2378,0 0,1.23781 -1.2378,0 0,-1.23781 z m 2.49437,0 1.25656,0 0,1.23781 -1.25656,0 0,-1.23781 z m 2.49437,0 1.25656,0 0,1.23781 -1.25656,0 0,-1.23781 z m 2.51312,0 1.23781,0 0,1.23781 -1.23781,0 0,-1.23781 z m 2.49438,0 1.25656,0 0,1.23781 -1.25656,0 0,-1.23781 z m 2.49437,0 1.25656,0 0,1.23781 -1.25656,0 0,-1.23781 z m 2.51312,0 1.23781,0 0,1.23781 -1.23781,0 0,-1.23781 z m 2.49437,0 1.25657,0 0,1.23781 -1.25657,0 0,-1.23781 z m 2.49438,0 1.25656,0 0,1.23781 -1.25656,0 0,-1.23781 z m 2.51312,0 1.23781,0 0,1.23781 -1.23781,0 0,-1.23781 z m 2.49437,0 1.25657,0 0,1.23781 -1.25657,0 0,-1.23781 z m 2
 .49438,0 1.25656,0 0,1.23781 -1.25656,0 0,-1.23781 z m 2.51312,0 1.23781,0 0,1.23781 -1.23781,0 0,-1.23781 z m 2.49437,0 1.25657,0 0,1.23781 -1.25657,0 0,-1.23781 z m 2.49437,0 1.25657,0 0,1.23781 -1.25657,0 0,-1.23781 z m 2.51313,0 1.23781,0 0,1.23781 -1.23781,0 0,-1.23781 z m 2.53188,0 1.10653,0.0563 0.18754,0.0375 -0.16879,1.2378 -0.16879,-0.0375 0.0563,0.0188 -1.08777,-0.0563 0.075,-1.25657 z m 2.58815,0.30008 0.97524,0.26256 0.30007,0.0938 -0.43135,1.18154 -0.26257,-0.0938 0.0563,0.0187 -0.93773,-0.24381 0.30008,-1.21905 z m 2.49437,0.84396 0.75018,0.37509 0.39385,0.24381 -0.63766,1.06902 -0.37509,-0.24381 0.0375,0.0375 -0.71268,-0.35634 0.54389,-1.12528 z m 2.2318,1.33158 0.52513,0.39385 0.48763,0.45011 -0.84396,0.91898 -0.46887,-0.43136 0.0563,0.0375 -0.48762,-0.35634 0.73143,-1.01275 z m 1.93173,1.78169 0.30008,0.31883 0.52513,0.69393 -1.01275,0.75018 -0.50638,-0.67517 0.0375,0.0375 -0.26256,-0.28132 0.91897,-0.84396 z m 1.51913,2.11928 0.13129,0.2063 0.46886,0.95649 -1.1252
 8,0.54389 -0.46886,-0.93774 0.0375,0.0563 -0.11252,-0.1688 1.06901,-0.65641 z m 1.06902,2.4006 0.0188,0.0563 0.30008,1.2003 0.0187,0.075 -1.23781,0.18755 -0.0187,-0.0375 0.0187,0.0563 -0.28132,-1.14403 0.0188,0.0563 -0.0188,-0.0187 1.18155,-0.43136 z m 0.50638,2.62565 0.075,1.16279 0,0.11253 -1.25656,0 0,-0.11253 0,0.0375 -0.0563,-1.12528 1.23781,-0.075 z m 0.075,2.53188 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.237
 81 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m -0.0375,2.53188 -0.0375,0.67517 -0.0938,0.63766 -1.23781,-0.18755 0.0938,-0.60015 0,0.0563 0.0375,-0.63765 1.2378,0.0563 z m -0.39384,2.58814 -0.13129,0.54389 -0.26256,0.71268 -1.18155,-0.43136 0.26257,-0.67517 -0.0188,0.0563 0.13129,-0.50638 1.2003,0.30007 z m -0.91898,2.45687 -0.16879,0.35634 -0.48763,0.78769 -1.06901,-0.65641 0.46886,-0.75019 -0.0375,0.0375 0.16879,-0.31883 1.12528,0.54389 z m -1.4066,2.21305 -0.11253,0.15004 -0.76894,0.8252 -0.91898,-0.84396 0.73143,-0.80645 -0.0375,0.0563 0.0938,-0.131
 28 1.01276,0.75019 z m -1.89422,1.89422 -0.90023,0.67517 -0.15004,0.0938 -0.65641,-1.06902 0.13128,-0.075 -0.0375,0.0375 0.88147,-0.65642 0.73144,0.994 z m -2.17555,1.42535 -0.86271,0.41261 -0.33759,0.13128 -0.43135,-1.18154 0.30007,-0.11253 -0.0563,0.0188 0.84396,-0.39385 0.54388,1.12528 z m -2.45686,0.95649 -0.76894,0.18755 -0.50638,0.075 -0.18754,-1.23781 0.48762,-0.075 -0.075,0.0188 0.75019,-0.18755 0.30007,1.21905 z m -2.56939,0.41261 -0.71268,0.0375 -0.58139,0 0,-1.25657 0.56264,0 -0.0375,0 0.69392,-0.0375 0.075,1.25657 z m -2.53188,0.0375 -1.25656,0 0,-1.25657 1.25656,0 0,1.25657 z m -2.51313,0 -1.2378,0 0,-1.25657 1.2378,0 0,1.25657 z m -2.49437,0 -1.25656,0 0,-1.25657 1.25656,0 0,1.25657 z m -2.49437,0 -1.25656,0 0,-1.25657 1.25656,0 0,1.25657 z m -2.51312,0 -1.23781,0 0,-1.25657 1.23781,0 0,1.25657 z m -2.49438,0 -1.25656,0 0,-1.25657 1.25656,0 0,1.25657 z m -2.49437,0 -1.25656,0 0,-1.25657 1.25656,0 0,1.25657 z m -2.51312,0 -1.23781,0 0,-1.25657 1.23781,0 0,1.25657 z m -2
 .49437,0 -1.25657,0 0,-1.25657 1.25657,0 0,1.25657 z m -2.49438,0 -1.25656,0 0,-1.25657 1.25656,0 0,1.25657 z m -2.51312,0 -1.23781,0 0,-1.25657 1.23781,0 0,1.25657 z m -2.49437,0 -1.25657,0 0,-1.25657 1.25657,0 0,1.25657 z m -2.49438,0 -1.25656,0 0,-1.25657 1.25656,0 0,1.25657 z m -2.51312,0 -1.23781,0 0,-1.25657 1.23781,0 0,1.25657 z m -2.49437,0 -1.25657,0 0,-1.25657 1.25657,0 0,1.25657 z m -2.49437,0 -1.25657,0 0,-1.25657 1.25657,0 0,1.25657 z m -2.51313,0 -1.23781,0 0,-1.25657 1.23781,0 0,1.25657 z m -2.49437,0 -1.25656,0 0,-1.25657 1.25656,0 0,1.25657 z m -2.49437,0 -1.03151,0 -0.26257,-0.0188 0.075,-1.25656 0.24381,0.0187 -0.0375,0 1.01276,0 0,1.25657 z m -2.58815,-0.11253 -0.95649,-0.15004 -0.33758,-0.075 0.30007,-1.21905 0.31883,0.075 -0.075,-0.0188 0.91897,0.15004 -0.16879,1.23781 z m -2.56939,-0.60015 -0.75018,-0.28132 -0.45012,-0.2063 0.54389,-1.12528 0.4126,0.18754 -0.0563,-0.0188 0.73143,0.26256 -0.43136,1.18155 z m -2.36309,-1.12528 -0.52513,-0.31883 -0.54388,-0.4126 
 0.73143,-0.994 0.52513,0.39385 -0.0375,-0.0375 0.50638,0.31883 -0.65642,1.05026 z m -2.08177,-1.5754 -0.31882,-0.28132 -0.5814,-0.65641 0.91898,-0.84396 0.58139,0.63766 -0.0563,-0.0375 0.30008,0.26257 -0.84397,0.91897 z m -1.70667,-1.96924 -0.13128,-0.16879 -0.5814,-0.93773 1.06902,-0.63766 0.56264,0.90022 -0.0375,-0.0375 0.11253,0.13129 -0.994,0.75018 z m -1.27532,-2.26931 -0.0188,-0.0375 -0.43136,-1.16279 -0.0188,-0.0938 1.20029,-0.31883 0.0188,0.075 -0.0188,-0.0563 0.41261,1.08777 -0.0375,-0.0563 0.0188,0.0188 -1.12528,0.54389 z m -0.78769,-2.56939 -0.15004,-1.08777 -0.0188,-0.22506 1.25657,-0.0563 0,0.18754 0,-0.0563 0.15003,1.05026 -1.2378,0.18755 z"
-         id="path3111"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="460.63281"
-         y="47.214516"
-         id="text3113"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Processes</text>
-      <path
-         d="m 480.15712,53.938437 -58.81465,35.990216 0.65641,1.050262 58.81465,-35.971461 -0.65641,-1.069017 z m -58.72088,33.720901 -2.96324,4.744932 5.57014,-0.468867 -2.6069,-4.276065 z"
-         id="path3115"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="619.16907"
-         y="461.52097"
-         id="text3117"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Threads</text>
-      <path
-         d="m 640.13447,441.48498 -23.51836,-205.02607 1.23781,-0.15004 23.51836,205.02608 -1.23781,0.15003 z m -25.24379,-203.56321 1.91298,-5.25131 3.07577,4.68867 -4.98875,0.56264 z"
-         id="path3119"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 197.78677,203.732 c 0,-17.69503 14.30982,-32.04235 31.96734,-32.04235 17.64815,0 31.95796,14.34732 31.95796,32.04235 0,17.69504 -14.30981,32.03298 -31.95796,32.03298 -17.65752,0 -31.96734,-14.33794 -31.96734,-32.03298"
-         id="path3121"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 197.78677,203.732 c 0,-17.69503 14.30982,-32.04235 31.96734,-32.04235 17.64815,0 31.95796,14.34732 31.95796,32.04235 0,17.69504 -14.30981,32.03298 -31.95796,32.03298 -17.65752,0 -31.96734,-14.33794 -31.96734,-32.03298"
-         id="path3123"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="212.32973"
-         y="201.69257"
-         id="text3125"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="222.08215"
-         y="213.69556"
-         id="text3127"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[3]</text>
-      <path
-         d="m 247.33662,203.732 c 0,-17.68565 14.34732,-32.03298 32.03298,-32.03298 17.70441,0 32.05173,14.34733 32.05173,32.03298 0,17.70442 -14.34732,32.03298 -32.05173,32.03298 -17.68566,0 -32.03298,-14.32856 -32.03298,-32.03298"
-         id="path3129"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 247.33662,203.732 c 0,-17.68565 14.34732,-32.03298 32.03298,-32.03298 17.70441,0 32.05173,14.34733 32.05173,32.03298 0,17.70442 -14.34732,32.03298 -32.05173,32.03298 -17.68566,0 -32.03298,-14.32856 -32.03298,-32.03298"
-         id="path3131"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="263.95547"
-         y="201.69257"
-         id="text3133"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
-      <text
-         x="271.75742"
-         y="213.69556"
-         id="text3135"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[3]</text>
-      <path
-         d="m 197.01783,283.75819 c 0,-17.70441 14.30982,-32.05173 31.95796,-32.05173 17.64815,0 31.95797,14.34732 31.95797,32.05173 0,17.68566 -14.30982,32.03298 -31.95797,32.03298 -17.64814,0 -31.95796,-14.34732 -31.95796,-32.03298"
-         id="path3137"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="206.76814"
-         y="269.76364"
-         id="text3139"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-      <text
-         x="237.67584"
-         y="269.76364"
-         id="text3141"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-      <text
-         x="203.31728"
-         y="281.76663"
-         id="text3143"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-      <text
-         x="210.81915"
-         y="293.76962"
-         id="text3145"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
-      <text
-         x="221.32176"
-         y="305.77261"
-         id="text3147"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[3]</text>
-      <path
-         d="m 192.94807,228.63821 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51
 312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0.0188,-2.53188 0.0563,-1.10652 0.0188,-0.2063 1.23781,0.18754 -0.0188,0.1688 0,-0.0563 -0.0563,1.06902 -1.23781,-0.0563 z m 0.30008,-2.6069 0.28132,-1.05026 0.075,-0.2063 1.16279,0.4126 -0.0563,0.18755 0.0188,-0.0563 -0.26257,1.0315 -1.21905,-0.31883 z m 0.8252,-2.49437 0.43136,-0.90022 0.16879,-0.26257 1.06902,0.65642 -0.15004,0.22505 0.0188,-0.0563 -0.41261,0.88147 -1.12528,-0.54389 z m 1.31283,-2.25056 0.52513,-0.73143 0.28132,-0.30007 0.91898,0.84396 -0.24381,0.26256 0.0375,-0.0375 -0.52513,0.69392 -0.994,-0.73143 z m 1.72543,-1.96924 0.58139,-0.54388 0.41261,-0.30008 0.75018,0.994 -0.39384,0.30007 0.0375,-0.0375 -0.54389,0.50638 -0.84396,-0.91898 z m 2.08177,-1.57539 0.58139,-0.37509 0.56264,-0.26257 0.54389,1.12528 -0.54389,0.24381 0.0563,-0.0188 -0.56264,0.33758 -0.63766,-1.05026 z m 2.34433,-1.14404 0.5814,-0.206
 3 0.67517,-0.18754 0.31883,1.21905 -0.65642,0.16879 0.0563,-0.0187 -0.54389,0.18754 -0.43136,-1.16279 z m 2.55064,-0.63765 0.58139,-0.0938 0.71268,-0.0375 0.075,1.23781 -0.69392,0.0375 0.0563,0 -0.54389,0.075 -0.18754,-1.21905 z m 2.58814,-0.1688 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.49438,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.51312,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.51313,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.51313,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.51313,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1
 .25656,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.51313,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.51313,0 1.2378,0 0,1.25657 -1.2378,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.51313,0 1.2378,0 0,1.25657 -1.2378,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.51312,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49438,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.49437,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.51312,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.49438,0 1.25656,0 0,1.25657 -1.25656,0 0,-1.25657 z m 2.51312,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.2
 5657 -1.25657,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.51313,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.49437,0 1.25657,0 0,1.25657 -1.25657,0 0,-1.25657 z m 2.51313,0 1.23781,0 0,1.25657 -1.23781,0 0,-1.25657 z m 2.49437,0 1.18155,0 0.0938,0 -0.0563,1.25657 -0.0938,0 0.0375,0 -1.16279,0 0,-1.25657 z m 2.58815,0.075 1.21905,0.18755 0.075,0.0188 -0.30008,1.21906 -0.0563,-0.0188 0.0563,0.0188 -1.18154,-0.18755 0.18755,-1.23781 z m 2.56939,0.56264 1.08777,0.39385 0.13128,0.0563 -0.54389,1.12528 -0.0938,-0.0563 0.0563,0.0375 -1.06901,-0.39385 0.43136,-1.16279 z m 2.40059,1.03151 0.91898,0.56264 0.18755,0.13128 -0.75019,1.01275 -0.15003,-0.11252 0.0375,0.0188 -0.90022,-0.54388 0.65641,-1.06902 z m 2.13804,1.51913 0.71267,0.63766 0.22506,0.26256 -0.91898,0.84396 -0.2063,-0.24381 0.0375,0.0563 -0.69392,-0.63766 0.84396,-0.91898 z m 1.80045,1.89422 0.50637,0.69392 0.22506,0.3751 -1.05026,0.65641
  -0.22506,-0.35634 0.0375,0.0375 -0.50638,-0.67517 1.01276,-0.73143 z m 1.38784,2.21305 0.33759,0.69392 0.18754,0.50638 -1.18154,0.43136 -0.16879,-0.48762 0.0188,0.0563 -0.31883,-0.65641 1.12528,-0.54389 z m 0.90023,2.45686 0.18754,0.69393 0.0938,0.58139 -1.23781,0.18755 -0.075,-0.56264 0,0.075 -0.16879,-0.67517 1.2003,-0.30008 z m 0.4126,2.58815 0.0375,0.73143 0,0.54389 -1.23781,0 0,-0.54389 0,0.0375 -0.0375,-0.71268 1.23781,-0.0563 z m 0.0375,2.53188 0,1.23781 -1.23781,0 0,-1.23781 1.23781,0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.51313 0,1.23781 -1.23781,0 0,-1.23781 1.23781,0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.51313 0,1.2378 -1.23781,0 0,-1.2378 1.23781,0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.51313 0,1.2378 -1.23781,0 0,-1.2378 1.23781,
 0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.51312 0,1.23781 -1.23781,0 0,-1.23781 1.23781,0 z m 0,2.49438 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.49437 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.51312 0,1.23781 -1.23781,0 0,-1.23781 1.23781,0 z m 0,2.49437 0,1.25657 -1.23781,0 0,-1.25657 1.23781,0 z m 0,2.49438 0,1.25656 -1.23781,0 0,-1.25656 1.23781,0 z m 0,2.51312 0,1.23781 -1.23781,0 0,-1.23781 1.23781,0 z m 0,2.49437 0,0.52513 -0.0375,0.75019 -1.23781,-0.0563 0.0375,-0.75019 0,0.0375 0,-0.50638 1.23781,0 z m -0.16879,2.58815 -0.0938,0.56264 -0.18754,0.71268 -1.2003,-0.31883 0.16879,-0.67517 0,0.0563 0.075,-0.52513 1.23781,0.18755 z m -0.67517,2.53188 -0.16879,0.46887 -0.35634,0.73143 -1.12528,-0.54389 0.33758,-0.69392 -0.0188,0.0563 0.15003,-0.45011 1.18155,0.43136 z m -1.16279,2.34433 -0.22506,0.35634 -0.52513,0.71268 -0.994,-0.75019 0.50638,-0.67517 -0.0375,0.0375 0.2063,-0.33758 1.0
 6902,0.65641 z m -1.6129,2.06302 -0.20631,0.22505 -0.73143,0.67517 -0.84396,-0.91898 0.71268,-0.65641 -0.0375,0.0375 0.18755,-0.2063 0.91898,0.84396 z m -1.96924,1.70667 -0.1688,0.11253 -0.93773,0.58139 -0.65641,-1.06901 0.91897,-0.56264 -0.0375,0.0375 0.13128,-0.11253 0.75019,1.01275 z m -2.28807,1.27532 -0.11253,0.0563 -1.12528,0.39385 -0.41261,-1.16279 1.08777,-0.39385 -0.0563,0.0188 0.075,-0.0375 0.54389,1.12528 z m -2.49438,0.7877 -0.0563,0.0188 -1.23781,0.18755 -0.18755,-1.23781 1.2003,-0.16879 -0.0563,0 0.0188,0 0.31883,1.2003 z m -2.6069,0.28132 -0.075,0 -1.21906,0 0,-1.23781 1.2003,0 -0.0375,0 0.0563,0 0.075,1.23781 z m -2.53188,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.51312,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25657,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.49438,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.51312,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25657,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.49437,0 -1.2565
 7,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.51313,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.49437,0 -1.25657,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.51313,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.49437,0 -1.25657,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.51313,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.51313,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.51313,0 -1.2378,0 0,-1.23781 1.2378,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.51312,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49438,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.2
 3781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.51312,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49438,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.51312,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25657,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.49438,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.51312,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25657,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.49437,0 -1.25657,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.51313,0 -1.23781,0 0,-1.23781 1.23781,0 0,1.23781 z m -2.49437,0 -1.25656,0 0,-1.23781 1.25656,0 0,1.23781 z m -2.49437,0 -1.25657,0 0,-1.23781 1.25657,0 0,1.23781 z m -2.53188,-0.0188 -0.69393,-0.0375 -0.6189,-0.0938 0.18755,-1.23781 0.58139,0.0938 -0.0563,-0.0188 0.65641,0.0375 -0.0563,1.25656 z m -2.58815,-0.39385 -0.65641,-0.16879 -0.60015,-0.22505 0.43136,-1.16279 0.56264,0.2063 -0.0563,-0.0188 0.61891,0.15
 004 -0.30008,1.21905 z m -2.45686,-0.90022 -0.54389,-0.26257 -0.6189,-0.37509 0.65641,-1.06902 0.5814,0.3751 -0.0563,-0.0375 0.52513,0.24381 -0.54388,1.12528 z m -2.23181,-1.36909 -0.39385,-0.30008 -0.6189,-0.54388 0.84396,-0.91898 0.5814,0.52513 -0.0375,-0.0375 0.37509,0.26257 -0.75019,1.01275 z m -1.91297,-1.7817 -0.26257,-0.28132 -0.54388,-0.75018 0.99399,-0.75019 0.54389,0.73143 -0.0375,-0.0563 0.22506,0.26256 -0.91898,0.84396 z m -1.51913,-2.13803 -0.15004,-0.24381 -0.45011,-0.91898 1.12528,-0.54388 0.43136,0.90022 -0.0188,-0.0563 0.13128,0.22505 -1.06901,0.63766 z m -1.06902,-2.38184 -0.0563,-0.2063 -0.28132,-1.06902 1.2003,-0.30007 0.28132,1.0315 -0.0188,-0.0563 0.0563,0.16879 -1.18155,0.43136 z m -0.56264,-2.56939 -0.0188,-0.18755 -0.0563,-1.12528 1.23781,-0.0563 0.0563,1.08777 0,-0.0563 0.0188,0.15004 -1.23781,0.18755 z"
-         id="path3149"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 192.94807,309.15202 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51
 312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-0.7877 0.0375,-0.48762 1.23781,0.0563 -0.0188,0.48762 0,-0.0375 0,0.76894 -1.25656,0 z m 0.15004,-2.58815 0.11252,-0.71268 0.13129,-0.56264 1.21905,0.30008 -0.13128,0.54388 0,-0.075 -0.0938,0.69392 -1.2378,-0.18755 z m 0.63765,-2.53188 0.20631,-0.54388 0.31883,-0.65642 1.12528,0.54389 -0.31883,0.6189 0.0375,-0.0563 -0.18755,0.52513 -1.18155,-0.43136 z m 1.16279,-2.34433 0.20631,-0.33759 0.54388,-0.73143 0.994,0.75019 -0.52513,0.69392 0.0375,-0.0375 -0.18755,0.31883 -1.06902,-0.65641 z m 1.59415,-2.06302 0.13128,-0.15003 0.84396,-0.75019 0.82521,0.93773 -0.80645,0.73143 0.0563,-0.0563 -0.11253,0.13128 -0.93773,-0.84396 z m 2.06302,-1.70667 0.97524,-0.60015 0.15004,-0.075 0.54388,1.12528 -0.13128,0.0563 0.0563,-0.0188 -0.93773,0.58139 -0.65641,-1.06901 z m 2.32557,-1.21906 0.91898,-0.33758 0.31883,-0.075 0.318
 83,1.2003 -0.30007,0.075 0.0563,-0.0188 -0.88147,0.33759 -0.43136,-1.18155 z m 2.51313,-0.69392 0.86272,-0.13128 0.45011,-0.0375 0.0563,1.25656 -0.4126,0.0188 0.0563,0 -0.8252,0.11253 -0.18755,-1.21906 z m 2.58815,-0.2063 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2
 .49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.53188,0.0188 1.10653,0.0563 0.2063,0.0188 -0.18755,1.23781 -0.16879,-0.0188 0.0563,0 -1.06901,-0.0563 0.0563,-1.23781 z m 2.6069,0.30008 0.95649,0.24381 0.30007,0.11253 -0.43136,1.18154 -0.26256,-0.11253 0.0563,0.0188 -0.93773,-0.22506 0.31883,-1.21905 z m 2.47562,0.84396 0.75018,0.35634 0.41261,0.26256 -0.65642,1.05027 -0.37509,-0.22506 0.0375,0.0188 -0.71268,-0.33758 0.54389,-1.12528 z m 2.25056,1.33158 0.50637,0.37509 0.50638,0.46887 -0.84396,0.91898 -0.48762,-0.43136 0.0563,0.0375 -0.48762,-0.35634 0.75019,-1.01275 z m 1.93173,1.7817 0.28132,0.30007 0.52513,0.71268 -0.994,0.75018 -0.52513,-0.69392 0.0375,0.0563 -0.26257,-0.30008 0.93774,-0.8252 z m 1.51913,2.11927 0.11252,0.18755 0.46887,0.97524 -1.12528,
 0.54389 -0.46887,-0.95649 0.0375,0.0563 -0.0938,-0.15003 1.06902,-0.65642 z m 1.05026,2.4006 0.0187,0.0563 0.30008,1.20029 0.0187,0.075 -1.23781,0.18755 -0.0188,-0.0563 0.0188,0.075 -0.28132,-1.14403 0.0188,0.0563 -0.0188,-0.0187 1.18155,-0.43136 z m 0.52513,2.62566 0.0563,1.14403 0,0.13128 -1.25656,0 0,-0.11252 0,0.0375 -0.0563,-1.12528 1.25656,-0.075 z m 0.0563,2.53188 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0

<TRUNCATED>

[50/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/dataset_transformations.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/dataset_transformations.md b/docs/apis/batch/dataset_transformations.md
deleted file mode 100644
index fe85b31..0000000
--- a/docs/apis/batch/dataset_transformations.md
+++ /dev/null
@@ -1,2338 +0,0 @@
----
-title: "DataSet Transformations"
-
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-parent: dataset_api
-sub-nav-pos: 1
-sub-nav-title: Transformations
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This document gives a deep-dive into the available transformations on DataSets. For a general introduction to the
-Flink Java API, please refer to the [Programming Guide](index.html).
-
-For zipping elements in a data set with a dense index, please refer to the [Zip Elements Guide](zip_elements_guide.html).
-
-* This will be replaced by the TOC
-{:toc}
-
-### Map
-
-The Map transformation applies a user-defined map function on each element of a DataSet.
-It implements a one-to-one mapping, that is, exactly one element must be returned by
-the function.
-
-The following code transforms a DataSet of Integer pairs into a DataSet of Integers:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// MapFunction that adds two integer values
-public class IntAdder implements MapFunction<Tuple2<Integer, Integer>, Integer> {
-  @Override
-  public Integer map(Tuple2<Integer, Integer> in) {
-    return in.f0 + in.f1;
-  }
-}
-
-// [...]
-DataSet<Tuple2<Integer, Integer>> intPairs = // [...]
-DataSet<Integer> intSums = intPairs.map(new IntAdder());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val intPairs: DataSet[(Int, Int)] = // [...]
-val intSums = intPairs.map { pair => pair._1 + pair._2 }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- intSums = intPairs.map(lambda x: sum(x))
-~~~
-
-</div>
-</div>
-
-### FlatMap
-
-The FlatMap transformation applies a user-defined flat-map function on each element of a DataSet.
-This variant of a map function can return arbitrarily many result elements (including none) for each input element.
-
-The following code transforms a DataSet of text lines into a DataSet of words:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// FlatMapFunction that tokenizes a String by whitespace characters and emits all String tokens.
-public class Tokenizer implements FlatMapFunction<String, String> {
-  @Override
-  public void flatMap(String value, Collector<String> out) {
-    for (String token : value.split("\\W")) {
-      out.collect(token);
-    }
-  }
-}
-
-// [...]
-DataSet<String> textLines = // [...]
-DataSet<String> words = textLines.flatMap(new Tokenizer());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val textLines: DataSet[String] = // [...]
-val words = textLines.flatMap { _.split(" ") }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- words = lines.flat_map(lambda x, c: x.split())
-~~~
-
-</div>
-</div>
-
-### MapPartition
-
-MapPartition transforms a parallel partition in a single function call. The map-partition function
-gets the partition as an Iterable and can produce an arbitrary number of result values. The number of
-elements in each partition depends on the degree of parallelism and previous operations.
-
-The following code transforms a DataSet of text lines into a DataSet of counts per partition:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-public class PartitionCounter implements MapPartitionFunction<String, Long> {
-
-  public void mapPartition(Iterable<String> values, Collector<Long> out) {
-    long c = 0;
-    for (String s : values) {
-      c++;
-    }
-    out.collect(c);
-  }
-}
-
-// [...]
-DataSet<String> textLines = // [...]
-DataSet<Long> counts = textLines.mapPartition(new PartitionCounter());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val textLines: DataSet[String] = // [...]
-// Some is required because the return value must be a Collection.
-// There is an implicit conversion from Option to a Collection.
-val counts = textLines.mapPartition { in => Some(in.size) }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- counts = lines.map_partition(lambda x,c: [sum(1 for _ in x)])
-~~~
-
-</div>
-</div>
-
-### Filter
-
-The Filter transformation applies a user-defined filter function on each element of a DataSet and retains only those elements for which the function returns `true`.
-
-The following code removes all Integers smaller than zero from a DataSet:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// FilterFunction that filters out all Integers smaller than zero.
-public class NaturalNumberFilter implements FilterFunction<Integer> {
-  @Override
-  public boolean filter(Integer number) {
-    return number >= 0;
-  }
-}
-
-// [...]
-DataSet<Integer> intNumbers = // [...]
-DataSet<Integer> naturalNumbers = intNumbers.filter(new NaturalNumberFilter());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val intNumbers: DataSet[Int] = // [...]
-val naturalNumbers = intNumbers.filter { _ >= 0 }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- naturalNumbers = intNumbers.filter(lambda x: x >= 0)
-~~~
-
-</div>
-</div>
-
-**IMPORTANT:** The system assumes that the function does not modify the elements on which the predicate is applied. Violating this assumption
-can lead to incorrect results.
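-
-To make the pitfall concrete, here is a hedged counter-example (the `MutatingFilter` class is hypothetical, written only for illustration) of a filter function that violates this assumption:
-
-~~~java
-// BAD: mutates its input while evaluating the predicate. The system may reuse
-// objects between function calls, so this side effect can corrupt the data
-// seen by downstream operators and lead to incorrect results.
-public class MutatingFilter implements FilterFunction<Tuple2<Integer, Integer>> {
-  @Override
-  public boolean filter(Tuple2<Integer, Integer> t) {
-    t.f0 = t.f0 + 1;  // side effect on the input element: do not do this
-    return t.f0 >= 0;
-  }
-}
-~~~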
-
-### Projection of Tuple DataSet
-
-The Project transformation removes or moves Tuple fields of a Tuple DataSet.
-The `project(int...)` method selects Tuple fields that should be retained by their index and defines their order in the output Tuple.
-
-Projections do not require the definition of a user function.
-
-The following code shows different ways to apply a Project transformation on a DataSet:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple3<Integer, Double, String>> in = // [...]
-// converts Tuple3<Integer, Double, String> into Tuple2<String, Integer>
-DataSet<Tuple2<String, Integer>> out = in.project(2,0);
-~~~
-
-#### Projection with Type Hint
-
-Note that the Java compiler cannot infer the return type of the `project` operator. This can cause a problem if you call another operator on the result of the `project` operator, such as:
-
-~~~java
-DataSet<Tuple5<String,String,String,String,String>> ds = // [...]
-DataSet<Tuple1<String>> ds2 = ds.project(0).distinct(0);
-~~~
-
-This problem can be overcome by hinting the return type of the `project` operator like this:
-
-~~~java
-DataSet<Tuple1<String>> ds2 = ds.<Tuple1<String>>project(0).distinct(0);
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-Not supported.
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-out = input.project(2, 0)
-~~~
-
-</div>
-</div>
-
-### Transformations on Grouped DataSet
-
-The reduce operations can operate on grouped data sets. Specifying the key to
-be used for grouping can be done in many ways:
-
-- key expressions
-- a key-selector function
-- one or more field position keys (Tuple DataSet only)
-- Case Class fields (Case Classes only)
-
-Please look at the reduce examples to see how the grouping keys are specified; the short sketch below gives a quick preview.
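-
-A hedged preview, reusing the `WC` POJO that is defined in the Reduce examples below:
-
-~~~java
-DataSet<WC> pojos = // [...]
-DataSet<Tuple2<Integer, String>> tuples = // [...]
-
-pojos.groupBy("word");                         // key expression
-pojos.groupBy(new KeySelector<WC, String>() {  // key-selector function
-  public String getKey(WC wc) { return wc.word; }
-});
-tuples.groupBy(0, 1);                          // field position keys (Tuple DataSets only)
-// Case Class fields are the Scala counterpart, e.g. dataSet.groupBy("a", "b")
-~~~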
-
-### Reduce on Grouped DataSet
-
-A Reduce transformation that is applied on a grouped DataSet reduces each group to a single
-element using a user-defined reduce function.
-For each group of input elements, a reduce function successively combines pairs of elements into one
-element until only a single element for each group remains.
-
-#### Reduce on DataSet Grouped by Key Expression
-
-Key expressions specify one or more fields of each element of a DataSet. Each key expression is
-either the name of a public field or a getter method. A dot can be used to drill down into objects.
-The key expression "*" selects all fields.
-The following code shows how to group a POJO DataSet using key expressions and to reduce it
-with a reduce function.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// some ordinary POJO
-public class WC {
-  public String word;
-  public int count;
-  // [...]
-}
-
-// ReduceFunction that sums Integer attributes of a POJO
-public class WordCounter implements ReduceFunction<WC> {
-  @Override
-  public WC reduce(WC in1, WC in2) {
-    return new WC(in1.word, in1.count + in2.count);
-  }
-}
-
-// [...]
-DataSet<WC> words = // [...]
-DataSet<WC> wordCounts = words
-                         // DataSet grouping on field "word"
-                         .groupBy("word")
-                         // apply ReduceFunction on grouped DataSet
-                         .reduce(new WordCounter());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-// some ordinary POJO
-class WC(val word: String, val count: Int) {
-  def this() {
-    this(null, -1)
-  }
-  // [...]
-}
-
-val words: DataSet[WC] = // [...]
-val wordCounts = words.groupBy("word").reduce {
-  (w1, w2) => new WC(w1.word, w1.count + w2.count)
-}
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-</div>
-</div>
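-
-A hedged sketch of the drill-down syntax (the `ComplexWC` POJO is hypothetical, introduced only for illustration):
-
-~~~java
-public class ComplexWC {
-  public WC wc;  // nested POJO from the example above
-  public int id;
-}
-
-DataSet<ComplexWC> data = // [...]
-// a dot drills into the nested object; groupBy("*") would select all fields
-data.groupBy("wc.word");
-~~~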
-
-#### Reduce on DataSet Grouped by KeySelector Function
-
-A key-selector function extracts a key value from each element of a DataSet. The extracted key
-value is used to group the DataSet.
-The following code shows how to group a POJO DataSet using a key-selector function and to reduce it
-with a reduce function.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// some ordinary POJO
-public class WC {
-  public String word;
-  public int count;
-  // [...]
-}
-
-// ReduceFunction that sums Integer attributes of a POJO
-public class WordCounter implements ReduceFunction<WC> {
-  @Override
-  public WC reduce(WC in1, WC in2) {
-    return new WC(in1.word, in1.count + in2.count);
-  }
-}
-
-// [...]
-DataSet<WC> words = // [...]
-DataSet<WC> wordCounts = words
-                         // DataSet grouping on field "word"
-                         .groupBy(new SelectWord())
-                         // apply ReduceFunction on grouped DataSet
-                         .reduce(new WordCounter());
-
-public class SelectWord implements KeySelector<WC, String> {
-  @Override
-  public String getKey(WC w) {
-    return w.word;
-  }
-}
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-// some ordinary POJO
-class WC(val word: String, val count: Int) {
-  def this() {
-    this(null, -1)
-  }
-  // [...]
-}
-
-val words: DataSet[WC] = // [...]
-val wordCounts = words.groupBy { _.word } reduce {
-  (w1, w2) => new WC(w1.word, w1.count + w2.count)
-}
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-class WordCounter(ReduceFunction):
-    def reduce(self, in1, in2):
-        return (in1[0], in1[1] + in2[1])
-
-words = # [...]
-wordCounts = words \
-    .group_by(lambda x: x[0]) \
-    .reduce(WordCounter())
-~~~
-</div>
-</div>
-
-#### Reduce on DataSet Grouped by Field Position Keys (Tuple DataSets only)
-
-Field position keys specify one or more fields of a Tuple DataSet that are used as grouping keys.
-The following code shows how to use field position keys and apply a reduce function:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple3<String, Integer, Double>> tuples = // [...]
-DataSet<Tuple3<String, Integer, Double>> reducedTuples = tuples
-                                         // group DataSet on first and second field of Tuple
-                                         .groupBy(0, 1)
-                                         // apply ReduceFunction on grouped DataSet
-                                         .reduce(new MyTupleReducer());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val tuples: DataSet[(String, Int, Double)] = // [...]
-// group on the first and second Tuple field
-val reducedTuples = tuples.groupBy(0, 1).reduce { ... }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- reducedTuples = tuples.group_by(0, 1).reduce( ... )
-~~~
-
-</div>
-</div>
-
-#### Reduce on DataSet grouped by Case Class Fields
-
-When using Case Classes you can also specify the grouping key using the names of the fields:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-Not supported.
-~~~
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-case class MyClass(a: String, b: Int, c: Double)
-val tuples: DataSet[MyClass] = // [...]
-// group on the first and second field
-val reducedTuples = tuples.groupBy("a", "b").reduce { ... }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-</div>
-</div>
-
-### GroupReduce on Grouped DataSet
-
-A GroupReduce transformation that is applied on a grouped DataSet calls a user-defined
-group-reduce function for each group. The difference
-between this and *Reduce* is that the user-defined function gets the whole group at once.
-The function is invoked with an Iterable over all elements of a group and can return an arbitrary
-number of result elements.
-
-#### GroupReduce on DataSet Grouped by Field Position Keys (Tuple DataSets only)
-
-The following code shows how duplicate strings can be removed from a DataSet grouped by Integer.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-public class DistinctReduce
-         implements GroupReduceFunction<Tuple2<Integer, String>, Tuple2<Integer, String>> {
-
-  @Override
-  public void reduce(Iterable<Tuple2<Integer, String>> in, Collector<Tuple2<Integer, String>> out) {
-
-    Set<String> uniqStrings = new HashSet<String>();
-    Integer key = null;
-
-    // add all strings of the group to the set
-    for (Tuple2<Integer, String> t : in) {
-      key = t.f0;
-      uniqStrings.add(t.f1);
-    }
-
-    // emit all unique strings.
-    for (String s : uniqStrings) {
-      out.collect(new Tuple2<Integer, String>(key, s));
-    }
-  }
-}
-
-// [...]
-DataSet<Tuple2<Integer, String>> input = // [...]
-DataSet<Tuple2<Integer, String>> output = input
-                           .groupBy(0)            // group DataSet by the first tuple field
-                           .reduceGroup(new DistinctReduce());  // apply GroupReduceFunction
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[(Int, String)] = // [...]
-val output = input.groupBy(0).reduceGroup {
-      (in, out: Collector[(Int, String)]) =>
-        in.toSet foreach (out.collect)
-    }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- class DistinctReduce(GroupReduceFunction):
-   def reduce(self, iterator, collector):
-     seen = set()
-     for value in iterator:
-       if value[1] not in seen:
-         seen.add(value[1])
-         collector.collect(value)
-
- output = data.group_by(0).reduce_group(DistinctReduce())
-~~~
-
-</div>
-</div>
-
-#### GroupReduce on DataSet Grouped by Key Expression, KeySelector Function, or Case Class Fields
-
-These work analogously to [key expressions](#reduce-on-dataset-grouped-by-key-expression),
-[key-selector functions](#reduce-on-dataset-grouped-by-keyselector-function),
-and [case class fields](#reduce-on-dataset-grouped-by-case-class-fields) in *Reduce* transformations.
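-
-As a minimal sketch (reusing the `WC` POJO from the Reduce examples; `MyGroupReducer` stands in for any `GroupReduceFunction<WC, WC>`), only the terminal call changes:
-
-~~~java
-DataSet<WC> words = // [...]
-// same grouping styles as for Reduce; reduceGroup() replaces reduce()
-DataSet<WC> result = words.groupBy("word").reduceGroup(new MyGroupReducer());
-~~~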
-
-
-#### GroupReduce on sorted groups
-
-A group-reduce function accesses the elements of a group using an Iterable. Optionally, the Iterable can hand out the elements of a group in a specified order. In many cases this can help to reduce the complexity of a user-defined
-group-reduce function and improve its efficiency.
-
-The following code shows another example of how to remove duplicate Strings in a DataSet grouped by an Integer and sorted by String.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// GroupReduceFunction that removes consecutive identical elements
-public class DistinctReduce
-         implements GroupReduceFunction<Tuple2<Integer, String>, Tuple2<Integer, String>> {
-
-  @Override
-  public void reduce(Iterable<Tuple2<Integer, String>> in, Collector<Tuple2<Integer, String>> out) {
-    Integer key = null;
-    String comp = null;
-
-    for (Tuple2<Integer, String> t : in) {
-      key = t.f0;
-      String next = t.f1;
-
-      // check if strings are different
-      if (comp == null || !next.equals(comp)) {
-        out.collect(new Tuple2<Integer, String>(key, next));
-        comp = next;
-      }
-    }
-  }
-}
-
-// [...]
-DataSet<Tuple2<Integer, String>> input = // [...]
-DataSet<Tuple2<Integer, String>> output = input
-                         .groupBy(0)                         // group DataSet by first field
-                         .sortGroup(1, Order.ASCENDING)      // sort groups on second tuple field
-                         .reduceGroup(new DistinctReduce());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[(Int, String)] = // [...]
-val output = input.groupBy(0).sortGroup(1, Order.ASCENDING).reduceGroup {
-      (in, out: Collector[(Int, String)]) =>
-        var prev: (Int, String) = null
-        for (t <- in) {
-          if (prev == null || prev != t) {
-            out.collect(t)
-            prev = t
-          }
-        }
-    }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- class DistinctReduce(GroupReduceFunction):
-   def reduce(self, iterator, collector):
-     prev = None
-     for value in iterator:
-       if value != prev:
-         collector.collect(value)
-         prev = value
-
- output = data.group_by(0).sort_group(1, Order.ASCENDING).reduce_group(DistinctReduce())
-~~~
-
-</div>
-</div>
-
-**Note:** A GroupSort often comes for free if the grouping is established using a sort-based execution strategy of an operator before the reduce operation.
-
-#### Combinable GroupReduceFunctions
-
-In contrast to a reduce function, a group-reduce function is not
-implicitly combinable. In order to make a group-reduce function
-combinable it must implement the `GroupCombineFunction` interface.
-
-**Important**: The generic input and output types of
-the `GroupCombineFunction` interface must be equal to the generic input type
-of the `GroupReduceFunction` as shown in the following example:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// Combinable GroupReduceFunction that computes a sum.
-public class MyCombinableGroupReducer implements
-  GroupReduceFunction<Tuple2<String, Integer>, String>,
-  GroupCombineFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>
-{
-  @Override
-  public void reduce(Iterable<Tuple2<String, Integer>> in,
-                     Collector<String> out) {
-
-    String key = null;
-    int sum = 0;
-
-    for (Tuple2<String, Integer> curr : in) {
-      key = curr.f0;
-      sum += curr.f1;
-    }
-    // concat key and sum and emit
-    out.collect(key + "-" + sum);
-  }
-
-  @Override
-  public void combine(Iterable<Tuple2<String, Integer>> in,
-                      Collector<Tuple2<String, Integer>> out) {
-    String key = null;
-    int sum = 0;
-
-    for (Tuple2<String, Integer> curr : in) {
-      key = curr.f0;
-      sum += curr.f1;
-    }
-    // emit tuple with key and sum
-    out.collect(new Tuple2<>(key, sum));
-  }
-}
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-
-// Combinable GroupReduceFunction that computes a sum.
-class MyCombinableGroupReducer
-  extends GroupReduceFunction[(String, Int), String]
-  with GroupCombineFunction[(String, Int), (String, Int)]
-{
-  override def reduce(
-    in: java.lang.Iterable[(String, Int)],
-    out: Collector[String]): Unit =
-  {
-    val r: (String, Int) =
-      in.asScala.reduce( (a,b) => (a._1, a._2 + b._2) )
-    // concat key and sum and emit
-    out.collect (r._1 + "-" + r._2)
-  }
-
-  override def combine(
-    in: java.lang.Iterable[(String, Int)],
-    out: Collector[(String, Int)]): Unit =
-  {
-    val r: (String, Int) =
-      in.asScala.reduce( (a,b) => (a._1, a._2 + b._2) )
-    // emit tuple with key and sum
-    out.collect(r)
-  }
-}
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- class GroupReduce(GroupReduceFunction):
-   def reduce(self, iterator, collector):
-     key, int_sum = iterator.next()
-     for value in iterator:
-       int_sum += value[1]
-     collector.collect(key + "-" + str(int_sum))
-
-   def combine(self, iterator, collector):
-     key, int_sum = iterator.next()
-     for value in iterator:
-       int_sum += value[1]
-     collector.collect((key, int_sum))
-
-data.reduce_group(GroupReduce(), combinable=True)
-~~~
-
-</div>
-</div>
-
-### GroupCombine on a Grouped DataSet
-
-The GroupCombine transformation is the generalized form of the combine step in
-the combinable GroupReduceFunction. It is generalized in the sense that it
-allows combining of input type `I` to an arbitrary output type `O`. In contrast,
-the combine step in the GroupReduce only allows combining from input type `I` to
-output type `I`. This is because the reduce step in the GroupReduceFunction
-expects input type `I`.
-
-In some applications, it is desirable to combine a DataSet into an intermediate
-format before performing additional transformations (e.g. to reduce data
-size). This can be achieved with a CombineGroup transformation at very little
-cost.
-
-**Note:** The GroupCombine on a Grouped DataSet is performed in memory with a
-  greedy strategy, which may not process all data at once but in multiple
-  steps. It is also performed on the individual partitions without a data
-  exchange, as in a GroupReduce transformation. This may lead to partial
-  results.
-
-The following example demonstrates the use of a CombineGroup transformation for
-an alternative WordCount implementation.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<String> input = // [...] The words received as input
-
-DataSet<Tuple2<String, Integer>> combinedWords = input
-  .groupBy(0) // group identical words
-  .combineGroup(new GroupCombineFunction<String, Tuple2<String, Integer>>() {
-
-    public void combine(Iterable<String> words, Collector<Tuple2<String, Integer>> out) { // combine
-        String key = null;
-        int count = 0;
-
-        for (String word : words) {
-            key = word;
-            count++;
-        }
-        // emit tuple with word and count
-        out.collect(new Tuple2<>(key, count));
-    }
-});
-
-DataSet<Tuple2<String, Integer>> output = combinedWords
-  .groupBy(0)                              // group by words again
-  .reduceGroup(new GroupReduceFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>() { // group reduce with full data exchange
-
-    public void reduce(Iterable<Tuple2<String, Integer>> words, Collector<Tuple2<String, Integer>> out) {
-        String key = null;
-        int count = 0;
-
-        for (Tuple2<String, Integer> word : words) {
-            key = word.f0;
-            count += word.f1;  // sum the partial counts produced by the combine step
-        }
-        // emit tuple with word and total count
-        out.collect(new Tuple2<>(key, count));
-    }
-});
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[String] = [..] // The words received as input
-
-val combinedWords: DataSet[(String, Int)] = input
-  .groupBy(0)
-  .combineGroup {
-    (words, out: Collector[(String, Int)]) =>
-        var key: String = null
-        var count = 0
-
-        for (word <- words) {
-            key = word
-            count += 1
-        }
-        out.collect((key, count))
-}
-
-val output: DataSet[(String, Int)] = combinedWords
-  .groupBy(0)
-  .reduceGroup {
-    (words, out: Collector[(String, Int)]) =>
-        var key: String = null
-        var sum = 0
-
-        for ((word, count) <- words) {
-            key = word
-            sum += count
-        }
-        out.collect((key, sum))
-}
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-The above alternative WordCount implementation demonstrates how the GroupCombine
-combines words before performing the GroupReduce transformation. The example is
-just a proof of concept. Note how the combine step changes the type of the
-DataSet, which would normally require an additional Map transformation before
-executing the GroupReduce.
-
-### Aggregate on Grouped Tuple DataSet
-
-There are some aggregation operations that are commonly used. The Aggregate transformation provides the following built-in aggregation functions:
-
-- Sum,
-- Min, and
-- Max.
-
-The Aggregate transformation can only be applied on a Tuple DataSet and supports only field position keys for grouping.
-
-The following code shows how to apply an Aggregation transformation on a DataSet grouped by field position keys:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple3<Integer, String, Double>> input = // [...]
-DataSet<Tuple3<Integer, String, Double>> output = input
-                                   .groupBy(1)        // group DataSet on second field
-                                   .aggregate(SUM, 0) // compute sum of the first field
-                                   .and(MIN, 2);      // compute minimum of the third field
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[(Int, String, Double)] = // [...]
-val output = input.groupBy(1).aggregate(SUM, 0).and(MIN, 2)
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-from flink.functions.Aggregation import Sum, Min
-
-input = # [...]
-output = input.group_by(1).aggregate(Sum, 0).and_agg(Min, 2)
-~~~
-
-</div>
-</div>
-
-To apply multiple aggregations on a DataSet, it is necessary to use the `.and()` function after the first aggregate; that is, `.aggregate(SUM, 0).and(MIN, 2)` produces the sum of field 0 and the minimum of field 2 of the original DataSet.
-In contrast, `.aggregate(SUM, 0).aggregate(MIN, 2)` applies an aggregation on an aggregation. In the given example, it would produce the minimum of field 2 after calculating the sum of field 0 grouped by field 1.
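-
-To make the difference concrete, the following sketch contrasts both call chains on the grouped DataSet from the example above:
-
-~~~java
-DataSet<Tuple3<Integer, String, Double>> input = // [...]
-
-// one pass: sum of field 0 and minimum of field 2, computed per group
-DataSet<Tuple3<Integer, String, Double>> combined = input
-                                   .groupBy(1)
-                                   .aggregate(SUM, 0).and(MIN, 2);
-
-// two passes: the MIN aggregation runs on the result of the SUM aggregation
-DataSet<Tuple3<Integer, String, Double>> chained = input
-                                   .groupBy(1)
-                                   .aggregate(SUM, 0).aggregate(MIN, 2);
-~~~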
-
-**Note:** The set of aggregation functions will be extended in the future.
-
-### MinBy / MaxBy on Grouped Tuple DataSet
-
-The MinBy (MaxBy) transformation selects a single tuple for each group of tuples. The selected tuple is the tuple whose values of one or more specified fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary tuple of these tuples is returned.
-
-The following code shows how to select the tuple with the minimum values for the `Integer` and `Double` fields for each group of tuples with the same `String` value from a `DataSet<Tuple3<Integer, String, Double>>`:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple3<Integer, String, Double>> input = // [...]
-DataSet<Tuple3<Integer, String, Double>> output = input
-                                   .groupBy(1)   // group DataSet on second field
-                                   .minBy(0, 2); // select tuple with minimum values for first and third field.
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[(Int, String, Double)] = // [...]
-val output: DataSet[(Int, String, Double)] = input
-                                   .groupBy(1)  // group DataSet on second field
-                                   .minBy(0, 2) // select tuple with minimum values for first and third field.
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-### Reduce on full DataSet
-
-The Reduce transformation applies a user-defined reduce function to all elements of a DataSet.
-The reduce function subsequently combines pairs of elements into one element until only a single element remains.
-
-The following code shows how to sum all elements of an Integer DataSet:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// ReduceFunction that sums Integers
-public class IntSummer implements ReduceFunction<Integer> {
-  @Override
-  public Integer reduce(Integer num1, Integer num2) {
-    return num1 + num2;
-  }
-}
-
-// [...]
-DataSet<Integer> intNumbers = // [...]
-DataSet<Integer> sum = intNumbers.reduce(new IntSummer());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val intNumbers = env.fromElements(1,2,3)
-val sum = intNumbers.reduce (_ + _)
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- intNumbers = env.from_elements(1,2,3)
- sum = intNumbers.reduce(lambda x,y: x + y)
-~~~
-
-</div>
-</div>
-
-Reducing a full DataSet using the Reduce transformation implies that the final Reduce operation cannot be done in parallel. However, a reduce function is automatically combinable such that a Reduce transformation does not limit scalability for most use cases.
-
-### GroupReduce on full DataSet
-
-The GroupReduce transformation applies a user-defined group-reduce function on all elements of a DataSet.
-A group-reduce function can iterate over all elements of the DataSet and return an arbitrary number of result elements.
-
-The following example shows how to apply a GroupReduce transformation on a full DataSet:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Integer> input = // [...]
-// apply a (preferably combinable) GroupReduceFunction to a DataSet
-DataSet<Double> output = input.reduceGroup(new MyGroupReducer());
-~~~
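-
-`MyGroupReducer` is not shown above; a hypothetical implementation matching the types in the example (it averages all values, an assumption made only for illustration) could look like this:
-
-~~~java
-// hypothetical GroupReduceFunction: emits the average of all Integer elements
-public class MyGroupReducer implements GroupReduceFunction<Integer, Double> {
-  @Override
-  public void reduce(Iterable<Integer> values, Collector<Double> out) {
-    long sum = 0;
-    long count = 0;
-    for (Integer i : values) {
-      sum += i;
-      count++;
-    }
-    if (count > 0) {
-      out.collect((double) sum / count);
-    }
-  }
-}
-~~~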
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[Int] = // [...]
-val output = input.reduceGroup(new MyGroupReducer())
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- output = data.reduce_group(MyGroupReducer())
-~~~
-
-</div>
-</div>
-
-**Note:** A GroupReduce transformation on a full DataSet cannot be done in parallel if the
-group-reduce function is not combinable. Therefore, this can be a very compute intensive operation.
-See the paragraph on "Combinable GroupReduceFunctions" above to learn how to implement a
-combinable group-reduce function.
-
-### GroupCombine on a full DataSet
-
-The GroupCombine on a full DataSet works similarly to the GroupCombine on a
-grouped DataSet. The data is partitioned on all nodes and then combined in a
-greedy fashion (i.e. only data fitting into memory is combined at once).
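-
-For illustration, a minimal sketch that pre-combines a full DataSet of words into per-partition partial counts (the word-count setting is reused from the grouped example above):
-
-~~~java
-DataSet<String> input = [..] // The words received as input
-
-// combine without a preceding groupBy(): partial, per-partition results
-DataSet<Tuple2<String, Integer>> partialCounts = input
-  .combineGroup(new GroupCombineFunction<String, Tuple2<String, Integer>>() {
-
-    public void combine(Iterable<String> words, Collector<Tuple2<String, Integer>> out) {
-        int count = 0;
-        for (String word : words) {
-            count++;
-        }
-        // one partial count per partition
-        out.collect(new Tuple2<>("count", count));
-    }
-});
-~~~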
-
-### Aggregate on full Tuple DataSet
-
-There are some aggregation operations that are commonly used. The Aggregate transformation
-provides the following built-in aggregation functions:
-
-- Sum,
-- Min, and
-- Max.
-
-The Aggregate transformation can only be applied on a Tuple DataSet.
-
-The following code shows how to apply an Aggregation transformation on a full DataSet:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<Integer, Double>> input = // [...]
-DataSet<Tuple2<Integer, Double>> output = input
-                                     .aggregate(SUM, 0)    // compute sum of the first field
-                                     .and(MIN, 1);    // compute minimum of the second field
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[(Int, String, Double)] = // [...]
-val output = input.aggregate(SUM, 0).and(MIN, 2)
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-from flink.functions.Aggregation import Sum, Min
-
-input = # [...]
-output = input.aggregate(Sum, 0).and_agg(Min, 2)
-~~~
-
-</div>
-</div>
-
-**Note:** Extending the set of supported aggregation functions is on our roadmap.
-
-### MinBy / MaxBy on full Tuple DataSet
-
-The MinBy (MaxBy) transformation selects a single tuple from a DataSet of tuples. The selected tuple is the tuple whose values of one or more specified fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary tuple of these tuples is returned.
-
-The following code shows how to select the tuple with the maximum values for the `Integer` and `Double` fields from a `DataSet<Tuple3<Integer, String, Double>>`:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple3<Integer, String, Double>> input = // [...]
-DataSet<Tuple3<Integer, String, Double>> output = input
-                                   .maxBy(0, 2); // select tuple with maximum values for first and third field.
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[(Int, String, Double)] = // [...]
-val output: DataSet[(Int, String, Double)] = input
-                                   .maxBy(0, 2) // select tuple with maximum values for first and third field.
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-### Distinct
-
-The Distinct transformation computes the DataSet of the distinct elements of the source DataSet.
-The following code removes all duplicate elements from the DataSet:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<Integer, Double>> input = // [...]
-DataSet<Tuple2<Integer, Double>> output = input.distinct();
-
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[(Int, String, Double)] = // [...]
-val output = input.distinct()
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-It is also possible to change how the distinctness of the elements in the DataSet is decided, using:
-
-- one or more field position keys (Tuple DataSets only),
-- a key-selector function, or
-- a key expression.
-
-#### Distinct with field position keys
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<Integer, Double, String>> input = // [...]
-DataSet<Tuple2<Integer, Double, String>> output = input.distinct(0,2);
-
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[(Int, Double, String)] = // [...]
-val output = input.distinct(0,2)
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-#### Distinct with KeySelector function
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-private static class AbsSelector implements KeySelector<Integer, Integer> {
-  private static final long serialVersionUID = 1L;
-
-  @Override
-  public Integer getKey(Integer t) {
-    return Math.abs(t);
-  }
-}
-DataSet<Integer> input = // [...]
-DataSet<Integer> output = input.distinct(new AbsSelector());
-
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input: DataSet[Int] = // [...]
-val output = input.distinct {x => Math.abs(x)}
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-#### Distinct with key expression
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// some ordinary POJO
-public class CustomType {
-  public String aName;
-  public int aNumber;
-  // [...]
-}
-
-DataSet<CustomType> input = // [...]
-DataSet<CustomType> output = input.distinct("aName", "aNumber");
-
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-// some ordinary POJO
-case class CustomType(aName : String, aNumber : Int) { }
-
-val input: DataSet[CustomType] = // [...]
-val output = input.distinct("aName", "aNumber")
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-It is also possible to indicate that all fields should be used, by means of the wildcard character:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<CustomType> input = // [...]
-DataSet<CustomType> output = input.distinct("*");
-
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-// some ordinary POJO
-val input: DataSet[CustomType] = // [...]
-val output = input.distinct("_")
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-### Join
-
-The Join transformation joins two DataSets into one DataSet. The elements of both DataSets are joined on one or more keys which can be specified using
-
-- a key expression,
-- a key-selector function,
-- one or more field position keys (Tuple DataSets only), or
-- case class fields.
-
-There are a few different ways to perform a Join transformation which are shown in the following.
-
-#### Default Join (Join into Tuple2)
-
-The default Join transformation produces a new Tuple DataSet with two fields. Each tuple holds a joined element of the first input DataSet in the first tuple field and a matching element of the second input DataSet in the second field.
-
-The following code shows a default Join transformation using field position keys:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-public static class User { public String name; public int zip; }
-public static class Store { public Manager mgr; public int zip; }
-DataSet<User> input1 = // [...]
-DataSet<Store> input2 = // [...]
-// result dataset is typed as Tuple2
-DataSet<Tuple2<User, Store>>
-            result = input1.join(input2)
-                           .where("zip")       // key of the first input (users)
-                           .equalTo("zip");    // key of the second input (stores)
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input1: DataSet[(Int, String)] = // [...]
-val input2: DataSet[(Double, Int)] = // [...]
-val result = input1.join(input2).where(0).equalTo(1)
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- result = input1.join(input2).where(0).equal_to(1)
-~~~
-
-</div>
-</div>
-
-#### Join with Join Function
-
-A Join transformation can also call a user-defined join function to process joining tuples.
-A join function receives one element of the first input DataSet and one element of the second input DataSet and returns exactly one element.
-
-The following code performs a join of a DataSet of custom Java objects with a Tuple DataSet, using key expressions, and shows how to use a user-defined join function:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// some POJO
-public class Rating {
-  public String name;
-  public String category;
-  public int points;
-}
-
-// Join function that joins a custom POJO with a Tuple
-public class PointWeighter
-         implements JoinFunction<Rating, Tuple2<String, Double>, Tuple2<String, Double>> {
-
-  @Override
-  public Tuple2<String, Double> join(Rating rating, Tuple2<String, Double> weight) {
-    // multiply the points and rating and construct a new output tuple
-    return new Tuple2<String, Double>(rating.name, rating.points * weight.f1);
-  }
-}
-
-DataSet<Rating> ratings = // [...]
-DataSet<Tuple2<String, Double>> weights = // [...]
-DataSet<Tuple2<String, Double>>
-            weightedRatings =
-            ratings.join(weights)
-
-                   // key of the first input
-                   .where("category")
-
-                   // key of the second input
-                   .equalTo("f0")
-
-                   // applying the JoinFunction on joining pairs
-                   .with(new PointWeighter());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-case class Rating(name: String, category: String, points: Int)
-
-val ratings: DataSet[Rating] = // [...]
-val weights: DataSet[(String, Double)] = // [...]
-
-val weightedRatings = ratings.join(weights).where("category").equalTo(0) {
-  (rating, weight) => (rating.name, rating.points * weight._2)
-}
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- class PointWeighter(JoinFunction):
-   def join(self, rating, weight):
-     return (rating[0], rating[1] * weight[1])
-
- weightedRatings = ratings.join(weights).where(0).equal_to(0) \
-   .using(PointWeighter())
-~~~
-
-</div>
-</div>
-
-#### Join with Flat-Join Function
-
-Analogous to Map and FlatMap, a FlatJoin behaves in the same
-way as a Join, but instead of returning one element, it can
-return (collect) zero, one, or more elements.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-public class PointWeighter
-         implements FlatJoinFunction<Rating, Tuple2<String, Double>, Tuple2<String, Double>> {
-  @Override
-  public void join(Rating rating, Tuple2<String, Double> weight,
-	  Collector<Tuple2<String, Double>> out) {
-	if (weight.f1 > 0.1) {
-		out.collect(new Tuple2<String, Double>(rating.name, rating.points * weight.f1));
-	}
-  }
-}
-
-DataSet<Tuple2<String, Double>>
-            weightedRatings =
-            ratings.join(weights) // [...]
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-case class Rating(name: String, category: String, points: Int)
-
-val ratings: DataSet[Ratings] = // [...]
-val weights: DataSet[(String, Double)] = // [...]
-
-val weightedRatings = ratings.join(weights).where("category").equalTo(0) {
-  (rating, weight, out: Collector[(String, Double)]) =>
-    if (weight._2 > 0.1) out.collect((rating.name, rating.points * weight._2))
-}
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-#### Join with Projection (Java/Python Only)
-
-A Join transformation can construct result tuples using a projection as shown here:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
-DataSet<Tuple2<Integer, Double>> input2 = // [...]
-DataSet<Tuple4<Integer, String, Double, Byte>>
-            result =
-            input1.join(input2)
-                  // key definition on first DataSet using a field position key
-                  .where(0)
-                  // key definition of second DataSet using a field position key
-                  .equalTo(0)
-                  // select and reorder fields of matching tuples
-                  .projectFirst(0,2).projectSecond(1).projectFirst(1);
-~~~
-
-`projectFirst(int...)` and `projectSecond(int...)` select the fields of the first and second joined input that should be assembled into an output Tuple. The order of indexes defines the order of fields in the output tuple.
-The join projection works also for non-Tuple DataSets. In this case, `projectFirst()` or `projectSecond()` must be called without arguments to add a joined element to the output Tuple.
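-
-As a minimal sketch of the non-Tuple case, reusing the `Rating` POJO from above (the `weights` DataSet is assumed for illustration):
-
-~~~java
-DataSet<Rating> ratings = // [...]
-DataSet<Tuple2<String, Double>> weights = // [...]
-// the whole Rating element becomes the first field of the output Tuple
-DataSet<Tuple2<Rating, Double>> result =
-            ratings.join(weights)
-                   .where("category")
-                   .equalTo(0)
-                   .projectFirst().projectSecond(1);
-~~~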
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-Not supported.
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- result = input1.join(input2).where(0).equal_to(0) \
-  .project_first(0,2).project_second(1).project_first(1)
-~~~
-
-`project_first(int...)` and `project_second(int...)` select the fields of the first and second joined input that should be assembled into an output Tuple. The order of indexes defines the order of fields in the output tuple.
-The join projection works also for non-Tuple DataSets. In this case, `project_first()` or `project_second()` must be called without arguments to add a joined element to the output Tuple.
-
-</div>
-</div>
-
-#### Join with DataSet Size Hint
-
-In order to guide the optimizer to pick the right execution strategy, you can hint the size of a DataSet to join as shown here:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<Integer, String>> input1 = // [...]
-DataSet<Tuple2<Integer, String>> input2 = // [...]
-
-DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, String>>>
-            result1 =
-            // hint that the second DataSet is very small
-            input1.joinWithTiny(input2)
-                  .where(0)
-                  .equalTo(0);
-
-DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, String>>>
-            result2 =
-            // hint that the second DataSet is very large
-            input1.joinWithHuge(input2)
-                  .where(0)
-                  .equalTo(0);
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input1: DataSet[(Int, String)] = // [...]
-val input2: DataSet[(Int, String)] = // [...]
-
-// hint that the second DataSet is very small
-val result1 = input1.joinWithTiny(input2).where(0).equalTo(0)
-
-// hint that the second DataSet is very large
-val result2 = input1.joinWithHuge(input2).where(0).equalTo(0)
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-
- #hint that the second DataSet is very small
- result1 = input1.join_with_tiny(input2).where(0).equal_to(0)
-
- #hint that the second DataSet is very large
- result2 = input1.join_with_huge(input2).where(0).equal_to(0)
-
-~~~
-
-</div>
-</div>
-
-#### Join Algorithm Hints
-
-The Flink runtime can execute joins in various ways. Each possible way outperforms the others under
-different circumstances. The system tries to pick a reasonable way automatically, but allows you
-to manually pick a strategy, in case you want to enforce a specific way of executing the join.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<SomeType> input1 = // [...]
-DataSet<AnotherType> input2 = // [...]
-
-DataSet<Tuple2<SomeType, AnotherType>> result =
-      input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
-            .where("id").equalTo("key");
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input1: DataSet[SomeType] = // [...]
-val input2: DataSet[AnotherType] = // [...]
-
-// hint to broadcast the first input and build a hash table from it
-val result1 = input1.join(input2, JoinHint.BROADCAST_HASH_FIRST).where("id").equalTo("key")
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-The following hints are available:
-
-* `OPTIMIZER_CHOOSES`: Equivalent to not giving a hint at all, leaves the choice to the system.
-
-* `BROADCAST_HASH_FIRST`: Broadcasts the first input and builds a hash table from it, which is
-  probed by the second input. A good strategy if the first input is very small.
-
-* `BROADCAST_HASH_SECOND`: Broadcasts the second input and builds a hash table from it, which is
-  probed by the first input. A good strategy if the second input is very small.
-
-* `REPARTITION_HASH_FIRST`: The system partitions (shuffles) each input (unless the input is already
-  partitioned) and builds a hash table from the first input. This strategy is good if the first
-  input is smaller than the second, but both inputs are still large.
-  *Note:* This is the default fallback strategy that the system uses if no size estimates can be made
-  and no pre-existing partitions and sort-orders can be re-used.
-
-* `REPARTITION_HASH_SECOND`: The system partitions (shuffles) each input (unless the input is already
-  partitioned) and builds a hash table from the second input. This strategy is good if the second
-  input is smaller than the first, but both inputs are still large.
-
-* `REPARTITION_SORT_MERGE`: The system partitions (shuffles) each input (unless the input is already
-  partitioned) and sorts each input (unless it is already sorted). The inputs are joined by
-  a streamed merge of the sorted inputs. This strategy is good if one or both of the inputs are
-  already sorted.
-
-
-### OuterJoin
-
-The OuterJoin transformation performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a `null` value for the other input) are given to a `JoinFunction` to turn the pair of elements into a single element, or to a `FlatJoinFunction` to turn the pair of elements into arbitrarily many (including none) elements.
-
-The elements of both DataSets are joined on one or more keys which can be specified using
-
-- a key expression,
-- a key-selector function,
-- one or more field position keys (Tuple DataSets only), or
-- case class fields.
-
-**OuterJoins are only supported for the Java and Scala DataSet API.**
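-
-The three variants differ only in the entry method. As a minimal sketch with placeholder types (`MyJoinFunction` is a stand-in for any join function as described below):
-
-~~~java
-DataSet<SomeType> input1 = // [...]
-DataSet<AnotherType> input2 = // [...]
-
-// preserve unmatched elements of the left input, the right input, or both
-input1.leftOuterJoin(input2).where("id").equalTo("key").with(new MyJoinFunction());
-input1.rightOuterJoin(input2).where("id").equalTo("key").with(new MyJoinFunction());
-input1.fullOuterJoin(input2).where("id").equalTo("key").with(new MyJoinFunction());
-~~~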
-
-
-#### OuterJoin with Join Function
-
-An OuterJoin transformation calls a user-defined join function to process joining tuples.
-A join function receives one element of the first input DataSet and one element of the second input DataSet and returns exactly one element. Depending on the type of the outer join (left, right, full), one of the two input elements of the join function can be `null`.
-
-The following code performs a left outer join of a Tuple DataSet with a DataSet of custom Java objects, using key expressions, and shows how to use a user-defined join function:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// some POJO
-public class Rating {
-  public String name;
-  public String category;
-  public int points;
-}
-
-// Join function that joins a custom POJO with a Tuple
-public class PointAssigner
-         implements JoinFunction<Tuple2<String, String>, Rating, Tuple2<String, Integer>> {
-
-  @Override
-  public Tuple2<String, Integer> join(Tuple2<String, String> movie, Rating rating) {
-    // Assigns the rating points to the movie.
-    // NOTE: rating might be null
-    return new Tuple2<String, Integer>(movie.f0, rating == null ? -1 : rating.points);
-  }
-}
-
-DataSet<Tuple2<String, String>> movies = // [...]
-DataSet<Rating> ratings = // [...]
-DataSet<Tuple2<String, Integer>>
-            moviesWithPoints =
-            movies.leftOuterJoin(ratings)
-
-                   // key of the first input
-                   .where("f0")
-
-                   // key of the second input
-                   .equalTo("name")
-
-                   // applying the JoinFunction on joining pairs
-                   .with(new PointAssigner());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-case class Rating(name: String, category: String, points: Int)
-
-val movies: DataSet[(String, String)] = // [...]
-val ratings: DataSet[Rating] = // [...]
-
-val moviesWithPoints = movies.leftOuterJoin(ratings).where(0).equalTo("name") {
-  (movie, rating) => (movie._1, if (rating == null) -1 else rating.points)
-}
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-#### OuterJoin with Flat-Join Function
-
-Analogous to Map and FlatMap, an OuterJoin with flat-join function behaves in the same
-way as an OuterJoin with join function, but instead of returning one element, it can
-return (collect) zero, one, or more elements.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-public class PointAssigner
-         implements FlatJoinFunction<Tuple2<String, String>, Rating, Tuple2<String, Integer>> {
-  @Override
-  public void join(Tuple2<String, String> movie, Rating rating,
-                   Collector<Tuple2<String, Integer>> out) {
-    if (rating == null) {
-      out.collect(new Tuple2<String, Integer>(movie.f0, -1));
-    } else if (rating.points < 10) {
-      out.collect(new Tuple2<String, Integer>(movie.f0, rating.points));
-    } else {
-      // do not emit
-    }
-  }
-}
-
-DataSet<Tuple2<String, Integer>>
-            moviesWithPoints =
-            movies.leftOuterJoin(ratings) // [...]
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-Not supported.
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-#### Join Algorithm Hints
-
-The Flink runtime can execute outer joins in various ways. Each possible way outperforms the others under
-different circumstances. The system tries to pick a reasonable way automatically, but allows you
-to manually pick a strategy, in case you want to enforce a specific way of executing the outer join.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<SomeType> input1 = // [...]
-DataSet<AnotherType> input2 = // [...]
-
-DataSet<Tuple2<SomeType, AnotherType>> result1 =
-      input1.leftOuterJoin(input2, JoinHint.REPARTITION_SORT_MERGE)
-            .where("id").equalTo("key");
-
-DataSet<Tuple2<SomeType, AnotherType>> result2 =
-      input1.rightOuterJoin(input2, JoinHint.BROADCAST_HASH_FIRST)
-            .where("id").equalTo("key");
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input1: DataSet[SomeType] = // [...]
-val input2: DataSet[AnotherType] = // [...]
-
-// hint to execute the outer join with a repartition sort-merge strategy
-val result1 = input1.leftOuterJoin(input2, JoinHint.REPARTITION_SORT_MERGE).where("id").equalTo("key")
-
-val result2 = input1.rightOuterJoin(input2, JoinHint.BROADCAST_HASH_FIRST).where("id").equalTo("key")
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-The following hints are available:
-
-* `OPTIMIZER_CHOOSES`: Equivalent to not giving a hint at all, leaves the choice to the system.
-
-* `BROADCAST_HASH_FIRST`: Broadcasts the first input and builds a hash table from it, which is
-  probed by the second input. A good strategy if the first input is very small.
-
-* `BROADCAST_HASH_SECOND`: Broadcasts the second input and builds a hash table from it, which is
-  probed by the first input. A good strategy if the second input is very small.
-
-* `REPARTITION_HASH_FIRST`: The system partitions (shuffles) each input (unless the input is already
-  partitioned) and builds a hash table from the first input. This strategy is good if the first
-  input is smaller than the second, but both inputs are still large.
-
-* `REPARTITION_HASH_SECOND`: The system partitions (shuffles) each input (unless the input is already
-  partitioned) and builds a hash table from the second input. This strategy is good if the second
-  input is smaller than the first, but both inputs are still large.
-
-* `REPARTITION_SORT_MERGE`: The system partitions (shuffles) each input (unless the input is already
-  partitioned) and sorts each input (unless it is already sorted). The inputs are joined by
-  a streamed merge of the sorted inputs. This strategy is good if one or both of the inputs are
-  already sorted.
-
-**NOTE:** Not all execution strategies are supported by every outer join type yet. The supported hints per type are listed below, followed by a short sketch.
-
-* `LeftOuterJoin` supports:
-  * `OPTIMIZER_CHOOSES`
-  * `BROADCAST_HASH_SECOND`
-  * `REPARTITION_HASH_SECOND`
-  * `REPARTITION_SORT_MERGE`
-
-* `RightOuterJoin` supports:
-  * `OPTIMIZER_CHOOSES`
-  * `BROADCAST_HASH_FIRST`
-  * `REPARTITION_HASH_FIRST`
-  * `REPARTITION_SORT_MERGE`
-
-* `FullOuterJoin` supports:
-  * `OPTIMIZER_CHOOSES`
-  * `REPARTITION_SORT_MERGE`
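-
-As a minimal sketch with placeholder types, a full outer join supports only `OPTIMIZER_CHOOSES` and `REPARTITION_SORT_MERGE`; picking the latter explicitly:
-
-~~~java
-DataSet<SomeType> input1 = // [...]
-DataSet<AnotherType> input2 = // [...]
-
-DataSet<Tuple2<SomeType, AnotherType>> result =
-      input1.fullOuterJoin(input2, JoinHint.REPARTITION_SORT_MERGE)
-            .where("id").equalTo("key");
-~~~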
-
-
-### Cross
-
-The Cross transformation combines two DataSets into one DataSet. It builds all pairwise combinations of the elements of both input DataSets, i.e., it builds a Cartesian product.
-The Cross transformation either calls a user-defined cross function on each pair of elements or outputs a Tuple2. Both modes are shown in the following.
-
-**Note:** Cross is potentially a *very* compute-intensive operation which can challenge even large compute clusters!
-
-#### Cross with User-Defined Function
-
-A Cross transformation can call a user-defined cross function. A cross function receives one element of the first input and one element of the second input and returns exactly one result element.
-
-The following code shows how to apply a Cross transformation on two DataSets using a cross function:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-public class Coord {
-  public int id;
-  public int x;
-  public int y;
-}
-
-// CrossFunction computes the Euclidean distance between two Coord objects.
-public class EuclideanDistComputer
-         implements CrossFunction<Coord, Coord, Tuple3<Integer, Integer, Double>> {
-
-  @Override
-  public Tuple3<Integer, Integer, Double> cross(Coord c1, Coord c2) {
-    // compute Euclidean distance of coordinates
-    double dist = sqrt(pow(c1.x - c2.x, 2) + pow(c1.y - c2.y, 2));
-    return new Tuple3<Integer, Integer, Double>(c1.id, c2.id, dist);
-  }
-}
-
-DataSet<Coord> coords1 = // [...]
-DataSet<Coord> coords2 = // [...]
-DataSet<Tuple3<Integer, Integer, Double>>
-            distances =
-            coords1.cross(coords2)
-                   // apply CrossFunction
-                   .with(new EuclideanDistComputer());
-~~~
-
-#### Cross with Projection
-
-A Cross transformation can also construct result tuples using a projection as shown here:
-
-~~~java
-DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
-DataSet<Tuple2<Integer, Double>> input2 = // [...]
-DataSet<Tuple4<Integer, Byte, Integer, Double>>
-            result =
-            input1.cross(input2)
-                  // select and reorder fields of matching tuples
-                  .projectSecond(0).projectFirst(1,0).projectSecond(1);
-~~~
-
-The field selection in a Cross projection works the same way as in the projection of Join results.
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-case class Coord(id: Int, x: Int, y: Int)
-
-val coords1: DataSet[Coord] = // [...]
-val coords2: DataSet[Coord] = // [...]
-
-val distances = coords1.cross(coords2) {
-  (c1, c2) =>
-    val dist = sqrt(pow(c1.x - c2.x, 2) + pow(c1.y - c2.y, 2))
-    (c1.id, c2.id, dist)
-}
-~~~
-
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- class Euclid(CrossFunction):
-   def cross(self, c1, c2):
-     return (c1[0], c2[0], sqrt(pow(c1[1] - c2[1], 2) + pow(c1[2] - c2[2], 2)))
-
- distances = coords1.cross(coords2).using(Euclid())
-~~~
-
-#### Cross with Projection
-
-A Cross transformation can also construct result tuples using a projection as shown here:
-
-~~~python
-result = input1.cross(input2).project_first(1,0).project_second(0,1)
-~~~
-
-The field selection in a Cross projection works the same way as in the projection of Join results.
-
-</div>
-</div>
-
-#### Cross with DataSet Size Hint
-
-In order to guide the optimizer to pick the right execution strategy, you can hint the size of a DataSet to cross as shown here:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<Integer, String>> input1 = // [...]
-DataSet<Tuple2<Integer, String>> input2 = // [...]
-
-DataSet<Tuple4<Integer, String, Integer, String>>
-            udfResult =
-                  // hint that the second DataSet is very small
-            input1.crossWithTiny(input2)
-                  // apply any Cross function (or projection)
-                  .with(new MyCrosser());
-
-DataSet<Tuple3<Integer, Integer, String>>
-            projectResult =
-                  // hint that the second DataSet is very large
-            input1.crossWithHuge(input2)
-                  // apply a projection (or any Cross function)
-                  .projectFirst(0,1).projectSecond(1);
-~~~
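-
-`MyCrosser` is left undefined above; a hypothetical CrossFunction matching the shown result type could look like this:
-
-~~~java
-// hypothetical CrossFunction that concatenates the fields of both tuples
-public class MyCrosser implements
-         CrossFunction<Tuple2<Integer, String>, Tuple2<Integer, String>,
-                       Tuple4<Integer, String, Integer, String>> {
-
-  @Override
-  public Tuple4<Integer, String, Integer, String> cross(
-      Tuple2<Integer, String> val1, Tuple2<Integer, String> val2) {
-    return new Tuple4<>(val1.f0, val1.f1, val2.f0, val2.f1);
-  }
-}
-~~~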
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val input1: DataSet[(Int, String)] = // [...]
-val input2: DataSet[(Int, String)] = // [...]
-
-// hint that the second DataSet is very small
-val result1 = input1.crossWithTiny(input2)
-
-// hint that the second DataSet is very large
-val result2 = input1.crossWithHuge(input2)
-
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- #hint that the second DataSet is very small
- result1 = input1.cross_with_tiny(input2)
-
- #hint that the second DataSet is very large
- result2 = input1.cross_with_huge(input2)
-
-~~~
-
-</div>
-</div>
-
-### CoGroup
-
-The CoGroup transformation jointly processes groups of two DataSets. Both DataSets are grouped on a defined key and groups of both DataSets that share the same key are handed together to a user-defined co-group function. If for a specific key only one DataSet has a group, the co-group function is called with this group and an empty group.
-A co-group function can separately iterate over the elements of both groups and return an arbitrary number of result elements.
-
-Similar to Reduce, GroupReduce, and Join, keys can be defined using the different key-selection methods.
-
-#### CoGroup on DataSets
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-The example shows how to group by field position keys (Tuple DataSets only). You can do the same with POJO types and key expressions.
-
-~~~java
-// Some CoGroupFunction definition
-class MyCoGrouper
-         implements CoGroupFunction<Tuple2<String, Integer>, Tuple2<String, Double>, Double> {
-
-  @Override
-  public void coGroup(Iterable<Tuple2<String, Integer>> iVals,
-                      Iterable<Tuple2<String, Double>> dVals,
-                      Collector<Double> out) {
-
-    Set<Integer> ints = new HashSet<Integer>();
-
-    // add all Integer values in group to set
-    for (Tuple2<String, Integer> val : iVals) {
-      ints.add(val.f1);
-    }
-
-    // multiply each Double value with each unique Integer value of the group
-    for (Tuple2<String, Double> val : dVals) {
-      for (Integer i : ints) {
-        out.collect(val.f1 * i);
-      }
-    }
-  }
-}
-
-// [...]
-DataSet<Tuple2<String, Integer>> iVals = // [...]
-DataSet<Tuple2<String, Double>> dVals = // [...]
-DataSet<Double> output = iVals.coGroup(dVals)
-                         // group first DataSet on first tuple field
-                         .where(0)
-                         // group second DataSet on first tuple field
-                         .equalTo(0)
-                         // apply CoGroup function on each pair of groups
-                         .with(new MyCoGrouper());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val iVals: DataSet[(String, Int)] = // [...]
-val dVals: DataSet[(String, Double)] = // [...]
-
-val output = iVals.coGroup(dVals).where(0).equalTo(0) {
-  (iVals, dVals, out: Collector[Double]) =>
-    val ints = iVals.map(_._2).toSet
-
-    for (dVal <- dVals) {
-      for (i <- ints) {
-        out.collect(dVal._2 * i)
-      }
-    }
-}
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- class CoGroup(CoGroupFunction):
-   def co_group(self, ivals, dvals, collector):
-     ints = dict()
-     # add all Integer values in group to set
-     for value in ivals:
-       ints[value[1]] = 1
-     # multiply each Double value with each unique Integer value of the group
-     for value in dvals:
-       for i in ints.keys():
-         collector.collect(value[1] * i)
-
-
- output = ivals.co_group(dvals).where(0).equal_to(0).using(CoGroup())
-~~~
-
-</div>
-</div>
-
-
-### Union
-
-Produces the union of two DataSets, which have to be of the same type. A union of more than two DataSets can be implemented with multiple union calls, as shown here:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<String, Integer>> vals1 = // [...]
-DataSet<Tuple2<String, Integer>> vals2 = // [...]
-DataSet<Tuple2<String, Integer>> vals3 = // [...]
-DataSet<Tuple2<String, Integer>> unioned = vals1.union(vals2).union(vals3);
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val vals1: DataSet[(String, Int)] = // [...]
-val vals2: DataSet[(String, Int)] = // [...]
-val vals3: DataSet[(String, Int)] = // [...]
-
-val unioned = vals1.union(vals2).union(vals3)
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
- unioned = vals1.union(vals2).union(vals3)
-~~~
-
-</div>
-</div>
-
-### Rebalance
-Evenly rebalances the parallel partitions of a DataSet to eliminate data skew.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<String> in = // [...]
-// rebalance DataSet and apply a Map transformation.
-DataSet<Tuple2<String, String>> out = in.rebalance()
-                                        .map(new Mapper());
-~~~
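-
-`Mapper` is a placeholder; a hypothetical MapFunction matching the shown types (the key/value pairing is an assumption made for illustration):
-
-~~~java
-// hypothetical MapFunction turning each String into a key/value pair
-public class Mapper implements MapFunction<String, Tuple2<String, String>> {
-  @Override
-  public Tuple2<String, String> map(String value) {
-    return new Tuple2<>(value, value.toUpperCase());
-  }
-}
-~~~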
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val in: DataSet[String] = // [...]
-// rebalance DataSet and apply a Map transformation.
-val out = in.rebalance().map { ... }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-
-### Hash-Partition
-
-Hash-partitions a DataSet on a given key.
-Keys can be specified as position keys, expression keys, and key selector functions (see [Reduce examples](#reduce-on-grouped-dataset) for how to specify keys).
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<String, Integer>> in = // [...]
-// hash-partition DataSet by String value and apply a MapPartition transformation.
-DataSet<Tuple2<String, String>> out = in.partitionByHash(0)
-                                        .mapPartition(new PartitionMapper());
-~~~
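-
-`PartitionMapper` is a placeholder, reused in the Range-Partition and Sort Partition examples below; a hypothetical implementation matching the shown types:
-
-~~~java
-// hypothetical MapPartitionFunction: forwards each element with a stringified value
-public class PartitionMapper
-         implements MapPartitionFunction<Tuple2<String, Integer>, Tuple2<String, String>> {
-  @Override
-  public void mapPartition(Iterable<Tuple2<String, Integer>> values,
-                           Collector<Tuple2<String, String>> out) {
-    for (Tuple2<String, Integer> t : values) {
-      out.collect(new Tuple2<>(t.f0, Integer.toString(t.f1)));
-    }
-  }
-}
-~~~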
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val in: DataSet[(String, Int)] = // [...]
-// hash-partition DataSet by String value and apply a MapPartition transformation.
-val out = in.partitionByHash(0).mapPartition { ... }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-### Range-Partition
-
-Range-partitions a DataSet on a given key.
-Keys can be specified as position keys, expression keys, and key selector functions (see [Reduce examples](#reduce-on-grouped-dataset) for how to specify keys).
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<String, Integer>> in = // [...]
-// range-partition DataSet by String value and apply a MapPartition transformation.
-DataSet<Tuple2<String, String>> out = in.partitionByRange(0)
-                                        .mapPartition(new PartitionMapper());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val in: DataSet[(String, Int)] = // [...]
-// range-partition DataSet by String value and apply a MapPartition transformation.
-val out = in.partitionByRange(0).mapPartition { ... }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-
-### Sort Partition
-
-Locally sorts all partitions of a DataSet on a specified field in a specified order.
-Fields can be specified as field expressions or field positions (see [Reduce examples](#reduce-on-grouped-dataset) for how to specify keys).
-Partitions can be sorted on multiple fields by chaining `sortPartition()` calls.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<String, Integer>> in = // [...]
-// Locally sort partitions in ascending order on the second String field and
-// in descending order on the first String field.
-// Apply a MapPartition transformation on the sorted partitions.
-DataSet<Tuple2<String, String>> out = in.sortPartition(1, Order.ASCENDING)
-                                        .sortPartition(0, Order.DESCENDING)
-                                        .mapPartition(new PartitionMapper());
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val in: DataSet[(String, Int)] = // [...]
-// Locally sort partitions in ascending order on the second String field and
-// in descending order on the first String field.
-// Apply a MapPartition transformation on the sorted partitions.
-val out = in.sortPartition(1, Order.ASCENDING)
-            .sortPartition(0, Order.DESCENDING)
-            .mapPartition { ... }
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>
-
-### First-n
-
-Returns the first n (arbitrary) elements of a DataSet. First-n can be applied on a regular DataSet, a grouped DataSet, or a grouped-sorted DataSet. Grouping keys can be specified as key-selector functions or field position keys (see [Reduce examples](#reduce-on-grouped-dataset) for how to specify keys).
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-DataSet<Tuple2<String, Integer>> in = // [...]
-// Return the first five (arbitrary) elements of the DataSet
-DataSet<Tuple2<String, Integer>> out1 = in.first(5);
-
-// Return the first two (arbitrary) elements of each String group
-DataSet<Tuple2<String, Integer>> out2 = in.groupBy(0)
-                                          .first(2);
-
-// Return the first three elements of each String group ordered by the Integer field
-DataSet<Tuple2<String, Integer>> out3 = in.groupBy(0)
-                                          .sortGroup(1, Order.ASCENDING)
-                                          .first(3);
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val in: DataSet[(String, Int)] = // [...]
-// Return the first five (arbitrary) elements of the DataSet
-val out1 = in.first(5)
-
-// Return the first two (arbitrary) elements of each String group
-val out2 = in.groupBy(0).first(2)
-
-// Return the first three elements of each String group ordered by the Integer field
-val out3 = in.groupBy(0).sortGroup(1, Order.ASCENDING).first(3)
-~~~
-
-</div>
-<div data-lang="python" markdown="1">
-
-~~~python
-Not supported.
-~~~
-
-</div>
-</div>


[48/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/index.md b/docs/apis/batch/index.md
deleted file mode 100644
index ee4cf6c..0000000
--- a/docs/apis/batch/index.md
+++ /dev/null
@@ -1,2274 +0,0 @@
----
-title: "Flink DataSet API Programming Guide"
-
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 3
-top-nav-title: <strong>Batch Guide</strong> (DataSet API)
-
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-group-title: Batch Guide
-sub-nav-id: dataset_api
-sub-nav-pos: 1
-sub-nav-title: DataSet API
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-DataSet programs in Flink are regular programs that implement transformations on data sets
-(e.g., filtering, mapping, joining, grouping). The data sets are initially created from certain
-sources (e.g., by reading files, or from local collections). Results are returned via sinks, which may for
-example write the data to (distributed) files, or to standard output (for example the command line
-terminal). Flink programs run in a variety of contexts: standalone, or embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
-
-Please see [basic concepts]({{ site.baseurl }}/apis/common/index.html) for an introduction
-to the basic concepts of the Flink API.
-
-In order to create your own Flink DataSet program, we encourage you to start with the
-[anatomy of a Flink Program]({{ site.baseurl }}/apis/common/index.html#anatomy-of-a-flink-program)
-and gradually add your own
-[transformations](#dataset-transformations). The remaining sections act as references for additional
-operations and advanced features.
-
-* This will be replaced by the TOC
-{:toc}
-
-Example Program
----------------
-
-The following program is a complete, working example of WordCount. You can copy &amp; paste the code
-to run it locally. You only have to include the correct Flink library in your project
-(see Section [Linking with Flink]({{ site.baseurl }}/apis/common/index.html#linking-with-flink)) and specify the imports. Then you are ready
-to go!
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-{% highlight java %}
-public class WordCountExample {
-    public static void main(String[] args) throws Exception {
-        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-        DataSet<String> text = env.fromElements(
-            "Who's there?",
-            "I think I hear them. Stand, ho! Who's there?");
-
-        DataSet<Tuple2<String, Integer>> wordCounts = text
-            .flatMap(new LineSplitter())
-            .groupBy(0)
-            .sum(1);
-
-        wordCounts.print();
-    }
-
-    public static class LineSplitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
-        @Override
-        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
-            for (String word : line.split(" ")) {
-                out.collect(new Tuple2<String, Integer>(word, 1));
-            }
-        }
-    }
-}
-{% endhighlight %}
-
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-
-object WordCount {
-  def main(args: Array[String]) {
-
-    val env = ExecutionEnvironment.getExecutionEnvironment
-    val text = env.fromElements(
-      "Who's there?",
-      "I think I hear them. Stand, ho! Who's there?")
-
-    val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
-      .map { (_, 1) }
-      .groupBy(0)
-      .sum(1)
-
-    counts.print()
-  }
-}
-{% endhighlight %}
-</div>
-
-</div>
-
-{% top %}
-
-DataSet Transformations
------------------------
-
-Data transformations transform one or more DataSets into a new DataSet. Programs can combine
-multiple transformations into sophisticated assemblies.
-
-This section gives a brief overview of the available transformations. The [transformations
-documentation](dataset_transformations.html) has a full description of all transformations with
-examples.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>Map</strong></td>
-      <td>
-        <p>Takes one element and produces one element.</p>
-{% highlight java %}
-data.map(new MapFunction<String, Integer>() {
-  public Integer map(String value) { return Integer.parseInt(value); }
-});
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>FlatMap</strong></td>
-      <td>
-        <p>Takes one element and produces zero, one, or more elements. </p>
-{% highlight java %}
-data.flatMap(new FlatMapFunction<String, String>() {
-  public void flatMap(String value, Collector<String> out) {
-    for (String s : value.split(" ")) {
-      out.collect(s);
-    }
-  }
-});
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>MapPartition</strong></td>
-      <td>
-        <p>Transforms a parallel partition in a single function call. The function gets the partition
-        as an <code>Iterable</code> stream and can produce an arbitrary number of result values. The number of
-        elements in each partition depends on the degree-of-parallelism and previous operations.</p>
-{% highlight java %}
-data.mapPartition(new MapPartitionFunction<String, Long>() {
-  public void mapPartition(Iterable<String> values, Collector<Long> out) {
-    long c = 0;
-    for (String s : values) {
-      c++;
-    }
-    out.collect(c);
-  }
-});
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Filter</strong></td>
-      <td>
-        <p>Evaluates a boolean function for each element and retains those for which the function
-        returns true.<br/>
-
-        <strong>IMPORTANT:</strong> The system assumes that the function does not modify the elements on which the predicate is applied. Violating this assumption
-        can lead to incorrect results.
-        </p>
-{% highlight java %}
-data.filter(new FilterFunction<Integer>() {
-  public boolean filter(Integer value) { return value > 1000; }
-});
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Reduce</strong></td>
-      <td>
-        <p>Combines a group of elements into a single element by repeatedly combining two elements
-        into one. Reduce may be applied on a full data set, or on a grouped data set.</p>
-{% highlight java %}
-data.reduce(new ReduceFunction<Integer>() {
-  public Integer reduce(Integer a, Integer b) { return a + b; }
-});
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>ReduceGroup</strong></td>
-      <td>
-        <p>Combines a group of elements into one or more elements. ReduceGroup may be applied on a
-        full data set, or on a grouped data set.</p>
-{% highlight java %}
-data.reduceGroup(new GroupReduceFunction<Integer, Integer>() {
-  public void reduce(Iterable<Integer> values, Collector<Integer> out) {
-    int prefixSum = 0;
-    for (Integer i : values) {
-      prefixSum += i;
-      out.collect(prefixSum);
-    }
-  }
-});
-{% endhighlight %}
-        <p>If the reduce was applied to a grouped data set, you can specify the way that the
-        runtime executes the combine phase of the reduce via supplying a CombineHint as a second
-        parameter. The hash-based strategy should be faster in most cases, especially if the
-        number of different keys is small compared to the number of input elements (e.g. 1/10).</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Aggregate</strong></td>
-      <td>
-        <p>Aggregates a group of values into a single value. Aggregation functions can be thought of
-        as built-in reduce functions. Aggregate may be applied on a full data set, or on a grouped
-        data set.</p>
-{% highlight java %}
-DataSet<Tuple3<Integer, String, Double>> input = // [...]
-DataSet<Tuple3<Integer, String, Double>> output = input.aggregate(SUM, 0).and(MIN, 2);
-{% endhighlight %}
-        <p>You can also use short-hand syntax for minimum, maximum, and sum aggregations.</p>
-{% highlight java %}
-DataSet<Tuple3<Integer, String, Double>> input = // [...]
-DataSet<Tuple3<Integer, String, Double>> output = input.sum(0).andMin(2);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Distinct</strong></td>
-      <td>
-        <p>Returns the distinct elements of a data set. It removes the duplicate entries
-        from the input DataSet, with respect to all fields of the elements, or a subset of fields.</p>
-{% highlight java %}
-data.distinct();
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Join</strong></td>
-      <td>
-        Joins two data sets by creating all pairs of elements that are equal on their keys.
-        Optionally uses a JoinFunction to turn the pair of elements into a single element, or a
-        FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)
-        elements. See the <a href="#specifying-keys">keys section</a> to learn how to define join keys.
-{% highlight java %}
-result = input1.join(input2)
-               .where(0)       // key of the first input (tuple field 0)
-               .equalTo(1);    // key of the second input (tuple field 1)
-{% endhighlight %}
-        You can specify the way that the runtime executes the join via <i>Join Hints</i>. The hints
-        describe whether the join happens through partitioning or broadcasting, and whether it uses
-        a sort-based or a hash-based algorithm. Please refer to the
-        <a href="dataset_transformations.html#join-algorithm-hints">Transformations Guide</a> for
-        a list of possible hints and an example.<br/>
-        If no hint is specified, the system will try to make an estimate of the input sizes and
-        pick the best strategy according to those estimates.
-{% highlight java %}
-// This executes a join by broadcasting the first data set
-// using a hash table for the broadcasted data
-result = input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
-               .where(0).equalTo(1);
-{% endhighlight %}
-        Note that the join transformation works only for equi-joins. Other join types need to be expressed using OuterJoin or CoGroup.
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>OuterJoin</strong></td>
-      <td>
-        Performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a <code>null</code> value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none) elements. See the <a href="#specifying-keys">keys section</a> to learn how to define join keys.
-{% highlight java %}
-input1.leftOuterJoin(input2) // rightOuterJoin or fullOuterJoin for right or full outer joins
-      .where(0)              // key of the first input (tuple field 0)
-      .equalTo(1)            // key of the second input (tuple field 1)
-      .with(new JoinFunction<String, String, String>() {
-          public String join(String v1, String v2) {
-             // NOTE:
-             // - v2 might be null for leftOuterJoin
-             // - v1 might be null for rightOuterJoin
-             // - v1 OR v2 might be null for fullOuterJoin
-          }
-      });
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>CoGroup</strong></td>
-      <td>
-        <p>The two-dimensional variant of the reduce operation. Groups each input on one or more
-        fields and then joins the groups. The transformation function is called per pair of groups.
-        See the <a href="#specifying-keys">keys section</a> to learn how to define coGroup keys.</p>
-{% highlight java %}
-data1.coGroup(data2)
-     .where(0)
-     .equalTo(1)
-     .with(new CoGroupFunction<String, String, String>() {
-         public void coGroup(Iterable<String> in1, Iterable<String> in2, Collector<String> out) {
-           out.collect(...);
-         }
-      });
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Cross</strong></td>
-      <td>
-        <p>Builds the Cartesian product (cross product) of two inputs, creating all pairs of
-        elements. Optionally uses a CrossFunction to turn the pair of elements into a single
-        element</p>
-{% highlight java %}
-DataSet<Integer> data1 = // [...]
-DataSet<String> data2 = // [...]
-DataSet<Tuple2<Integer, String>> result = data1.cross(data2);
-{% endhighlight %}
-      <p>Note: Cross is potentially a <b>very</b> compute-intensive operation which can challenge even large compute clusters! It is advised to hint the system with the DataSet sizes by using <i>crossWithTiny()</i> and <i>crossWithHuge()</i>.</p>
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Union</strong></td>
-      <td>
-        <p>Produces the union of two data sets.</p>
-{% highlight java %}
-DataSet<String> data1 = // [...]
-DataSet<String> data2 = // [...]
-DataSet<String> result = data1.union(data2);
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Rebalance</strong></td>
-      <td>
-        <p>Evenly rebalances the parallel partitions of a data set to eliminate data skew. Only Map-like transformations may follow a rebalance transformation.</p>
-{% highlight java %}
-DataSet<String> in = // [...]
-DataSet<String> result = in.rebalance()
-                           .map(new Mapper());
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Hash-Partition</strong></td>
-      <td>
-        <p>Hash-partitions a data set on a given key. Keys can be specified as position keys, expression keys, and key selector functions.</p>
-{% highlight java %}
-DataSet<Tuple2<String,Integer>> in = // [...]
-DataSet<Integer> result = in.partitionByHash(0)
-                            .mapPartition(new PartitionMapper());
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Range-Partition</strong></td>
-      <td>
-        <p>Range-partitions a data set on a given key. Keys can be specified as position keys, expression keys, and key selector functions.</p>
-{% highlight java %}
-DataSet<Tuple2<String,Integer>> in = // [...]
-DataSet<Integer> result = in.partitionByRange(0)
-                            .mapPartition(new PartitionMapper());
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Custom Partitioning</strong></td>
-      <td>
-        <p>Manually specify a partitioning over the data.
-          <br/>
-          <i>Note</i>: This method works only on single field keys.</p>
-{% highlight java %}
-DataSet<Tuple2<String,Integer>> in = // [...]
-DataSet<Integer> result = in.partitionCustom(Partitioner<K> partitioner, key)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Sort Partition</strong></td>
-      <td>
-        <p>Locally sorts all partitions of a data set on a specified field in a specified order.
-          Fields can be specified as tuple positions or field expressions.
-          Sorting on multiple fields is done by chaining sortPartition() calls.</p>
-{% highlight java %}
-DataSet<Tuple2<String,Integer>> in = // [...]
-DataSet<Integer> result = in.sortPartition(1, Order.ASCENDING)
-                            .mapPartition(new PartitionMapper());
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>First-n</strong></td>
-      <td>
-        <p>Returns the first n (arbitrary) elements of a data set. First-n can be applied on a regular data set, a grouped data set, or a grouped-sorted data set. Grouping keys can be specified as key-selector functions or field position keys.</p>
-{% highlight java %}
-DataSet<Tuple2<String,Integer>> in = // [...]
-// regular data set
-DataSet<Tuple2<String,Integer>> result1 = in.first(3);
-// grouped data set
-DataSet<Tuple2<String,Integer>> result2 = in.groupBy(0)
-                                            .first(3);
-// grouped-sorted data set
-DataSet<Tuple2<String,Integer>> result3 = in.groupBy(0)
-                                            .sortGroup(1, Order.ASCENDING)
-                                            .first(3);
-{% endhighlight %}
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-----------
-
-The following transformations are available on data sets of Tuples:
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td><strong>Project</strong></td>
-      <td>
-        <p>Selects a subset of fields from the tuples</p>
-{% highlight java %}
-DataSet<Tuple3<Integer, Double, String>> in = // [...]
-DataSet<Tuple2<String, Integer>> out = in.project(2,0);
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>MinBy / MaxBy</strong></td>
-      <td>
-        <p>Selects a tuple from a group of tuples whose values of one or more fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary tuple of these tuples is returned. MinBy (MaxBy) may be applied on a full data set or a grouped data set.</p>
-{% highlight java %}
-DataSet<Tuple3<Integer, Double, String>> in = // [...]
-// a DataSet with a single tuple with minimum values for the Integer and String fields.
-DataSet<Tuple3<Integer, Double, String>> out = in.minBy(0, 2);
-// a DataSet with one tuple for each group with the minimum value for the Double field.
-DataSet<Tuple3<Integer, Double, String>> out2 = in.groupBy(2)
-                                                  .minBy(1);
-{% endhighlight %}
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-</div>
-<div data-lang="scala" markdown="1">
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>Map</strong></td>
-      <td>
-        <p>Takes one element and produces one element.</p>
-{% highlight scala %}
-data.map { x => x.toInt }
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>FlatMap</strong></td>
-      <td>
-        <p>Takes one element and produces zero, one, or more elements. </p>
-{% highlight scala %}
-data.flatMap { str => str.split(" ") }
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>MapPartition</strong></td>
-      <td>
-        <p>Transforms a parallel partition in a single function call. The function gets the partition
-        as an `Iterator` and can produce an arbitrary number of result values. The number of
-        elements in each partition depends on the degree-of-parallelism and previous operations.</p>
-{% highlight scala %}
-data.mapPartition { in => in map { (_, 1) } }
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Filter</strong></td>
-      <td>
-        <p>Evaluates a boolean function for each element and retains those for which the function
-        returns true.<br/>
-        <strong>IMPORTANT:</strong> The system assumes that the function does not modify the element on which the predicate is applied.
-        Violating this assumption can lead to incorrect results.</p>
-{% highlight scala %}
-data.filter { _ > 1000 }
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Reduce</strong></td>
-      <td>
-        <p>Combines a group of elements into a single element by repeatedly combining two elements
-        into one. Reduce may be applied on a full data set, or on a grouped data set.</p>
-{% highlight scala %}
-data.reduce { _ + _ }
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>ReduceGroup</strong></td>
-      <td>
-        <p>Combines a group of elements into one or more elements. ReduceGroup may be applied on a
-        full data set, or on a grouped data set.</p>
-{% highlight scala %}
-data.reduceGroup { elements => elements.sum }
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Aggregate</strong></td>
-      <td>
-        <p>Aggregates a group of values into a single value. Aggregation functions can be thought of
-        as built-in reduce functions. Aggregate may be applied on a full data set, or on a grouped
-        data set.</p>
-{% highlight scala %}
-val input: DataSet[(Int, String, Double)] = // [...]
-val output: DataSet[(Int, String, Double)] = input.aggregate(SUM, 0).aggregate(MIN, 2)
-{% endhighlight %}
-  <p>You can also use short-hand syntax for minimum, maximum, and sum aggregations.</p>
-{% highlight scala %}
-val input: DataSet[(Int, String, Double)] = // [...]
-val output: DataSet[(Int, String, Double)] = input.sum(0).min(2)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Distinct</strong></td>
-      <td>
-        <p>Returns the distinct elements of a data set. It removes the duplicate entries
-        from the input DataSet, with respect to all fields of the elements, or a subset of fields.</p>
-{% highlight scala %}
-data.distinct()
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Join</strong></td>
-      <td>
-        Joins two data sets by creating all pairs of elements that are equal on their keys.
-        Optionally uses a JoinFunction to turn the pair of elements into a single element, or a
-        FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)
-        elements. See the <a href="#specifying-keys">keys section</a> to learn how to define join keys.
-{% highlight scala %}
-// In this case tuple fields are used as keys. "0" is the join field on the first tuple
-// "1" is the join field on the second tuple.
-val result = input1.join(input2).where(0).equalTo(1)
-{% endhighlight %}
-        You can specify the way that the runtime executes the join via <i>Join Hints</i>. The hints
-        describe whether the join happens through partitioning or broadcasting, and whether it uses
-        a sort-based or a hash-based algorithm. Please refer to the
-        <a href="dataset_transformations.html#join-algorithm-hints">Transformations Guide</a> for
-        a list of possible hints and an example.<br/>
-        If no hint is specified, the system will try to make an estimate of the input sizes and
-        pick the best strategy according to those estimates.
-{% highlight scala %}
-// This executes a join by broadcasting the first data set
-// using a hash table for the broadcasted data
-val result = input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
-                   .where(0).equalTo(1)
-{% endhighlight %}
-          Note that the join transformation works only for equi-joins. Other join types need to be expressed using OuterJoin or CoGroup.
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>OuterJoin</strong></td>
-      <td>
-        Performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a `null` value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none) elements. See the <a href="#specifying-keys">keys section</a> to learn how to define join keys.
-{% highlight scala %}
-val joined = left.leftOuterJoin(right).where(0).equalTo(1) {
-   (left, right) =>
-     val a = if (left == null) "none" else left._1
-     (a, right)
-  }
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>CoGroup</strong></td>
-      <td>
-        <p>The two-dimensional variant of the reduce operation. Groups each input on one or more
-        fields and then joins the groups. The transformation function is called per pair of groups.
-        See the <a href="#specifying-keys">keys section</a> to learn how to define coGroup keys.</p>
-{% highlight scala %}
-data1.coGroup(data2).where(0).equalTo(1)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Cross</strong></td>
-      <td>
-        <p>Builds the Cartesian product (cross product) of two inputs, creating all pairs of
-        elements. Optionally uses a CrossFunction to turn the pair of elements into a single
-        element</p>
-{% highlight scala %}
-val data1: DataSet[Int] = // [...]
-val data2: DataSet[String] = // [...]
-val result: DataSet[(Int, String)] = data1.cross(data2)
-{% endhighlight %}
-        <p>Note: Cross is potentially a <b>very</b> compute-intensive operation which can challenge even large compute clusters! It is advised to hint the system with the DataSet sizes by using <i>crossWithTiny()</i> and <i>crossWithHuge()</i>.</p>
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Union</strong></td>
-      <td>
-        <p>Produces the union of two data sets.</p>
-{% highlight scala %}
-data.union(data2)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Rebalance</strong></td>
-      <td>
-        <p>Evenly rebalances the parallel partitions of a data set to eliminate data skew. Only Map-like transformations may follow a rebalance transformation.</p>
-{% highlight scala %}
-val data1: DataSet[Int] = // [...]
-val result: DataSet[(Int, String)] = data1.rebalance().map(...)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Hash-Partition</strong></td>
-      <td>
-        <p>Hash-partitions a data set on a given key. Keys can be specified as position keys, expression keys, and key selector functions.</p>
-{% highlight scala %}
-val in: DataSet[(Int, String)] = // [...]
-val result = in.partitionByHash(0).mapPartition { ... }
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Range-Partition</strong></td>
-      <td>
-        <p>Range-partitions a data set on a given key. Keys can be specified as position keys, expression keys, and key selector functions.</p>
-{% highlight scala %}
-val in: DataSet[(Int, String)] = // [...]
-val result = in.partitionByRange(0).mapPartition { ... }
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Custom Partitioning</strong></td>
-      <td>
-        <p>Manually specify a partitioning over the data.
-          <br/>
-          <i>Note</i>: This method works only on single field keys.</p>
-{% highlight scala %}
-val in: DataSet[(Int, String)] = // [...]
-val result = in
-  .partitionCustom(partitioner: Partitioner[K], key)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Sort Partition</strong></td>
-      <td>
-        <p>Locally sorts all partitions of a data set on a specified field in a specified order.
-          Fields can be specified as tuple positions or field expressions.
-          Sorting on multiple fields is done by chaining sortPartition() calls.</p>
-{% highlight scala %}
-val in: DataSet[(Int, String)] = // [...]
-val result = in.sortPartition(1, Order.ASCENDING).mapPartition { ... }
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>First-n</strong></td>
-      <td>
-        <p>Returns the first n (arbitrary) elements of a data set. First-n can be applied on a regular data set, a grouped data set, or a grouped-sorted data set. Grouping keys can be specified as key-selector functions,
-        tuple positions or case class fields.</p>
-{% highlight scala %}
-val in: DataSet[(Int, String)] = // [...]
-// regular data set
-val result1 = in.first(3)
-// grouped data set
-val result2 = in.groupBy(0).first(3)
-// grouped-sorted data set
-val result3 = in.groupBy(0).sortGroup(1, Order.ASCENDING).first(3)
-{% endhighlight %}
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-----------
-
-The following transformations are available on data sets of Tuples:
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-      <td><strong>MinBy / MaxBy</strong></td>
-      <td>
-        <p>Selects a tuple from a group of tuples whose values of one or more fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary tuple of these tuples is returned. MinBy (MaxBy) may be applied on a full data set or a grouped data set.</p>
-{% highlight scala %}
-val in: DataSet[(Int, Double, String)] = // [...]
-// a data set with a single tuple with minimum values for the Int and String fields.
-val out: DataSet[(Int, Double, String)] = in.minBy(0, 2)
-// a data set with one tuple for each group with the minimum value for the Double field.
-val out2: DataSet[(Int, Double, String)] = in.groupBy(2)
-                                             .minBy(1)
-{% endhighlight %}
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-Extraction from tuples, case classes and collections via anonymous pattern matching, like the following:
-{% highlight scala %}
-val data: DataSet[(Int, String, Double)] = // [...]
-data.map {
-  case (id, name, temperature) => // [...]
-}
-{% endhighlight %}
-is not supported by the API out-of-the-box. To use this feature, you should use a <a href="../scala_api_extensions.html">Scala API extension</a>.
-
-</div>
-</div>
-
-The [parallelism]({{ site.baseurl }}/apis/common/index.html#parallel-execution) of a transformation can be defined by `setParallelism(int)` while
-`name(String)` assigns a custom name to a transformation which is helpful for debugging. The same is
-possible for [Data Sources](#data-sources) and [Data Sinks](#data-sinks).
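-
-For example (a minimal sketch in the Java API; `MyMapper` is an illustrative user-defined MapFunction):
-
-{% highlight java %}
-DataSet<String> input = // [...]
-
-DataSet<String> result = input
-    .map(new MyMapper())    // MyMapper is illustrative
-    .setParallelism(4)      // execute this map with a parallelism of 4
-    .name("my mapper");     // custom name for this transformation, helpful for debugging
-{% endhighlight %}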
-
-`withParameters(Configuration)` passes Configuration objects, which can be accessed from the `open()` method inside the user function.
-
-{% top %}
-
-Data Sources
-------------
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-Data sources create the initial data sets, such as from files or from Java collections. The general
-mechanism of creating data sets is abstracted behind an
-{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/io/InputFormat.java "InputFormat"%}.
-Flink comes
-with several built-in formats to create data sets from common file formats. Many of them have
-shortcut methods on the *ExecutionEnvironment*.
-
-File-based:
-
-- `readTextFile(path)` / `TextInputFormat` - Reads files line-wise and returns them as Strings.
-
-- `readTextFileWithValue(path)` / `TextValueInputFormat` - Reads files line-wise and returns them as
-  StringValues. StringValues are mutable strings.
-
-- `readCsvFile(path)` / `CsvInputFormat` - Parses files of comma (or another char) delimited fields.
-  Returns a DataSet of tuples or POJOs. Supports the basic Java types and their Value counterparts as field
-  types.
-
-- `readFileOfPrimitives(path, Class)` / `PrimitiveInputFormat` - Parses files of new-line (or another char sequence)
-  delimited primitive data types such as `String` or `Integer`.
-
-- `readFileOfPrimitives(path, delimiter, Class)` / `PrimitiveInputFormat` - Parses files of new-line (or another char sequence)
-   delimited primitive data types such as `String` or `Integer` using the given delimiter.
-
-- `readHadoopFile(FileInputFormat, Key, Value, path)` / `FileInputFormat` - Creates a JobConf and reads a file from the specified
-   path with the specified FileInputFormat, Key class and Value class and returns them as Tuple2<Key, Value>.
-
-- `readSequenceFile(Key, Value, path)` / `SequenceFileInputFormat` - Creates a JobConf and reads a file from the specified path with
-   type SequenceFileInputFormat, Key class and Value class and returns them as Tuple2<Key, Value>.
-
-
-Collection-based:
-
-- `fromCollection(Collection)` - Creates a data set from a Java `java.util.Collection`. All elements
-  in the collection must be of the same type.
-
-- `fromCollection(Iterator, Class)` - Creates a data set from an iterator. The class specifies the
-  data type of the elements returned by the iterator.
-
-- `fromElements(T ...)` - Creates a data set from the given sequence of objects. All objects must be
-  of the same type.
-
-- `fromParallelCollection(SplittableIterator, Class)` - Creates a data set from an iterator, in
-  parallel. The class specifies the data type of the elements returned by the iterator.
-
-- `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in
-  parallel.
-
-Generic:
-
-- `readFile(inputFormat, path)` / `FileInputFormat` - Accepts a file input format.
-
-- `createInput(inputFormat)` / `InputFormat` - Accepts a generic input format.
-
-**Examples**
-
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// read text file from the local file system
-DataSet<String> localLines = env.readTextFile("file:///path/to/my/textfile");
-
-// read text file from a HDFS running at nnHost:nnPort
-DataSet<String> hdfsLines = env.readTextFile("hdfs://nnHost:nnPort/path/to/my/textfile");
-
-// read a CSV file with three fields
-DataSet<Tuple3<Integer, String, Double>> csvInput = env.readCsvFile("hdfs:///the/CSV/file")
-                       .types(Integer.class, String.class, Double.class);
-
-// read a CSV file with five fields, taking only two of them
-DataSet<Tuple2<String, Double>> csvInput = env.readCsvFile("hdfs:///the/CSV/file")
-                               .includeFields("10010")  // take the first and the fourth field
-                               .types(String.class, Double.class);
-
-// read a CSV file with three fields into a POJO (Person.class) with corresponding fields
-DataSet<Person> csvInput = env.readCsvFile("hdfs:///the/CSV/file")
-                         .pojoType(Person.class, "name", "age", "zipcode");
-
-
-// read a file from the specified path of type TextInputFormat
-DataSet<Tuple2<LongWritable, Text>> tuples =
- env.readHadoopFile(new TextInputFormat(), LongWritable.class, Text.class, "hdfs://nnHost:nnPort/path/to/file");
-
-// read a file from the specified path of type SequenceFileInputFormat
-DataSet<Tuple2<IntWritable, Text>> tuples =
- env.readSequenceFile(IntWritable.class, Text.class, "hdfs://nnHost:nnPort/path/to/file");
-
-// creates a set from some given elements
-DataSet<String> value = env.fromElements("Foo", "bar", "foobar", "fubar");
-
-// generate a number sequence
-DataSet<Long> numbers = env.generateSequence(1, 10000000);
-
-// Read data from a relational database using the JDBC input format
-DataSet<Tuple2<String, Integer>> dbData =
-    env.createInput(
-      // create and configure input format
-      JDBCInputFormat.buildJDBCInputFormat()
-                     .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
-                     .setDBUrl("jdbc:derby:memory:persons")
-                     .setQuery("select name, age from persons")
-                     .finish(),
-      // specify type information for DataSet
-      new TupleTypeInfo(Tuple2.class, STRING_TYPE_INFO, INT_TYPE_INFO)
-    );
-
-// Note: Flink's program compiler needs to infer the data types of the data items which are returned
-// by an InputFormat. If this information cannot be automatically inferred, it is necessary to
-// manually provide the type information as shown in the examples above.
-{% endhighlight %}
-
-#### Configuring CSV Parsing
-
-Flink offers a number of configuration options for CSV parsing (a combined example follows the list):
-
-- `types(Class ... types)` specifies the types of the fields to parse. **It is mandatory to configure the types of the parsed fields.**
-  In case of the type class Boolean.class, "True" (case-insensitive), "False" (case-insensitive), "1" and "0" are treated as booleans.
-
-- `lineDelimiter(String del)` specifies the delimiter of individual records. The default line delimiter is the new-line character `'\n'`.
-
-- `fieldDelimiter(String del)` specifies the delimiter that separates fields of a record. The default field delimiter is the comma character `','`.
-
-- `includeFields(boolean ... flag)`, `includeFields(String mask)`, or `includeFields(long bitMask)` defines which fields to read from the input file (and which to ignore). By default the first *n* fields (as defined by the number of types in the `types()` call) are parsed.
-
-- `parseQuotedStrings(char quoteChar)` enables quoted string parsing. Strings are parsed as quoted strings if the first character of the string field is the quote character (leading or tailing whitespaces are *not* trimmed). Field delimiters within quoted strings are ignored. Quoted string parsing fails if the last character of a quoted string field is not the quote character or if the quote character appears at some point which is not the start or the end of the quoted string field (unless the quote character is escaped using '\'). If quoted string parsing is enabled and the first character of the field is *not* the quoting string, the string is parsed as unquoted string. By default, quoted string parsing is disabled.
-
-- `ignoreComments(String commentPrefix)` specifies a comment prefix. All lines that start with the specified comment prefix are not parsed and ignored. By default, no lines are ignored.
-
-- `ignoreInvalidLines()` enables lenient parsing, i.e., lines that cannot be correctly parsed are ignored. By default, lenient parsing is disabled and invalid lines raise an exception.
-
-- `ignoreFirstLine()` configures the InputFormat to ignore the first line of the input file. By default no line is ignored.
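-
-A minimal sketch combining several of these options (the file path and field types are illustrative):
-
-{% highlight java %}
-DataSet<Tuple2<String, Double>> csvInput = env.readCsvFile("hdfs:///the/CSV/file")
-                               .fieldDelimiter("|")    // fields are separated by "|"
-                               .ignoreComments("#")    // skip lines starting with "#"
-                               .ignoreFirstLine()      // skip a header line
-                               .includeFields("101")   // read the first and the third field
-                               .types(String.class, Double.class);
-{% endhighlight %}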
-
-
-#### Recursive Traversal of the Input Path Directory
-
-For file-based inputs, when the input path is a directory, nested files are not enumerated by default. Instead, only the files inside the base directory are read, while nested files are ignored. Recursive enumeration of nested files can be enabled through the `recursive.file.enumeration` configuration parameter, as in the following example.
-
-{% highlight java %}
-// enable recursive enumeration of nested input files
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// create a configuration object
-Configuration parameters = new Configuration();
-
-// set the recursive enumeration parameter
-parameters.setBoolean("recursive.file.enumeration", true);
-
-// pass the configuration to the data source
-DataSet<String> logs = env.readTextFile("file:///path/with.nested/files")
-                          .withParameters(parameters);
-{% endhighlight %}
-
-</div>
-<div data-lang="scala" markdown="1">
-
-Data sources create the initial data sets, such as from files or from Java collections. The general
-mechanism of creating data sets is abstracted behind an
-{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/io/InputFormat.java "InputFormat"%}.
-Flink comes
-with several built-in formats to create data sets from common file formats. Many of them have
-shortcut methods on the *ExecutionEnvironment*.
-
-File-based:
-
-- `readTextFile(path)` / `TextInputFormat` - Reads files line-wise and returns them as Strings.
-
-- `readTextFileWithValue(path)` / `TextValueInputFormat` - Reads files line-wise and returns them as
-  StringValues. StringValues are mutable strings.
-
-- `readCsvFile(path)` / `CsvInputFormat` - Parses files of comma (or another char) delimited fields.
-  Returns a DataSet of tuples, case class objects, or POJOs. Supports the basic Java types and their Value counterparts as field
-  types.
-
-- `readFileOfPrimitives(path, delimiter)` / `PrimitiveInputFormat` - Parses files of new-line (or another char sequence)
-  delimited primitive data types such as `String` or `Integer` using the given delimiter.
-
-- `readHadoopFile(FileInputFormat, Key, Value, path)` / `FileInputFormat` - Creates a JobConf and reads a file from the specified
-   path with the specified FileInputFormat, Key class and Value class and returns them as Tuple2<Key, Value>.
-
-- `readSequenceFile(Key, Value, path)` / `SequenceFileInputFormat` - Creates a JobConf and reads a file from the specified path with
-   type SequenceFileInputFormat, Key class and Value class and returns them as Tuple2<Key, Value>.
-
-Collection-based:
-
-- `fromCollection(Seq)` - Creates a data set from a Seq. All elements
-  in the collection must be of the same type.
-
-- `fromCollection(Iterator)` - Creates a data set from an Iterator. The data type of the
-  elements is inferred from the iterator's element type.
-
-- `fromElements(elements: _*)` - Creates a data set from the given sequence of objects. All objects
-  must be of the same type.
-
-- `fromParallelCollection(SplittableIterator)` - Creates a data set from an iterator, in
-  parallel. The data type of the elements is inferred from the iterator's element type.
-
-- `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in
-  parallel.
-
-Generic:
-
-- `readFile(inputFormat, path)` / `FileInputFormat` - Accepts a file input format.
-
-- `createInput(inputFormat)` / `InputFormat` - Accepts a generic input format.
-
-**Examples**
-
-{% highlight scala %}
-val env  = ExecutionEnvironment.getExecutionEnvironment
-
-// read text file from the local file system
-val localLines = env.readTextFile("file:///path/to/my/textfile")
-
-// read text file from a HDFS running at nnHost:nnPort
-val hdfsLines = env.readTextFile("hdfs://nnHost:nnPort/path/to/my/textfile")
-
-// read a CSV file with three fields
-val csvInput = env.readCsvFile[(Int, String, Double)]("hdfs:///the/CSV/file")
-
-// read a CSV file with five fields, taking only two of them
-val csvInput = env.readCsvFile[(String, Double)](
-  "hdfs:///the/CSV/file",
-  includedFields = Array(0, 3)) // take the first and the fourth field
-
-// CSV input can also be used with Case Classes
-case class MyCaseClass(str: String, dbl: Double)
-val csvInput = env.readCsvFile[MyCaseClass](
-  "hdfs:///the/CSV/file",
-  includedFields = Array(0, 3)) // take the first and the fourth field
-
-// read a CSV file with three fields into a POJO (Person) with corresponding fields
-val csvInput = env.readCsvFile[Person](
-  "hdfs:///the/CSV/file",
-  pojoFields = Array("name", "age", "zipcode"))
-
-// create a set from some given elements
-val values = env.fromElements("Foo", "bar", "foobar", "fubar")
-
-// generate a number sequence
-val numbers = env.generateSequence(1, 10000000);
-
-// read a file from the specified path of type TextInputFormat
-val tuples = env.readHadoopFile(new TextInputFormat, classOf[LongWritable],
- classOf[Text], "hdfs://nnHost:nnPort/path/to/file")
-
-// read a file from the specified path of type SequenceFileInputFormat
-val tuples = env.readSequenceFile(classOf[IntWritable], classOf[Text],
- "hdfs://nnHost:nnPort/path/to/file")
-
-{% endhighlight %}
-
-#### Configuring CSV Parsing
-
-Flink offers a number of configuration options for CSV parsing:
-
-- `lineDelimiter: String` specifies the delimiter of individual records. The default line delimiter is the new-line character `'\n'`.
-
-- `fieldDelimiter: String` specifies the delimiter that separates fields of a record. The default field delimiter is the comma character `','`.
-
-- `includeFields: Array[Int]` defines which fields to read from the input file (and which to ignore). By default the first *n* fields (as defined by the number of types in the `types()` call) are parsed.
-
-- `pojoFields: Array[String]` specifies the fields of a POJO that are mapped to CSV fields. Parsers for CSV fields are automatically initialized based on the type and order of the POJO fields.
-
-- `parseQuotedStrings: Character` enables quoted string parsing. Strings are parsed as quoted strings if the first character of the string field is the quote character (leading or tailing whitespaces are *not* trimmed). Field delimiters within quoted strings are ignored. Quoted string parsing fails if the last character of a quoted string field is not the quote character. If quoted string parsing is enabled and the first character of the field is *not* the quoting string, the string is parsed as unquoted string. By default, quoted string parsing is disabled.
-
-- `ignoreComments: String` specifies a comment prefix. All lines that start with the specified comment prefix are not parsed and ignored. By default, no lines are ignored.
-
-- `lenient: Boolean` enables lenient parsing, i.e., lines that cannot be correctly parsed are ignored. By default, lenient parsing is disabled and invalid lines raise an exception.
-
-- `ignoreFirstLine: Boolean` configures the InputFormat to ignore the first line of the input file. By default no line is ignored.
-
-#### Recursive Traversal of the Input Path Directory
-
-For file-based inputs, when the input path is a directory, nested files are not enumerated by default. Instead, only the files inside the base directory are read, while nested files are ignored. Recursive enumeration of nested files can be enabled through the `recursive.file.enumeration` configuration parameter, as in the following example.
-
-{% highlight scala %}
-// enable recursive enumeration of nested input files
-val env  = ExecutionEnvironment.getExecutionEnvironment
-
-// create a configuration object
-val parameters = new Configuration
-
-// set the recursive enumeration parameter
-parameters.setBoolean("recursive.file.enumeration", true)
-
-// pass the configuration to the data source
-env.readTextFile("file:///path/with.nested/files").withParameters(parameters)
-{% endhighlight %}
-
-</div>
-</div>
-
-### Read Compressed Files
-
-Flink currently supports transparent decompression of input files if these are marked with an appropriate file extension. In particular, this means that no further configuration of the input formats is necessary, and any `FileInputFormat` supports compression, including custom input formats. Please note that compressed files might not be read in parallel, thus impacting job scalability.
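-
-For example (the path is illustrative; `env` is an `ExecutionEnvironment` as in the examples above), a gzipped text file is decompressed transparently based on its extension:
-
-{% highlight java %}
-// the ".gz" extension triggers transparent decompression
-DataSet<String> lines = env.readTextFile("file:///path/to/logs.txt.gz");
-{% endhighlight %}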
-
-The following table lists the currently supported compression methods.
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Compression method</th>
-      <th class="text-left">File extensions</th>
-      <th class="text-left" style="width: 20%">Parallelizable</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>DEFLATE</strong></td>
-      <td><code>.deflate</code></td>
-      <td>no</td>
-    </tr>
-    <tr>
-      <td><strong>GZip</strong></td>
-      <td><code>.gz</code>, <code>.gzip</code></td>
-      <td>no</td>
-    </tr>
-  </tbody>
-</table>
-
-
-{% top %}
-
-Data Sinks
-----------
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-Data sinks consume DataSets and are used to store or return them. Data sink operations are described
-using an
-{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/io/OutputFormat.java "OutputFormat" %}.
-Flink comes with a variety of built-in output formats that are encapsulated behind operations on the
-DataSet:
-
-- `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are
-  obtained by calling the *toString()* method of each element.
-- `writeAsFormattedText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings
-  are obtained by calling a user-defined *format()* method for each element.
-- `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field
-  delimiters are configurable. The value for each field comes from the *toString()* method of the objects.
-- `print()` / `printToErr()` / `print(String msg)` / `printToErr(String msg)` - Prints the *toString()* value
-of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is
-prepended to the output. This can help to distinguish between different calls to *print*. If the parallelism is
-greater than 1, the output will also be prepended with the identifier of the task which produced the output.
-- `write()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports
-  custom object-to-bytes conversion.
-- `output()`/ `OutputFormat` - Most generic output method, for data sinks that are not file based
-  (such as storing the result in a database).
-
-A DataSet can be input to multiple operations. Programs can write or print a data set and at the
-same time run additional transformations on it.
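-
-For example (a minimal sketch; the path is illustrative and `LengthMapper` is a user-defined `MapFunction<String, Integer>`):
-
-{% highlight java %}
-DataSet<String> data = // [...]
-
-// the same DataSet feeds a sink and a further transformation
-data.writeAsText("file:///path/to/the/result/file");
-DataSet<Integer> lengths = data.map(new LengthMapper());
-{% endhighlight %}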
-
-**Examples**
-
-Standard data sink methods:
-
-{% highlight java %}
-// text data
-DataSet<String> textData = // [...]
-
-// write DataSet to a file on the local file system
-textData.writeAsText("file:///my/result/on/localFS");
-
-// write DataSet to a file on a HDFS with a namenode running at nnHost:nnPort
-textData.writeAsText("hdfs://nnHost:nnPort/my/result/on/localFS");
-
-// write DataSet to a file and overwrite the file if it exists
-textData.writeAsText("file:///my/result/on/localFS", WriteMode.OVERWRITE);
-
-// tuples as lines with pipe as the separator "a|b|c"
-DataSet<Tuple3<String, Integer, Double>> values = // [...]
-values.writeAsCsv("file:///path/to/the/result/file", "\n", "|");
-
-// this writes tuples in the text formatting "(a, b, c)", rather than as CSV lines
-values.writeAsText("file:///path/to/the/result/file");
-
-// this writes values as strings using a user-defined TextFormatter object
-values.writeAsFormattedText("file:///path/to/the/result/file",
-    new TextFormatter<Tuple2<Integer, Integer>>() {
-        public String format (Tuple2<Integer, Integer> value) {
-            return value.f1 + " - " + value.f0;
-        }
-    });
-{% endhighlight %}
-
-Using a custom output format:
-
-{% highlight java %}
-DataSet<Tuple3<String, Integer, Double>> myResult = // [...]
-
-// write Tuple DataSet to a relational database
-myResult.output(
-    // build and configure OutputFormat
-    JDBCOutputFormat.buildJDBCOutputFormat()
-                    .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
-                    .setDBUrl("jdbc:derby:memory:persons")
-                    .setQuery("insert into persons (name, age, height) values (?,?,?)")
-                    .finish()
-    );
-{% endhighlight %}
-
-#### Locally Sorted Output
-
-The output of a data sink can be locally sorted on specified fields in specified orders using [tuple field positions]({{ site.baseurl }}/apis/common/index.html#define-keys-for-tuples) or [field expressions]({{ site.baseurl }}/apis/common/index.html#define-keys-using-field-expressions). This works for every output format.
-
-The following examples show how to use this feature:
-
-{% highlight java %}
-
-DataSet<Tuple3<Integer, String, Double>> tData = // [...]
-DataSet<Tuple2<BookPojo, Double>> pData = // [...]
-DataSet<String> sData = // [...]
-
-// sort output on String field in ascending order
-tData.sortPartition(1, Order.ASCENDING).print();
-
-// sort output on Double field in descending and Integer field in ascending order
-tData.sortPartition(2, Order.DESCENDING).sortPartition(0, Order.ASCENDING).print();
-
-// sort output on the "author" field of nested BookPojo in descending order
-pData.sortPartition("f0.author", Order.DESCENDING).writeAsText(...);
-
-// sort output on the full tuple in ascending order
-tData.sortPartition("*", Order.ASCENDING).writeAsCsv(...);
-
-// sort atomic type (String) output in descending order
-sData.sortPartition("*", Order.DESCENDING).writeAsText(...);
-
-{% endhighlight %}
-
-Globally sorted output is not supported yet.
-
-</div>
-<div data-lang="scala" markdown="1">
-Data sinks consume DataSets and are used to store or return them. Data sink operations are described
-using an
-{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/io/OutputFormat.java "OutputFormat" %}.
-Flink comes with a variety of built-in output formats that are encapsulated behind operations on the
-DataSet:
-
-- `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are
-  obtained by calling the *toString()* method of each element.
-- `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field
-  delimiters are configurable. The value for each field comes from the *toString()* method of the objects.
-- `print()` / `printToErr()` - Prints the *toString()* value of each element on the
-  standard out / standard error stream.
-- `write()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports
-  custom object-to-bytes conversion.
-- `output()`/ `OutputFormat` - Most generic output method, for data sinks that are not file based
-  (such as storing the result in a database).
-
-A DataSet can be input to multiple operations. Programs can write or print a data set and at the
-same time run additional transformations on it.
-
-**Examples**
-
-Standard data sink methods:
-
-{% highlight scala %}
-// text data
-val textData: DataSet[String] = // [...]
-
-// write DataSet to a file on the local file system
-textData.writeAsText("file:///my/result/on/localFS")
-
-// write DataSet to a file on a HDFS with a namenode running at nnHost:nnPort
-textData.writeAsText("hdfs://nnHost:nnPort/my/result/on/localFS")
-
-// write DataSet to a file and overwrite the file if it exists
-textData.writeAsText("file:///my/result/on/localFS", WriteMode.OVERWRITE)
-
-// tuples as lines with pipe as the separator "a|b|c"
-val values: DataSet[(String, Int, Double)] = // [...]
-values.writeAsCsv("file:///path/to/the/result/file", "\n", "|")
-
-// this writes tuples in the text formatting "(a, b, c)", rather than as CSV lines
-values.writeAsText("file:///path/to/the/result/file");
-
-// this writes values as strings using a user-defined formatting
-values map { tuple => tuple._1 + " - " + tuple._2 }
-  .writeAsText("file:///path/to/the/result/file")
-{% endhighlight %}
-
-
-#### Locally Sorted Output
-
-The output of a data sink can be locally sorted on specified fields in specified orders using [tuple field positions]({{ site.baseurl }}/apis/common/index.html#define-keys-for-tuples) or [field expressions]({{ site.baseurl }}/apis/common/index.html#define-keys-using-field-expressions). This works for every output format.
-
-The following examples show how to use this feature:
-
-{% highlight scala %}
-
-val tData: DataSet[(Int, String, Double)] = // [...]
-val pData: DataSet[(BookPojo, Double)] = // [...]
-val sData: DataSet[String] = // [...]
-
-// sort output on String field in ascending order
-tData.sortPartition(1, Order.ASCENDING).print()
-
-// sort output on Double field in descending and Int field in ascending order
-tData.sortPartition(2, Order.DESCENDING).sortPartition(0, Order.ASCENDING).print()
-
-// sort output on the "author" field of nested BookPojo in descending order
-pData.sortPartition("_1.author", Order.DESCENDING).writeAsText(...);
-
-// sort output on the full tuple in ascending order
-tData.sortPartition("_", Order.ASCENDING).writeAsCsv(...);
-
-// sort atomic type (String) output in descending order
-sData.sortPartition("_", Order.DESCENDING).writeAsText(...);
-
-{% endhighlight %}
-
-Globally sorted output is not supported yet.
-
-</div>
-</div>
-
-{% top %}
-
-
-Iteration Operators
--------------------
-
-Iterations implement loops in Flink programs. The iteration operators encapsulate a part of the
-program and execute it repeatedly, feeding back the result of one iteration (the partial solution)
-into the next iteration. There are two types of iterations in Flink: **BulkIteration** and
-**DeltaIteration**.
-
-This section provides quick examples on how to use both operators. Check out the [Introduction to
-Iterations](iterations.html) page for a more detailed introduction.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-#### Bulk Iterations
-
-To create a BulkIteration, call the `iterate(int)` method of the DataSet the iteration should start
-at. This will return an `IterativeDataSet`, which can be transformed with the regular operators. The
-single argument to the iterate call specifies the maximum number of iterations.
-
-To specify the end of an iteration, call the `closeWith(DataSet)` method on the `IterativeDataSet` to
-specify which transformation should be fed back to the next iteration. You can optionally specify a
-termination criterion with `closeWith(DataSet, DataSet)`, which evaluates the second DataSet and
-terminates the iteration if this DataSet is empty. If no termination criterion is specified, the
-iteration terminates after the given maximum number of iterations.
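-
-A minimal sketch of a termination criterion (the names are illustrative: `input` is a
-`DataSet<Double>` and `StepFunction` a user-defined `MapFunction`); the iteration stops early
-once no value is above the threshold:
-
-{% highlight java %}
-IterativeDataSet<Double> iterative = input.iterate(100);
-
-DataSet<Double> next = iterative.map(new StepFunction());
-
-// termination criterion: the iteration continues only while this set is non-empty
-DataSet<Double> criterion = next.filter(new FilterFunction<Double>() {
-    public boolean filter(Double value) { return value > 0.5; }
-});
-
-DataSet<Double> result = iterative.closeWith(next, criterion);
-{% endhighlight %}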
-
-The following example iteratively estimates the number Pi. The goal is to count the number of random
-points that fall into the unit circle. In each iteration, a random point is picked. If this point
-lies inside the unit circle, we increment the count. Pi is then estimated as the resulting count
-divided by the number of iterations, multiplied by 4.
-
-{% highlight java %}
-final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// Create initial IterativeDataSet
-IterativeDataSet<Integer> initial = env.fromElements(0).iterate(10000);
-
-DataSet<Integer> iteration = initial.map(new MapFunction<Integer, Integer>() {
-    @Override
-    public Integer map(Integer i) throws Exception {
-        double x = Math.random();
-        double y = Math.random();
-
-        return i + ((x * x + y * y < 1) ? 1 : 0);
-    }
-});
-
-// Iteratively transform the IterativeDataSet
-DataSet<Integer> count = initial.closeWith(iteration);
-
-count.map(new MapFunction<Integer, Double>() {
-    @Override
-    public Double map(Integer count) throws Exception {
-        return count / (double) 10000 * 4;
-    }
-}).print();
-
-env.execute("Iterative Pi Example");
-{% endhighlight %}
-
-You can also check out the
-{% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/KMeans.java "K-Means example" %},
-which uses a BulkIteration to cluster a set of unlabeled points.
-
-#### Delta Iterations
-
-Delta iterations exploit the fact that certain algorithms do not change every data point of the
-solution in each iteration.
-
-In addition to the partial solution that is fed back (called workset) in every iteration, delta
-iterations maintain state across iterations (called solution set), which can be updated through
-deltas. The result of the iterative computation is the state after the last iteration. Please refer
-to the [Introduction to Iterations](iterations.html) for an overview of the basic principle of delta
-iterations.
-
-Defining a DeltaIteration is similar to defining a BulkIteration. For delta iterations, two data
-sets form the input to each iteration (workset and solution set), and two data sets are produced as
-the result (new workset, solution set delta) in each iteration.
-
-To create a DeltaIteration, call the `iterateDelta(DataSet, int, int)` method (or `iterateDelta(DataSet,
-int, int[])` for multiple key positions) on the initial solution set. The arguments are the
-initial delta set, the maximum number of iterations and the key positions. The returned
-`DeltaIteration` object gives you access to the DataSets representing the workset and solution set
-via the methods `iteration.getWorkset()` and `iteration.getSolutionSet()`.
-
-Below is an example of the syntax of a delta iteration:
-
-{% highlight java %}
-// read the initial data sets
-DataSet<Tuple2<Long, Double>> initialSolutionSet = // [...]
-
-DataSet<Tuple2<Long, Double>> initialDeltaSet = // [...]
-
-int maxIterations = 100;
-int keyPosition = 0;
-
-DeltaIteration<Tuple2<Long, Double>, Tuple2<Long, Double>> iteration = initialSolutionSet
-    .iterateDelta(initialDeltaSet, maxIterations, keyPosition);
-
-DataSet<Tuple2<Long, Double>> candidateUpdates = iteration.getWorkset()
-    .groupBy(1)
-    .reduceGroup(new ComputeCandidateChanges());
-
-DataSet<Tuple2<Long, Double>> deltas = candidateUpdates
-    .join(iteration.getSolutionSet())
-    .where(0)
-    .equalTo(0)
-    .with(new CompareChangesToCurrent());
-
-DataSet<Tuple2<Long, Double>> nextWorkset = deltas
-    .filter(new FilterByThreshold());
-
-iteration.closeWith(deltas, nextWorkset)
-    .writeAsCsv(outputPath);
-{% endhighlight %}
-
-</div>
-<div data-lang="scala" markdown="1">
-#### Bulk Iterations
-
-To create a BulkIteration, call the `iterate(int)` method of the DataSet the iteration should start
-at and also specify a step function. The step function gets the input DataSet for the current
-iteration and must return a new DataSet. The parameter of the iterate call is the maximum number
-of iterations after which to stop.
-
-There is also the `iterateWithTermination(int)` function that accepts a step function that
-returns two DataSets: The result of the iteration step and a termination criterion. The iterations
-are stopped once the termination criterion DataSet is empty.
-
-The following example iteratively estimates the number Pi. The goal is to count the number of random
-points that fall into the unit circle. In each iteration, a random point is picked. If this point
-lies inside the unit circle, we increment the count. Pi is then estimated as the resulting count
-divided by the number of iterations, multiplied by 4.
-
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment()
-
-// Create initial DataSet
-val initial = env.fromElements(0)
-
-val count = initial.iterate(10000) { iterationInput: DataSet[Int] =>
-  val result = iterationInput.map { i =>
-    val x = Math.random()
-    val y = Math.random()
-    i + (if (x * x + y * y < 1) 1 else 0)
-  }
-  result
-}
-
-val result = count map { c => c / 10000.0 * 4 }
-
-result.print()
-
-env.execute("Iterative Pi Example");
-{% endhighlight %}
-
-You can also check out the
-{% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/clustering/KMeans.scala "K-Means example" %},
-which uses a BulkIteration to cluster a set of unlabeled points.
-
-#### Delta Iterations
-
-Delta iterations exploit the fact that certain algorithms do not change every data point of the
-solution in each iteration.
-
-In addition to the partial solution that is fed back (called workset) in every iteration, delta
-iterations maintain state across iterations (called solution set), which can be updated through
-deltas. The result of the iterative computation is the state after the last iteration. Please refer
-to the [Introduction to Iterations](iterations.html) for an overview of the basic principle of delta
-iterations.
-
-Defining a DeltaIteration is similar to defining a BulkIteration. For delta iterations, two data
-sets form the input to each iteration (workset and solution set), and two data sets are produced as
-the result (new workset, solution set delta) in each iteration.
-
-To create a DeltaIteration, call the `iterateDelta(initialWorkset, maxIterations, key)` method on the
-initial solution set. The step function takes two parameters: (solutionSet, workset), and must
-return two values: (solutionSetDelta, newWorkset).
-
-Below is an example of the syntax of a delta iteration:
-
-{% highlight scala %}
-// read the initial data sets
-val initialSolutionSet: DataSet[(Long, Double)] = // [...]
-
-val initialWorkset: DataSet[(Long, Double)] = // [...]
-
-val maxIterations = 100
-val keyPosition = 0
-
-val result = initialSolutionSet.iterateDelta(initialWorkset, maxIterations, Array(keyPosition)) {
-  (solution, workset) =>
-    val candidateUpdates = workset.groupBy(1).reduceGroup(new ComputeCandidateChanges())
-    val deltas = candidateUpdates.join(solution).where(0).equalTo(0)(new CompareChangesToCurrent())
-
-    val nextWorkset = deltas.filter(new FilterByThreshold())
-
-    (deltas, nextWorkset)
-}
-
-result.writeAsCsv(outputPath)
-
-env.execute()
-{% endhighlight %}
-
-</div>
-</div>
-
-{% top %}
-
-Operating on data objects in functions
---------------------------------------
-
-Flink's runtime exchanges data with user functions in the form of Java objects. Functions receive input objects from the runtime as method parameters and return output objects as results. Because these objects are accessed by user functions and runtime code, it is very important to understand and follow the rules about how the user code may access, i.e., read and modify, these objects.
- 
-User functions receive objects from Flink's runtime either as regular method parameters (like a `MapFunction`) or through an `Iterable` parameter (like a `GroupReduceFunction`). We refer to objects that the runtime passes to a user function as *input objects*. User functions can emit objects to the Flink runtime either as a method return value (like a `MapFunction`) or through a `Collector` (like a `FlatMapFunction`). We refer to objects which have been emitted by the user function to the runtime as *output objects*.
- 
-Flink's DataSet API features two modes that differ in how Flink's runtime creates or reuses input objects. This behavior affects the guarantees and constraints for how user functions may interact with input and output objects. The following sections define these rules and give coding guidelines to write safe user function code. 
-
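-The mode is switched on the program's `ExecutionConfig`; a minimal sketch (assuming a standard environment setup):
-
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// object reuse is disabled by default; enabling it trades safety guarantees for performance
-env.getConfig().enableObjectReuse();
-{% endhighlight %}
-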
-### Object-Reuse Disabled (DEFAULT)
-
-By default, Flink operates in object-reuse disabled mode. This mode ensures that functions always receive new input objects within a function call. The object-reuse disabled mode gives better guarantees and is safer to use. However, it comes with a certain processing overhead and might cause higher Java garbage collection activity. The following table explains how user functions may access input and output objects in object-reuse disabled mode.
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Operation</th>
-      <th class="text-center">Guarantees and Restrictions</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td><strong>Reading Input Objects</strong></td>
-      <td>
-        Within a method call, it is guaranteed that the value of an input object does not change. This includes objects served by an Iterable. For example, it is safe to collect input objects served by an Iterable in a List or Map. Note that objects may be modified after the method call returns. It is <strong>not safe</strong> to remember objects across function calls.
-      </td>
-   </tr>
-   <tr>
-      <td><strong>Modifying Input Objects</strong></td>
-      <td>You may modify input objects.</td>
-   </tr>
-   <tr>
-      <td><strong>Emitting Input Objects</strong></td>
-      <td>
-        You may emit input objects. The value of an input object may have changed after it was emitted. It is <strong>not safe</strong> to read an input object after it was emitted.
-      </td>
-   </tr>
-   <tr>
-      <td><strong>Reading Output Objects</strong></td>
-      <td>
-        An object that was given to a Collector or returned as method result might have changed its value. It is <strong>not safe</strong> to read an output object.
-      </td>
-   </tr>
-   <tr>
-      <td><strong>Modifying Output Objects</strong></td>
-      <td>You may modify an object after it was emitted and emit it again.</td>
-   </tr>
-  </tbody>
-</table>
-
-**Coding guidelines for the object-reuse disabled (default) mode:**
-
-- Do not remember and read input objects across method calls.
-- Do not read objects after you emitted them.
-
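-For example, a `MapFunction` that follows both guidelines might look like this (a minimal sketch; the tuple type is illustrative):
-
-{% highlight java %}
-public class DoubleCountMapper implements MapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>> {
-  @Override
-  public Tuple2<String, Integer> map(Tuple2<String, Integer> value) {
-    // modifying the input object is allowed in this mode
-    value.f1 = value.f1 * 2;
-    // emitting the input object is allowed, but it must not be read afterwards
-    return value;
-  }
-}
-{% endhighlight %}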
-
-### Object-Reuse Enabled
-
-In object-reuse enabled mode, Flink's runtime minimizes the number of object instantiations. This can improve the performance and can reduce the Java garbage collection pressure. The object-reuse enabled mode is activated by calling `ExecutionConfig.enableObjectReuse()`. The following table explains how user functions may access input and output objects in object-reuse enabled mode.
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Operation</th>
-      <th class="text-center">Guarantees and Restrictions</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td><strong>Reading input objects received as regular method parameters</strong></td>
-      <td>
-        Input objects received as regular method arguments are not modified within a function call. Objects may be modified after the method call returns. It is <strong>not safe</strong> to remember objects across function calls.
-      </td>
-   </tr>
-   <tr>
-      <td><strong>Reading input objects received from an Iterable parameter</strong></td>
-      <td>
-        Input objects received from an Iterable are only valid until the next() method is called. An Iterable or Iterator may serve the same object instance multiple times. It is <strong>not safe</strong> to remember input objects received from an Iterable, e.g., by putting them in a List or Map.
-      </td>
-   </tr>
-   <tr>
-      <td><strong>Modifying Input Objects</strong></td>
-      <td>You <strong>must not</strong> modify input objects, except for input objects of MapFunction, FlatMapFunction, MapPartitionFunction, GroupReduceFunction, GroupCombineFunction, CoGroupFunction, and InputFormat.next(reuse).</td>
-   </tr>
-   <tr>
-      <td><strong>Emitting Input Objects</strong></td>
-      <td>
-        You <strong>must not</strong> emit input objects, except for input objects of MapFunction, FlatMapFunction, MapPartitionFunction, GroupReduceFunction, GroupCombineFunction, CoGroupFunction, and InputFormat.next(reuse).
-      </td>
-   </tr>
-   <tr>
-      <td><strong>Reading Output Objects</strong></td>
-      <td>
-        An object that was given to a Collector or returned as method result might have changed its value. It is <strong>not safe</strong> to read an output object.
-      </td>
-   </tr>
-   <tr>
-      <td><strong>Modifying Output Objects</strong></td>
-      <td>You may modify an output object and emit it again.</td>
-   </tr>
-  </tbody>
-</table>
-
-**Coding guidelines for object-reuse enabled:**
-
-- Do not remember input objects received from an `Iterable`.
-- Do not remember and read input objects across method calls.
-- Do not modify or emit input objects, except for input objects of `MapFunction`, `FlatMapFunction`, `MapPartitionFunction`, `GroupReduceFunction`, `GroupCombineFunction`, `CoGroupFunction`, and `InputFormat.next(reuse)`.
-- To reduce object instantiations, you can always emit a dedicated output object which is repeatedly modified but never read.
-
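-For example, a minimal sketch of the dedicated-output-object pattern (the types are illustrative):
-
-{% highlight java %}
-public class ParseMapper implements MapFunction<String, Tuple2<String, Integer>> {
-  // dedicated output object: repeatedly modified, never read by the function
-  private final Tuple2<String, Integer> reuse = new Tuple2<>();
-
-  @Override
-  public Tuple2<String, Integer> map(String line) {
-    reuse.f0 = line;
-    reuse.f1 = line.length();
-    return reuse;
-  }
-}
-{% endhighlight %}
-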
-{% top %}
-
-Debugging
----------
-
-Before running a data analysis program on a large data set in a distributed cluster, it is a good
-idea to make sure that the implemented algorithm works as desired. Hence, implementing data analysis
-programs is usually an incremental process of checking results, debugging, and improving.
-
-Flink provides a few nice features to significantly ease the development process of data analysis
-programs by supporting local debugging from within an IDE, injection of test data, and collection of
-result data. This section gives some hints on how to ease the development of Flink programs.
-
-### Local Execution Environment
-
-A `LocalEnvironment` starts a Flink system within the same JVM process it was created in. If you
-start the LocalEnvironment from an IDE, you can set breakpoints in your code and easily debug your
-program.
-
-A LocalEnvironment is created and used as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
-
-DataSet<String> lines = env.readTextFile(pathToTextFile);
-// build your program
-
-env.execute();
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-
-{% highlight scala %}
-val env = ExecutionEnvironment.createLocalEnvironment()
-
-val lines = env.readTextFile(pathToTextFile)
-// build your program
-
-env.execute()
-{% endhighlight %}
-</div>
-</div>
-
-### Collection Data Sources and Sinks
-
-Providing input for an analysis program and checking its output is cumbersome when done by creating
-input files and reading output files. Flink features special data sources and sinks which are backed
-by Java collections to ease testing. Once a program has been tested, the sources and sinks can be
-easily replaced by sources and sinks that read from / write to external data stores such as HDFS.
-
-Collection data sources can be used as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
-
-// Create a DataSet from a list of elements
-DataSet<Integer> myInts = env.fromElements(1, 2, 3, 4, 5);
-
-// Create a DataSet from any Java collection
-List<Tuple2<String, Integer>> data = ...
-DataSet<Tuple2<String, Integer>> myTuples = env.fromCollection(data);
-
-// Create a DataSet from an Iterator
-Iterator<Long> longIt = ...
-DataSet<Long> myLongs = env.fromCollection(longIt, Long.class);
-{% endhighlight %}
-
-A collection data sink is specified as follows:
-
-{% highlight java %}
-DataSet<Tuple2<String, Integer>> myResult = ...
-
-List<Tuple2<String, Integer>> outData = new ArrayList<Tuple2<String, Integer>>();
-myResult.output(new LocalCollectionOutputFormat(outData));
-{% endhighlight %}
-
-**Note:** Currently, the collection data sink is restricted to local execution, as a debugging tool.
-
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.createLocalEnvironment()
-
-// Create a DataSet from a list of elements
-val myInts = env.fromElements(1, 2, 3, 4, 5)
-
-// Create a DataSet from any Collection
-val data: Seq[(String, Int)] = ...
-val myTuples = env.fromCollection(data)
-
-// Create a DataSet from an Iterator
-val longIt: Iterator[Long] = ...
-val myLongs = env.fromCollection(longIt)
-{% endhighlight %}
-</div>
-</div>
-
-**Note:** Currently, the collection data source requires that data types and iterators implement
-`Serializable`. Furthermore, collection data sources cannot be executed in parallel
-(parallelism = 1).
-
-{% top %}
-
-Semantic Annotations
------------
-
-Semantic annotations can be used to give Flink hints about the behavior of a function.
-They tell the system which fields of a function's input the function reads and evaluates and
-which fields it forwards unmodified from its input to its output.
-Semantic annotations are a powerful means to speed up execution, because they
-allow the system to reason about reusing sort orders or partitions across multiple operations. Using
-semantic annotations can spare the program unnecessary data shuffling or unnecessary
-sorts and significantly improve its performance.
-
-**Note:** The use of semantic annotations is optional. However, it is absolutely crucial to
-be conservative when providing semantic annotations!
-Incorrect semantic annotations will cause Flink to make incorrect assumptions about your program and
-might eventually lead to incorrect results.
-If the behavior of an operator is not clearly predictable, no annotation should be provided.
-Please read the documentation carefully.
-
-The following semantic annotations are currently supported.
-
-#### Forwarded Fields Annotation
-
-Forwarded fields information declares input fields which are forwarded unmodified by a function to the same position or to another position in the output.
-This information is used by the optimizer to infer whether a data property such as sorting or
-partitioning is preserved by a function.
-For functions that operate on groups of input elements such as `GroupReduce`, `GroupCombine`, `CoGroup`, and `MapPartition`, all fields that are defined as forwarded fields must always be jointly forwarded from the same input element. The forwarded fields of each element that is emitted by a group-wise function may originate from a different element of the function's input group.
-
-Field forward information is specified using [field expressions]({{ site.baseurl }}/apis/common/index.html#define-keys-using-field-expressions).
-Fields that are forwarded to the same position in the output can be specified by their position.
-The specified position must be valid for both the input and output data type, and both fields must have the same type.
-For example, the String `"f2"` declares that the third field of a Java input tuple is always equal to the third field in the output tuple.
-
-Fields which are forwarded unmodified to another position in the output are declared by specifying the
-source field in the input and the target field in the output as field expressions.
-The String `"f0->f2"` denotes that the first field of the Java input tuple is
-copied unchanged to the third field of the Java output tuple. The wildcard expression `*` can be used to refer to a whole input or output type, i.e., `"f0->*"` denotes that the output of a function is always equal to the first field of its Java input tuple.
-
-Multiple forwarded fields can be declared in a single String by separating them with semicolons as `"f0; f2->f1; f3->f2"` or in separate Strings `"f0", "f2->f1", "f3->f2"`. When specifying forwarded fields, it is not required that all forwarded fields are declared, but all declarations must be correct.
-
-Forwarded field information can be declared by attaching Java annotations on function class definitions or
-by passing them as operator arguments after invoking a function on a DataSet as shown below.
-
-##### Function Class Annotations
-
-* `@ForwardedFields` for single input functions such as Map and Reduce.
-* `@ForwardedFieldsFirst` for the first input of a function with two inputs such as Join and CoGroup.
-* `@ForwardedFieldsSecond` for the second input of a function with two inputs such as Join and CoGroup.
-
-##### Operator Arguments
-
-* `data.map(myMapFnc).withForwardedFields()` for single input functions such as Map and Reduce.
-* `data1.join(data2).where().equalTo().with(myJoinFnc).withForwardedFieldsFirst()` for the first input of a function with two inputs such as Join and CoGroup.
-* `data1.join(data2).where().equalTo().with(myJoinFnc).withForwardedFieldsSecond()` for the second input of a function with two inputs such as Join and CoGroup.
-
-Please note that it is not possible to overwrite field forward information which was specified as a class annotation by operator arguments.
-
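-As a minimal sketch, a forwarded fields declaration passed as an operator argument could look as follows (`MyScaleFunction` and the data set are illustrative):
-
-{% highlight java %}
-DataSet<Tuple2<Long, Double>> input = // [...]
-
-DataSet<Tuple2<Long, Double>> scaled = input
-    .map(new MyScaleFunction())
-    .withForwardedFields("f0"); // the first field is forwarded in place
-{% endhighlight %}
-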
-##### Example
-
-The following example shows how to declare forwarded field information using a function class annotation:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-@ForwardedFields("f0->f2")
-public class MyMap implements
-              MapFunction<Tuple2<Integer, Integer>, Tuple3<String, Integer, Integer>> {
-  @Override
-  public Tuple3<String, Integer, Integer> map(Tuple2<Integer, Integer> val) {
-    return new Tuple3<String, Integer, Integer>("foo", val.f1 / 2, val.f0);
-  }
-}
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-@ForwardedFields("_1->_3")
-class MyMap extends MapFunction[(Int, Int), (String, Int, Int)]{
-   def map(value: (Int, Int)): (String, Int, Int) = {
-    return ("foo", value._2 / 2, value._1)
-  }
-}
-{% endhighlight %}
-
-</div>
-</div>
-
-#### Non-Forwarded Fields
-
-Non-forwarded fields information declares all fields which are not preserved at the same position in a function's output.
-The values of all other fields are considered to be preserved at the same position in the output.
-Hence, non-forwarded fields information is the inverse of forwarded fields information.
-Non-forwarded field information for group-wise operators such as `GroupReduce`, `GroupCombine`, `CoGroup`, and `MapPartition` must fulfill the same requirements as forwarded field information.
-
-**IMPORTANT**: The specification of non-forwarded fields information is optional. However, if used,
-**all** non-forwarded fields must be specified, because all other fields are considered to be forwarded in place. It is safe to declare a forwarded field as non-forwarded.
-
-Non-forwarded fields are specified as a list of [field expressions]({{ site.baseurl }}/apis/common/index.html#define-keys-using-field-expressions). The list can be either given as a single String with field expressions separated by semicolons or as multiple Strings.
-For example both `"f1; f3"` and `"f1", "f3"` declare that the second and fourth field of a Java tuple
-are not preserved in place and all other fields are preserved in place.
-Non-forwarded field information can only be specified for functions which have identical input and output types.
-
-Non-forwarded field information is specified as function class annotations using the following annotations:
-
-* `@NonForwardedFields` for single input functions such as Map and Reduce.
-* `@NonForwardedFieldsFirst` for the first input of a function with two inputs such as Join and CoGroup.
-* `@NonForwardedFieldsSecond` for the second input of a function with two inputs such as Join and CoGroup.
-
-##### Example
-
-The following example shows how to declare non-forwarded field information:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-@NonForwardedFields("f1") // second field is not forwarded
-public class MyMap implements
-              MapFunction<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>> {
-  @Override
-  public Tuple2<Integer, Integer> map(Tuple2<Integer, Integer> val) {
-    return new Tuple2<Integer, Integer>(val.f0, val.f1 / 2);
-  }
-}
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-@NonForwardedFields("_2") // second field is not forwarded
-class MyMap extends MapFunction[(Int, Int), (Int, Int)]{
-  def map(value: (Int, Int)): (Int, Int) = {
-    return (value._1, value._2 / 2)
-  }
-}
-{% endhighlight %}
-
-</div>
-</div>
-
-#### Read Fields
-
-Read fields information declares all fields that are accessed and evaluated by a function, i.e.,
-all fields that are used by the function to compute its result.
-For example, fields which are evaluated in conditional statements or used for computations must be marked as read when specifying read fields information.
-Fields which are only forwarded unmodified to the output without evaluating their values, or fields which are not accessed at all, are not considered to be read.
-
-**IMPORTANT**: The specification of read fields information is optional. However, if used,
-**all** read fields must be specified. It is safe to declare a non-read field as read.
-
-Read fields are specified as a list of [field expressions]({{ site.baseurl }}/apis/common/index.html#define-keys-using-field-expressions). The list can be either given as a single String with field expressions separated by semicolons or as multiple Strings.
-For example both `"f1; f3"` and `"f1", "f3"` declare that the second and fourth field of a Java tuple are read and evaluated by the function.
-
-Read field information is specified as function class annotations using the following annotations:
-
-* `@ReadFields` for single input functions such as Map and Reduce.
-* `@ReadFieldsFirst` for the first input of a function with two inputs such as Join and CoGroup.
-* `@ReadFieldsSecond` for the second input of a function with two inputs such as Join and CoGroup.
-
-##### Example
-
-The following example shows how to declare read field information:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-@ReadFields("f0; f3") // f0 and f3 are read and evaluated by the function.
-public class MyMap implements
-              MapFunction<Tuple4<Integer, Integer, Integer, Integer>,
-                          Tuple2<Integer, Integer>> {
-  @Override
-  public Tuple2<Integer, Integer> map(Tuple4<Integer, Integer, Integer, Integer> val) {
-    if (val.f0 == 42) {
-      return new Tuple2<Integer, Integer>(val.f0, val.f1);
-    } else {
-      return new Tuple2<Integer, Integer>(val.f3 + 10, val.f1);
-    }
-  }
-}
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-@ReadFields("_1; _4") // _1 and _4 are read and evaluated by the function.
-class MyMap extends MapFunction[(Int, Int, Int, Int), (Int, Int)]{
-   def map(value: (Int, Int, Int, Int)): (Int, Int) = {
-    if (value._1 == 42) {
-      return (value._1, value._2)
-    } else {
-      return (value._4 + 10, value._2)
-    }
-  }
-}
-{% endhighlight %}
-
-</div>
-</div>
-
-{% top %}
-
-
-Broadcast Variables
--------------------
-
-Broadcast variables allow you to make a data set available to all parallel instances of an
-operation, in addition to the regular input of the operation. This is useful for auxiliary data
-sets, or data-dependent parameterization. The data set will then be accessible at the operator as a
-Collection.
-
-- **Broadcast**: broadcast sets are registered by name via `withBroadcastSet(DataSet, String)`, and
-- **Access**: accessible via `getRuntimeContext().getBroadcastVariable(String)` at the target operator.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// 1. The DataSet to be broadcast
-DataSet<Integer> toBroadcast = env.fromElements(1, 2, 3);
-
-DataSet<String> data = env.fromElements("a", "b");
-
-data.map(new RichMapFunction<String, String>() {
-    @Override
-    public void open(Configuration parameters) throws Exception {
-      // 3. Access the broadcast DataSet as a Collection
-      Collection<Integer> broadcastSet = getRuntimeContext().getBroadcastVariable("broadcastSetName");
-    }
-
-
-    @Override
-    public String map(String value) throws Exception {
-        ...
-    }
-}).withBroadcastSet(toBroadcast, "broadcastSetName"); // 2. Broadcast the DataSet
-{% endhighlight %}
-
-Make sure that the names (`broadcastSetName` in the previous example) match when registering and
-accessing broadcast data sets. For a complete example program, have a look at
-{% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/KMeans.java#L96 "K-Means Algorithm" %}.
-</div>
-<div data-lang="scala" markdown="1">
-
-{% highlight scala %}
-import scala.collection.JavaConverters._
-
-// 1. The DataSet to be broadcast
-val toBroadcast = env.fromElements(1, 2, 3)
-
-val data = env.fromElements("a", "b")
-
-data.map(new RichMapFunction[String, String]() {
-    var broadcastSet: Traversable[Int] = null
-
-    override def open(config: Configuration): Unit = {
-      // 3. Access the broadcast DataSet as a Collection
-      broadcastSet = getRuntimeContext().getBroadcastVariable[Int]("broadcastSetName").asScala
-    }
-
-    def map(in: String): String = {
-        ...
-    }
-}).withBroadcastSet(toBroadcast, "broadcastSetName") // 2. Broadcast the DataSet
-{% endhighlight %}
-
-Make sure that the names (`broadcastSetName` in the previous example) match when registering and
-accessing broadcast data sets. For a complete example program, have a look at
-{% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/clustering/KMeans.scala#L96 "KMeans Algorithm" %}.
-</div>
-</div>
-
-**Note**: As the content of broadcast variables is kept in-memory on each node, it should not become
-too large. For simple things like scalar values, you can make parameters part of the closure
-of a function, or use the `withParameters(...)` method to pass in a configuration (see the sketch below).
-
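-A minimal sketch of the `withParameters(...)` variant (the data set `toFilter` and the configuration key are illustrative):
-
-{% highlight java %}
-Configuration config = new Configuration();
-config.setInteger("limit", 16);
-
-DataSet<Integer> filtered = toFilter.filter(new RichFilterFunction<Integer>() {
-    private int limit;
-
-    @Override
-    public void open(Configuration parameters) {
-      // read the parameter that was attached to this operator
-      limit = parameters.getInteger("limit", 0);
-    }
-
-    @Override
-    public boolean filter(Integer value) {
-      return value > limit;
-    }
-}).withParameters(config);
-{% endhighlight %}
-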
-{% top %}
-
-Distributed Cache
--------------------
-
-Flink offers a distributed cache, similar to Apache Hadoop, to make files locally accessible to parallel instances of user functions. This f

<TRUNCATED>

[76/89] [abbrv] flink git commit: [FLINK-4384] [rpc] Add "scheduleRunAsync()" to the RpcEndpoint

Posted by se...@apache.org.
[FLINK-4384] [rpc] Add "scheduleRunAsync()" to the RpcEndpoint

This closes #2360
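
For context, a minimal sketch of how an endpoint subclass might use the new method
(the gateway and the heartbeat logic are illustrative, not part of this commit):

    import java.util.concurrent.TimeUnit;
    import org.apache.flink.runtime.rpc.RpcEndpoint;
    import org.apache.flink.runtime.rpc.RpcGateway;
    import org.apache.flink.runtime.rpc.RpcService;

    interface MyGateway extends RpcGateway {}

    class MyEndpoint extends RpcEndpoint<MyGateway> {

        MyEndpoint(RpcService rpcService) {
            super(rpcService);
        }

        void startHeartbeats() {
            // runs in the endpoint's main thread after ~100 ms, so no locking is needed
            scheduleRunAsync(new Runnable() {
                @Override
                public void run() {
                    // ... send a heartbeat ...
                    startHeartbeats(); // re-arm the timer
                }
            }, 100, TimeUnit.MILLISECONDS);
        }
    }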


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/df0bf944
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/df0bf944
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/df0bf944

Branch: refs/heads/flip-6
Commit: df0bf944a81f8a3aae9d4e0e5d78c11f55d15034
Parents: dc808e7
Author: Stephan Ewen <se...@apache.org>
Authored: Thu Aug 11 19:10:48 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:03 2016 +0200

----------------------------------------------------------------------
 .../flink/runtime/rpc/MainThreadExecutor.java   |   9 +
 .../apache/flink/runtime/rpc/RpcEndpoint.java   |  12 ++
 .../runtime/rpc/akka/AkkaInvocationHandler.java |  13 +-
 .../flink/runtime/rpc/akka/AkkaRpcActor.java    |  15 +-
 .../runtime/rpc/akka/messages/RunAsync.java     |  24 ++-
 .../runtime/rpc/akka/AkkaRpcServiceTest.java    |   3 +
 .../flink/runtime/rpc/akka/AsyncCallsTest.java  | 216 +++++++++++++++++++
 7 files changed, 286 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/df0bf944/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
index 882c1b7..4efb382 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
@@ -52,4 +52,13 @@ public interface MainThreadExecutor {
 	 * @return Future of the callable result
 	 */
 	<V> Future<V> callAsync(Callable<V> callable, Timeout callTimeout);
+
+	/**
+	 * Execute the runnable in the main thread of the underlying RPC endpoint, with
+	 * a delay of the given number of milliseconds.
+	 *
+	 * @param runnable Runnable to be executed
+	 * @param delay    The delay, in milliseconds, after which the runnable will be executed
+	 */
+	void scheduleRunAsync(Runnable runnable, long delay);
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/df0bf944/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
index aef0803..44933d5 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
@@ -28,6 +28,7 @@ import scala.concurrent.ExecutionContext;
 import scala.concurrent.Future;
 
 import java.util.concurrent.Callable;
+import java.util.concurrent.TimeUnit;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
@@ -168,6 +169,17 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	}
 
 	/**
+	 * Execute the runnable in the main thread of the underlying RPC endpoint, with
+	 * a delay of the given number of milliseconds.
+	 *
+	 * @param runnable Runnable to be executed
+	 * @param delay    The delay after which the runnable will be executed
+	 */
+	public void scheduleRunAsync(Runnable runnable, long delay, TimeUnit unit) {
+		((MainThreadExecutor) self).scheduleRunAsync(runnable, unit.toMillis(delay));
+	}
+
+	/**
 	 * Execute the callable in the main thread of the underlying RPC service, returning a future for
 	 * the result of the callable. If the callable is not completed within the given timeout, then
 	 * the future will be failed with a {@link java.util.concurrent.TimeoutException}.

http://git-wip-us.apache.org/repos/asf/flink/blob/df0bf944/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
index e8e383a..580b161 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
@@ -38,6 +38,9 @@ import java.lang.reflect.Method;
 import java.util.BitSet;
 import java.util.concurrent.Callable;
 
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkArgument;
+
 /**
  * Invocation handler to be used with a {@link AkkaRpcActor}. The invocation handler wraps the
  * rpc in a {@link RpcInvocation} message and then sends it to the {@link AkkaRpcActor} where it is
@@ -106,9 +109,17 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 
 	@Override
 	public void runAsync(Runnable runnable) {
+		scheduleRunAsync(runnable, 0);
+	}
+
+	@Override
+	public void scheduleRunAsync(Runnable runnable, long delay) {
+		checkNotNull(runnable, "runnable");
+		checkArgument(delay >= 0, "delay must be zero or greater");
+		
 		// Unfortunately I couldn't find a way to allow only local communication. Therefore, the
+		// runnable field is transient
-		rpcServer.tell(new RunAsync(runnable), ActorRef.noSender());
+		rpcServer.tell(new RunAsync(runnable, delay), ActorRef.noSender());
 	}
 
 	@Override

http://git-wip-us.apache.org/repos/asf/flink/blob/df0bf944/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
index 57da38a..18ccf1b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.runtime.rpc.akka;
 
+import akka.actor.ActorRef;
 import akka.actor.Status;
 import akka.actor.UntypedActor;
 import akka.pattern.Patterns;
@@ -30,9 +31,11 @@ import org.apache.flink.util.Preconditions;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
 
 import java.lang.reflect.Method;
 import java.util.concurrent.Callable;
+import java.util.concurrent.TimeUnit;
 
 /**
  * Akka rpc actor which receives {@link RpcInvocation}, {@link RunAsync} and {@link CallAsync}
@@ -152,13 +155,23 @@ class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends Untyp
 				"{} is only supported with local communication.",
 				runAsync.getClass().getName(),
 				runAsync.getClass().getName());
-		} else {
+		}
+		else if (runAsync.getDelay() == 0) {
+			// run immediately
 			try {
 				runAsync.getRunnable().run();
 			} catch (final Throwable e) {
 				LOG.error("Caught exception while executing runnable in main thread.", e);
 			}
 		}
+		else {
+			// schedule for later: send a new message after the delay, which will then be executed immediately
+			FiniteDuration delay = new FiniteDuration(runAsync.getDelay(), TimeUnit.MILLISECONDS);
+			RunAsync message = new RunAsync(runAsync.getRunnable(), 0);
+
+			getContext().system().scheduler().scheduleOnce(delay, getSelf(), message,
+					getContext().dispatcher(), ActorRef.noSender());
+		}
 	}
 
 	/**

http://git-wip-us.apache.org/repos/asf/flink/blob/df0bf944/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
index fb95852..c18906c 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
@@ -18,23 +18,39 @@
 
 package org.apache.flink.runtime.rpc.akka.messages;
 
-import org.apache.flink.util.Preconditions;
-
 import java.io.Serializable;
 
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkArgument;
+
 /**
  * Message for asynchronous runnable invocations
  */
 public final class RunAsync implements Serializable {
 	private static final long serialVersionUID = -3080595100695371036L;
 
+	/** The runnable to be executed. Transient, so it gets lost upon serialization */ 
 	private final transient Runnable runnable;
 
-	public RunAsync(Runnable runnable) {
-		this.runnable = Preconditions.checkNotNull(runnable);
+	/** The delay after which the runnable should be called */
+	private final long delay;
+
+	/**
+	 * Creates a new RunAsync message.
+	 * @param runnable  The Runnable to run.
+	 * @param delay     The delay in milliseconds. Zero indicates immediate execution.
+	 */
+	public RunAsync(Runnable runnable, long delay) {
+		checkArgument(delay >= 0);
+		this.runnable = checkNotNull(runnable);
+		this.delay = delay;
 	}
 
 	public Runnable getRunnable() {
 		return runnable;
 	}
+
+	public long getDelay() {
+		return delay;
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/df0bf944/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
index a4e1d7f..5e37e10 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
@@ -21,6 +21,9 @@ package org.apache.flink.runtime.rpc.akka;
 import akka.actor.ActorSystem;
 import akka.util.Timeout;
 import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.RpcService;
 import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
 import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
 import org.apache.flink.runtime.rpc.resourcemanager.ResourceManager;

http://git-wip-us.apache.org/repos/asf/flink/blob/df0bf944/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java
new file mode 100644
index 0000000..f2ce52d
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java
@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.ActorSystem;
+import akka.util.Timeout;
+
+import org.apache.flink.core.testutils.OneShotLatch;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.RpcMethod;
+import org.apache.flink.runtime.rpc.RpcService;
+
+import org.junit.AfterClass;
+import org.junit.Test;
+
+import scala.concurrent.Await;
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.concurrent.Callable;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.locks.ReentrantLock;
+
+import static org.junit.Assert.*;
+
+public class AsyncCallsTest {
+
+	// ------------------------------------------------------------------------
+	//  shared test members
+	// ------------------------------------------------------------------------
+
+	private static ActorSystem actorSystem = AkkaUtils.createDefaultActorSystem();
+
+	private static AkkaRpcService akkaRpcService = 
+			new AkkaRpcService(actorSystem, new Timeout(10000, TimeUnit.MILLISECONDS));
+
+	@AfterClass
+	public static void shutdown() {
+		akkaRpcService.stopService();
+		actorSystem.shutdown();
+	}
+
+
+	// ------------------------------------------------------------------------
+	//  tests
+	// ------------------------------------------------------------------------
+
+	@Test
+	public void testScheduleWithNoDelay() throws Exception {
+
+		// used to detect concurrent access to the endpoint's main thread
+		final ReentrantLock lock = new ReentrantLock();
+		final AtomicBoolean concurrentAccess = new AtomicBoolean(false);
+
+		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService, lock);
+		TestGateway gateway = testEndpoint.getSelf();
+
+		// a bunch of gateway calls
+		gateway.someCall();
+		gateway.anotherCall();
+		gateway.someCall();
+
+		// run something asynchronously
+		for (int i = 0; i < 10000; i++) {
+			testEndpoint.runAsync(new Runnable() {
+				@Override
+				public void run() {
+					boolean holdsLock = lock.tryLock();
+					if (holdsLock) {
+						lock.unlock();
+					} else {
+						concurrentAccess.set(true);
+					}
+				}
+			});
+		}
+	
+		Future<String> result = testEndpoint.callAsync(new Callable<String>() {
+			@Override
+			public String call() throws Exception {
+				boolean holdsLock = lock.tryLock();
+				if (holdsLock) {
+					lock.unlock();
+				} else {
+					concurrentAccess.set(true);
+				}
+				return "test";
+			}
+		}, new Timeout(30, TimeUnit.SECONDS));
+		String str = Await.result(result, new FiniteDuration(30, TimeUnit.SECONDS));
+		assertEquals("test", str);
+
+		// validate that no concurrent access happened
+		assertFalse("Rpc Endpoint had concurrent access", testEndpoint.hasConcurrentAccess());
+		assertFalse("Rpc Endpoint had concurrent access", concurrentAccess.get());
+
+		akkaRpcService.stopServer(testEndpoint.getSelf());
+	}
+
+	@Test
+	public void testScheduleWithDelay() throws Exception {
+
+		// used to detect concurrent access to the endpoint's main thread
+		final ReentrantLock lock = new ReentrantLock();
+		final AtomicBoolean concurrentAccess = new AtomicBoolean(false);
+		final OneShotLatch latch = new OneShotLatch();
+
+		final long delay = 200;
+
+		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService, lock);
+
+		// run something asynchronously
+		testEndpoint.runAsync(new Runnable() {
+			@Override
+			public void run() {
+				boolean holdsLock = lock.tryLock();
+				if (holdsLock) {
+					lock.unlock();
+				} else {
+					concurrentAccess.set(true);
+				}
+			}
+		});
+
+		final long start = System.nanoTime();
+
+		testEndpoint.scheduleRunAsync(new Runnable() {
+			@Override
+			public void run() {
+				boolean holdsLock = lock.tryLock();
+				if (holdsLock) {
+					lock.unlock();
+				} else {
+					concurrentAccess.set(true);
+				}
+				latch.trigger();
+			}
+		}, delay, TimeUnit.MILLISECONDS);
+
+		latch.await();
+		final long stop = System.nanoTime();
+
+		// validate that no concurrent access happened
+		assertFalse("Rpc Endpoint had concurrent access", testEndpoint.hasConcurrentAccess());
+		assertFalse("Rpc Endpoint had concurrent access", concurrentAccess.get());
+
+		assertTrue("call was not properly delayed", ((stop - start) / 1000000) >= delay);
+	}
+
+	// ------------------------------------------------------------------------
+	//  test RPC endpoint
+	// ------------------------------------------------------------------------
+	
+	interface TestGateway extends RpcGateway {
+
+		void someCall();
+
+		void anotherCall();
+	}
+
+	@SuppressWarnings("unused")
+	public static class TestEndpoint extends RpcEndpoint<TestGateway> {
+
+		private final ReentrantLock lock;
+
+		private volatile boolean concurrentAccess;
+
+		public TestEndpoint(RpcService rpcService, ReentrantLock lock) {
+			super(rpcService);
+			this.lock = lock;
+		}
+
+		@RpcMethod
+		public void someCall() {
+			boolean holdsLock = lock.tryLock();
+			if (holdsLock) {
+				lock.unlock();
+			} else {
+				concurrentAccess = true;
+			}
+		}
+
+		@RpcMethod
+		public void anotherCall() {
+			boolean holdsLock = lock.tryLock();
+			if (holdsLock) {
+				lock.unlock();
+			} else {
+				concurrentAccess = true;
+			}
+		}
+
+		public boolean hasConcurrentAccess() {
+			return concurrentAccess;
+		}
+	}
+}


[28/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/program_dataflow.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/program_dataflow.svg b/docs/concepts/fig/program_dataflow.svg
deleted file mode 100644
index 7c1ec8d..0000000
--- a/docs/concepts/fig/program_dataflow.svg
+++ /dev/null
@@ -1,546 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="632.86151"
-   height="495.70895"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(-89.288343,-87.370121)"
-     id="layer1">
-    <g
-       transform="translate(65.132093,66.963871)"
-       id="g2989">
-      <text
-         x="571.35248"
-         y="45.804131"
-         id="text2991"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="25.304533"
-         y="37.765511"
-         id="text2993"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">DataStream</text>
-      <text
-         x="107.97513"
-         y="37.765511"
-         id="text2995"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&lt;</text>
-      <text
-         x="116.22718"
-         y="37.765511"
-         id="text2997"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#5b9bd5;font-family:Courier New">String</text>
-      <text
-         x="166.0396"
-         y="37.765511"
-         id="text2999"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&gt; </text>
-      <text
-         x="182.69376"
-         y="37.765511"
-         id="text3001"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">lines = </text>
-      <text
-         x="248.86023"
-         y="37.765511"
-         id="text3003"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">env.</text>
-      <text
-         x="282.01849"
-         y="37.765511"
-         id="text3005"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">addSource</text>
-      <text
-         x="356.58704"
-         y="37.765511"
-         id="text3007"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(</text>
-      <text
-         x="282.01849"
-         y="54.269619"
-         id="text3009"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#2f5597;font-family:Courier New">new</text>
-      <text
-         x="315.17673"
-         y="54.269619"
-         id="text3011"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">FlinkKafkaConsumer</text>
-      <text
-         x="464.3139"
-         y="54.269619"
-         id="text3013"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&lt;&gt;(\u2026));</text>
-      <text
-         x="25.304533"
-         y="87.277847"
-         id="text3015"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">DataStream</text>
-      <text
-         x="107.97513"
-         y="87.277847"
-         id="text3017"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&lt;</text>
-      <text
-         x="116.22718"
-         y="87.277847"
-         id="text3019"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#5b9bd5;font-family:Courier New">Event</text>
-      <text
-         x="157.78754"
-         y="87.277847"
-         id="text3021"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&gt; events = </text>
-      <text
-         x="248.86023"
-         y="87.277847"
-         id="text3023"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">lines.</text>
-      <text
-         x="298.52261"
-         y="87.277847"
-         id="text3025"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">map</text>
-      <text
-         x="323.57883"
-         y="87.277847"
-         id="text3027"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">((line) </text>
-      <text
-         x="389.7453"
-         y="87.277847"
-         id="text3029"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#2f5597;font-family:Courier New">-</text>
-      <text
-         x="397.99738"
-         y="87.277847"
-         id="text3031"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#2f5597;font-family:Courier New">&gt;</text>
-      <text
-         x="414.50146"
-         y="87.277847"
-         id="text3033"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">parse</text>
-      <text
-         x="456.06183"
-         y="87.277847"
-         id="text3035"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(line));</text>
-      <text
-         x="25.304533"
-         y="120.28607"
-         id="text3037"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">DataStream</text>
-      <text
-         x="107.97513"
-         y="120.28607"
-         id="text3039"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&lt;</text>
-      <text
-         x="116.22718"
-         y="120.28607"
-         id="text3041"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#5b9bd5;font-family:Courier New">Statistics</text>
-      <text
-         x="199.19786"
-         y="120.28607"
-         id="text3043"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">&gt; stats = events</text>
-      <text
-         x="91.471016"
-         y="136.79018"
-         id="text3045"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">.</text>
-      <text
-         x="99.723068"
-         y="136.79018"
-         id="text3047"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">keyBy</text>
-      <text
-         x="141.13339"
-         y="136.79018"
-         id="text3049"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(</text>
-      <text
-         x="149.38544"
-         y="136.79018"
-         id="text3051"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#548235;font-family:Courier New">&quot;id&quot;</text>
-      <text
-         x="182.69376"
-         y="136.79018"
-         id="text3053"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">)</text>
-      <text
-         x="91.471016"
-         y="153.2943"
-         id="text3055"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">.</text>
-      <text
-         x="99.723068"
-         y="153.2943"
-         id="text3057"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#843c0c;font-family:Courier New">timeWindow</text>
-      <text
-         x="182.69376"
-         y="153.2943"
-         id="text3059"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(</text>
-      <text
-         x="190.94582"
-         y="153.2943"
-         id="text3061"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">Time.seconds</text>
-      <text
-         x="290.27054"
-         y="153.2943"
-         id="text3063"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">(10))</text>
-      <text
-         x="91.471016"
-         y="169.79842"
-         id="text3065"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">.</text>
-      [Remaining SVG drawing data omitted. Recoverable labels: the figure
-      pairs a sample program (a source, then map(), then
-      keyBy()/window()/apply(new MyWindowAggregationFunction()), then
-      stats.addSink(new RollingSink(path))) with the streaming dataflow it
-      translates to: a Source operator, Transformation operators, and a
-      Sink operator connected by streams, captioned "Streaming Dataflow".]
-    </g>
-  </g>
-</svg>
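
Pieced together from the figure's text labels, the depicted program reads
roughly as the sketch below. Only apply(new MyWindowAggregationFunction())
and stats.addSink(new RollingSink(path)) appear verbatim in the drawing; the
source, the map function, the key, and the window assigner are assumptions
added for illustration.

    // Sketch only; hypothetical pieces are marked in the comments.
    DataStream<String> lines = env.socketTextStream("localhost", 9999); // source: assumption
    DataStream<MyEvent> events = lines.map(new MyMapFunction());        // hypothetical names
    DataStream<MyStats> stats = events
        .keyBy("id")                                                    // key: assumption
        .window(TumblingEventTimeWindows.of(Time.seconds(10)))          // assigner: assumption
        .apply(new MyWindowAggregationFunction());                      // verbatim in the figure
    stats.addSink(new RollingSink(path));                               // verbatim in the figure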


[34/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/tumbling-windows.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/tumbling-windows.svg b/docs/apis/streaming/tumbling-windows.svg
deleted file mode 100644
index 1857076..0000000
--- a/docs/apis/streaming/tumbling-windows.svg
+++ /dev/null
@@ -1,22 +0,0 @@
-<?xml version="1.0" standalone="yes"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg version="1.1" viewBox="0.0 0.0 800.0 600.0" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><clipPath id="p.0"><path d="m0 0l800.0 0l0 600.0l-800.0 0l0 -600.0z" clip-rule="nonzero"></path></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l800.0 0l0 600.0l-800.0 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m145.49606 485.0l509.0079 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m145.49606 485.0l503.0079 0" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m648.50397 486.65173l4.538086 -1.6517334l-4.538086 -1.6517334z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m145.49606 485.0l0 -394.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" s
 troke-linejoin="round" stroke-linecap="butt" d="m145.49606 485.0l0 -388.99213" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m147.1478 96.00787l-1.6517334 -4.5380936l-1.6517334 4.5380936z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m587.0 477.0l60.0 0l0 42.992126l-60.0 0z" fill-rule="nonzero"></path><path fill="#000000" d="m600.90625 502.41998l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5426636 -10.1875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1292114 0l0 -9.859375l1.5 0l0 1.390625q0.453125 
 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290771 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.85
 9375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 133.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 159.92l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46
 875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.
 34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm17.23973 0l-1.671875 0l0 -10.640625q-0.59375 0.578125 -1.578125 1.15625q-0.984375 0.5625 -1.765625 0.859375l0
  -1.625q1.40625 -0.65625 2.453125 -1.59375q1.046875 -0.9375 1.484375 -1.8125l1.078125 0l0 13.65625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 254.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 280.91998l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.
 34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 
 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm19.724106 -1.609375l0 1.609375l-8.984375 0q-0.015625 -0.609375 0.1875 -1.15625q0.34375 -0.921875 1.09375 -1.8125q0.765625 -0.890625 2.1875 -2.0625q2.21875 -1.8125 3.0 -2.875q0.78125 -1.0625 0.78125 -2.015625q0 -0.984375 -0.71875 -1.671875q-0.703125 -0.6875 -1.
 84375 -0.6875q-1.203125 0 -1.9375 0.734375q-0.71875 0.71875 -0.71875 2.0l-1.71875 -0.171875q0.171875 -1.921875 1.328125 -2.921875q1.15625 -1.015625 3.09375 -1.015625q1.953125 0 3.09375 1.09375q1.140625 1.078125 1.140625 2.6875q0 0.8125 -0.34375 1.609375q-0.328125 0.78125 -1.109375 1.65625q-0.765625 0.859375 -2.5625 2.390625q-1.5 1.265625 -1.9375 1.71875q-0.421875 0.4375 -0.703125 0.890625l6.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 375.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 401.91998l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.
 671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.9218
 75 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.8906
 25 -0.28125 1.953125l0 5.15625l-1.671875 0zm10.958481 -3.59375l1.671875 -0.21875q0.28125 1.421875 0.96875 2.046875q0.703125 0.625 1.6875 0.625q1.1875 0 2.0 -0.8125q0.8125 -0.828125 0.8125 -2.03125q0 -1.140625 -0.765625 -1.890625q-0.75 -0.75 -1.90625 -0.75q-0.46875 0 -1.171875 0.1875l0.1875 -1.46875q0.15625 0.015625 0.265625 0.015625q1.0625 0 1.90625 -0.546875q0.859375 -0.5625 0.859375 -1.71875q0 -0.921875 -0.625 -1.515625q-0.609375 -0.609375 -1.59375 -0.609375q-0.96875 0 -1.625 0.609375q-0.640625 0.609375 -0.828125 1.84375l-1.671875 -0.296875q0.296875 -1.6875 1.375 -2.609375q1.09375 -0.921875 2.71875 -0.921875q1.109375 0 2.046875 0.484375q0.9375 0.46875 1.421875 1.296875q0.5 0.828125 0.5 1.75q0 0.890625 -0.46875 1.609375q-0.46875 0.71875 -1.40625 1.15625q1.21875 0.265625 1.875 1.15625q0.671875 0.875 0.671875 2.1875q0 1.78125 -1.296875 3.015625q-1.296875 1.234375 -3.28125 1.234375q-1.796875 0 -2.984375 -1.0625q-1.171875 -1.0625 -1.34375 -2.765625z" fill-rule="nonzero"></path><path fi
 ll="#9900ff" d="m177.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.000473 6.714737 2.7813263c1.7808533 1.7808685 2.7813263 4.196228 2.7813263 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m203.49606 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.000473 6.714737 2.7813263c1.7808533 1.7808685 2.7813263 4.196228 2.7813263 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m290.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.4960
 63 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m323.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m348.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m373.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.4
 96063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m442.50394 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m469.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m492.50394 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416
  6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m524.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496033 -9.496063l0 0c2.5185547 0 4.933899 1.000473 6.7147827 2.7813263c1.7808228 1.7808685 2.781311 4.196228 2.781311 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496094 9.496063l0 0c-5.244507 0 -9.496033 -4.251526 -9.496033 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m603.0079 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933838 1.000473 6.7147217 2.7813263c1.7808228 1.7808685 2.781311 4.196228 2.781311 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244568 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m374.97638 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.781341
 6c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m401.47244 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m209.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.0004883 6.714737 2.7813416c1.7808533 1.7808533 2.7813263 4.1961975 2.7813263 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m242.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.
 496063l0 0c2.518509 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m267.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m292.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m568.48
 03 275.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m594.9764 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496033 -9.496063l0 0c2.5185547 0 4.933899 1.0004883 6.7147827 2.7813416c1.7808228 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496094 9.496063l0 0c-5.244507 0 -9.496033 -4.251526 -9.496033 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m618.4803 275.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fi
 ll-rule="nonzero"></path><path fill="#9900ff" d="m477.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m487.99213 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m514.48816 396.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.4960
 63l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m185.76378 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.0004883 6.714737 2.7813416c1.7808533 1.7808533 2.7813263 4.1961975 2.7813263 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m265.0 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m291.49606 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.78134
 16 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m315.0 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m558.01575 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933838 1.0004883 6.7147217 2.7813416c1.7808228 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244568 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m173.0 111.0l0 354.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" st
 roke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m173.0 111.0l0 354.99213" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m266.59973 110.00787l0 354.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m266.59973 110.00787l0 354.99213" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m360.19946 110.00787l0 354.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m360.19946 110.00787l0 354.99213" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m453.79922 110.00787l0 354.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m453.79922 110.00787l0 354.99213" fill-rule="nonzero"></path><path fill="#000000" 
 fill-opacity="0.0" d="m547.3989 111.0l0 354.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m547.3989 111.0l0 354.99213" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m640.99866 111.0l0 354.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m640.99866 111.0l0 354.99213" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m186.75 86.0l102.99213 0l0 38.992126l-102.99213 0z" fill-rule="nonzero"></path><path fill="#000000" d="m197.84375 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.3907
 78 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.
 609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.156
 25l-1.359375 5.1875l-1.203125 0zm16.07814 0l-1.140625 0l0 -7.28125q-0.421875 0.390625 -1.09375 0.796875q-0.65625 0.390625 -1.1875 0.578125l0 -1.109375q0.953125 -0.4375 1.671875 -1.078125q0.71875 -0.640625 1.015625 -1.25l0.734375 0l0 9.34375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m278.97604 86.0l102.99213 0l0 38.992126l-102.99213 0z" fill-rule="nonzero"></path><path fill="#000000" d="m290.0698 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390778 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.
 578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.
 890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm17.78128 -1.09375l0 1.09375l-6.15625 0q-0.015625 -0.40625 0.140625 -0.796875q0.234375 -0.625 0.75 -1.234375q0.515625 -0.609375 1.5 -1.40625q1.515625 -1.25 2.046875 -1.96875q0.53125 -0.734375 0.53125 -1.375q0 -0.6875 -0.484
 375 -1.140625q-0.484375 -0.46875 -1.265625 -0.46875q-0.828125 0 -1.328125 0.5q-0.484375 0.484375 -0.5 1.359375l-1.171875 -0.125q0.125 -1.3125 0.90625 -2.0q0.78125 -0.6875 2.109375 -0.6875q1.34375 0 2.125 0.75q0.78125 0.734375 0.78125 1.828125q0 0.5625 -0.234375 1.109375q-0.21875 0.53125 -0.75 1.140625q-0.53125 0.59375 -1.765625 1.625q-1.03125 0.859375 -1.328125 1.171875q-0.28125 0.3125 -0.46875 0.625l4.5625 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m372.83102 86.0l102.99213 0l0 38.992126l-102.99213 0z" fill-rule="nonzero"></path><path fill="#000000" d="m383.92477 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390778 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375
 l1.140625 0l0 6.734375l-1.140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.656
 25 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm11.7812805 -2.453125l1.140625 -0.15625q0.203125 0
 .96875 0.671875 1.40625q0.46875 0.421875 1.15625 0.421875q0.796875 0 1.34375 -0.546875q0.5625 -0.5625 0.5625 -1.390625q0 -0.796875 -0.515625 -1.296875q-0.5 -0.515625 -1.296875 -0.515625q-0.328125 0 -0.8125 0.125l0.125 -1.0q0.125 0.015625 0.1875 0.015625q0.734375 0 1.3125 -0.375q0.59375 -0.390625 0.59375 -1.1875q0 -0.625 -0.4375 -1.03125q-0.421875 -0.421875 -1.09375 -0.421875q-0.671875 0 -1.109375 0.421875q-0.4375 0.421875 -0.578125 1.25l-1.140625 -0.203125q0.21875 -1.140625 0.953125 -1.765625q0.75 -0.640625 1.84375 -0.640625q0.765625 0 1.40625 0.328125q0.640625 0.328125 0.984375 0.890625q0.34375 0.5625 0.34375 1.203125q0 0.59375 -0.328125 1.09375q-0.328125 0.5 -0.953125 0.78125q0.8125 0.203125 1.265625 0.796875q0.46875 0.59375 0.46875 1.5q0 1.21875 -0.890625 2.078125q-0.890625 0.84375 -2.25 0.84375q-1.21875 0 -2.03125 -0.734375q-0.8125 -0.734375 -0.921875 -1.890625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m467.55054 86.0l102.99213 0l0 38.992126l-102.99
 213 0z" fill-rule="nonzero"></path><path fill="#000000" d="m478.6443 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390778 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.4
 84375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-
 0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm15.4375 0l0 -2.234375l-4.03125 0l0 -1.046875l4.234375 -6.03125l0.9375 0l0 6.03125l1.265625 0l0 1.046875l-1.265625 0l0 2.234375l-1.140625 0zm0 -3.28125l0 -4.1875l-2.921875 4.1875l2.921875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m559.92816 86.0l102.99213 0l0 38.992126l-102.99213 0z" fill-rule="nonzero"></path><path fill="#000000" d="m571.0219 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375
  0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390747 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.9611206 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1
 .0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0
 .375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm11.78125 -2.4375l1.1875 -0.109375q0.140625 0.890625 0.625 1.328125q0.484375 0.4375 1.171875 0.4375q0.828125 0 1.390625 -0.625q0.578125 -0.625 0.578125 -1.640625q0 -0.984375 -0.546875 -1.546875q-0.546875 -0.5625 -1.4375 -0.5625q-0.5625 0 -1.015625 0.25q-0.4375 0.25 -0.6875 0.640625l-1.0625 -0.140625l0.890625 -4.765625l4.625 0l0 1.078125l-3.703125 0l-0.5 2.5q0.828125 -0.578125 1.75 -0.578125q1.21875 0 2.046875 0.84375q0.84375 0.84375 0.84375 2.171875q0 1.265625 -0.734375 2.1875q-0.890625 1.125 -2.4375 1.125q-1.265625 0 -2.078125 -0.703125q-0.796875 -0.71875 -0.90625 -1.890625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m371.0 446.0l72.0 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m374.4271 446.0l65.14581 0" fill-rule="evenodd"></path><path f
 ill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m374.4271 446.0l1.1245728 -1.1245728l-3.0897522 1.1245728l3.0897522 1.1245728z" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m439.5729 446.0l-1.1245728 1.1245728l3.0897522 -1.1245728l-3.0897522 -1.1245728z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m390.0 515.0l10.015747 -57.007874" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m390.0 515.0l8.977539 -51.09839" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m400.6043 464.18744l-0.8415222 -4.7554626l-2.4121094 4.1838074z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m330.77167 507.0l123.40155 0l0 42.992126l-123.40155 0z" fill-rule="nonzero"></path><path fill="#000000" d="m342.8498 533.92l-3.015625 -9.859375l1.71875
  0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.660431 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1292114 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.766327 0l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.8
 4375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm8.641357 0q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm11.110077 4.921875l-3.01562
 5 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm16.15625 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0
 .203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.2542114 0l0 -1.359375l6.265625 -7.1875q-1.0625 0.046875 -1.875 0.046875l-4.015625 0l0 -1.359375l8.046875 0l0 1.109375l-5.34375 6.25l-1.015625 1.140625q1.109375 -0.078125 2.09375 -0.078125l4.5625 0l0 1.4375l-8.71875 0zm16.953125 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.
 4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path></g></svg>
-

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/windows.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/windows.md b/docs/apis/streaming/windows.md
deleted file mode 100644
index b9847a5..0000000
--- a/docs/apis/streaming/windows.md
+++ /dev/null
@@ -1,678 +0,0 @@
----
-title: "Windows"
-
-sub-nav-id: windows
-sub-nav-group: streaming
-sub-nav-pos: 4
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Flink uses a concept called *windows* to divide a (potentially) infinite `DataStream` into finite
-slices based on the timestamps of elements or other criteria. This division is required when working
-with infinite streams of data and performing transformations that aggregate elements.
-
-<span class="label label-info">Info</span> We will mostly talk about *keyed windowing* here, i.e.
-windows that are applied on a `KeyedStream`. Keyed windows have the advantage that elements are
-subdivided based on both window and key before being given to
-a user function. The work can thus be distributed across the cluster
-because the elements for different keys can be processed independently. If you absolutely have to,
-you can check out [non-keyed windowing](#non-keyed-windowing) where we describe how non-keyed
-windows work.
-
-* This will be replaced by the TOC
-{:toc}
-
-## Basics
-
-For a windowed transformation you must at least specify a *key*
-(see [specifying keys](/apis/common/index.html#specifying-keys)),
-a *window assigner* and a *window function*. The *key* divides the infinite, non-keyed, stream
-into logical keyed streams while the *window assigner* assigns elements to finite per-key windows.
-Finally, the *window function* is used to process the elements of each window.
-
-The basic structure of a windowed transformation is thus as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<T> input = ...;
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[T] = ...
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .<windowed transformation>(<window function>)
-{% endhighlight %}
-</div>
-</div>
-
-We will cover [window assigners](#window-assigners) in a separate section below.
-
-The windowed transformation can be one of `reduce()`, `fold()` or `apply()`, which take a
-`ReduceFunction`, `FoldFunction` or `WindowFunction`, respectively. We describe each of these ways
-of specifying a windowed transformation in detail below: [window functions](#window-functions).
-
-For more advanced use cases you can also specify a `Trigger` that determines when exactly a window
-is considered *ready for processing*. Triggers are covered in more detail in
-[triggers](#triggers).
-
-## Window Assigners
-
-The window assigner specifies how elements of the stream are divided into finite slices. Flink comes
-with pre-implemented window assigners for the most typical use cases, namely *tumbling windows*,
-*sliding windows*, *session windows* and *global windows*, but you can implement your own by
-extending the `WindowAssigner` class. All the built-in window assigners, except for the global
-windows one, assign elements to windows based on time, which can either be processing time or event
-time. Please take a look at our section on [event time](/apis/streaming/event_time.html) for more
-information about how Flink deals with time.
-
-Let's first look at how each of these window assigners works before looking at how they can be used
-in a Flink program. We will be using abstract figures to visualize the workings of each assigner:
-in the following, the purple circles are elements of the stream; they are partitioned
-by some key (in this case *user 1*, *user 2* and *user 3*) and the x-axis shows the progress
-of time.
-
-### Global Windows
-
-Global windows are a way of specifying that we don't want to subdivide our elements into windows.
-Each element is assigned to a single per-key *global window*.
-This windowing scheme is only useful if you also specify a custom [trigger](#triggers); see the
-sketch below. Otherwise, no computation will ever be performed, as the global window does not have
-a natural end at which we could process the aggregated elements.
-
-<img src="non-windowed.svg" class="center" style="width: 80%;" />
-
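-As a quick sketch: combining `GlobalWindows` with the built-in `CountTrigger` (see
-[triggers](#triggers)) wrapped in a `PurgingTrigger` (a built-in wrapper that clears the window
-contents after each firing, so that state does not grow forever) emits a result for every 1000
-elements per key. The count of 1000 is an arbitrary choice for this example:
-
-{% highlight java %}
-DataStream<T> input = ...;
-
-// fire for every 1000 elements per key; the PurgingTrigger clears the
-// window contents after each firing
-input
-    .keyBy(<key selector>)
-    .window(GlobalWindows.create())
-    .trigger(PurgingTrigger.of(CountTrigger.of(1000)))
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-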
-### Tumbling Windows
-
-A *tumbling windows* assigner assigns elements to fixed-length, non-overlapping windows of a
-specified *window size*. For example, if you specify a window size of 5 minutes, the window
-function will get 5 minutes' worth of elements in each invocation.
-
-<img src="tumbling-windows.svg" class="center" style="width: 80%;" />
-
-### Sliding Windows
-
-The *sliding windows* assigner assigns elements to windows of fixed length equal to *window size*,
-like the tumbling windows assigner, but here windows can overlap. How frequently a new window
-starts is defined by the user-specified parameter *window slide*; when the slide is smaller than
-the window size, windows overlap and an element can be assigned to multiple windows.
-
-For example, you could have windows of size 10 minutes that slide by 5 minutes. With this you get
-10 minutes' worth of elements in each invocation of the window function, and it will be invoked
-for every 5 minutes of data.
-
-<img src="sliding-windows.svg" class="center" style="width: 80%;" />
-
-### Session Windows
-
-The *session windows* assigner is ideal for cases where the window boundaries need to adjust to the
-incoming data. Both the *tumbling windows* and *sliding windows* assigners assign elements to windows
-that start at fixed time points and have a fixed *window size*. With session windows it is possible
-to have windows that start at individual points in time for each key and that end once there has
-been a certain period of inactivity. The configuration parameter is the *session gap*, which specifies
-how long to wait for new data before considering a session as closed.
-
-<img src="session-windows.svg" class="center" style="width: 80%;" />
-
-### Specifying a Window Assigner
-
-The built-in window assigners (except `GlobalWindows`) come in two versions: one for processing-time
-windowing and one for event-time windowing. The processing-time assigners assign elements to
-windows based on the current clock of the worker machines, while the event-time assigners assign
-windows based on the timestamps of elements. Please have a look at
-[event time](/apis/streaming/event_time.html) to learn about the difference between processing time
-and event time and about how timestamps can be assigned to elements.
-
-The following code snippets show how each of the window assigners can be used in a program:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<T> input = ...;
-
-// tumbling event-time windows
-input
-    .keyBy(<key selector>)
-    .window(TumblingEventTimeWindows.of(Time.seconds(5)))
-    .<windowed transformation>(<window function>);
-
-// sliding event-time windows
-input
-    .keyBy(<key selector>)
-    .window(SlidingEventTimeWindows.of(Time.seconds(10), Time.seconds(5)))
-    .<windowed transformation>(<window function>);
-
-// event-time session windows
-input
-    .keyBy(<key selector>)
-    .window(EventTimeSessionWindows.withGap(Time.minutes(10)))
-    .<windowed transformation>(<window function>);
-
-// tumbling processing-time windows
-input
-    .keyBy(<key selector>)
-    .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
-    .<windowed transformation>(<window function>);
-
-// sliding processing-time windows
-input
-    .keyBy(<key selector>)
-    .window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)))
-    .<windowed transformation>(<window function>);
-
-// processing-time session windows
-input
-    .keyBy(<key selector>)
-    .window(ProcessingTimeSessionWindows.withGap(Time.minutes(10)))
-    .<windowed transformation>(<window function>);
-
-// global windows
-input
-    .keyBy(<key selector>)
-    .window(GlobalWindows.create())
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[T] = ...
-
-// tumbling event-time windows
-input
-    .keyBy(<key selector>)
-    .window(TumblingEventTimeWindows.of(Time.seconds(5)))
-    .<windowed transformation>(<window function>)
-
-// sliding event-time windows
-input
-    .keyBy(<key selector>)
-    .window(SlidingEventTimeWindows.of(Time.seconds(10), Time.seconds(5)))
-    .<windowed transformation>(<window function>)
-
-// event-time session windows
-input
-    .keyBy(<key selector>)
-    .window(EventTimeSessionWindows.withGap(Time.minutes(10)))
-    .<windowed transformation>(<window function>)
-
-// tumbling processing-time windows
-input
-    .keyBy(<key selector>)
-    .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
-    .<windowed transformation>(<window function>)
-
-// sliding processing-time windows
-input
-    .keyBy(<key selector>)
-    .window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)))
-    .<windowed transformation>(<window function>)
-
-// processing-time session windows
-input
-    .keyBy(<key selector>)
-    .window(ProcessingTimeSessionWindows.withGap(Time.minutes(10)))
-    .<windowed transformation>(<window function>)
-
-// global windows
-input
-    .keyBy(<key selector>)
-    .window(GlobalWindows.create())
-    .<windowed transformation>(<window function>)
-{% endhighlight %}
-</div>
-</div>
-
-Note how we can specify a time interval by using one of `Time.milliseconds(x)`, `Time.seconds(x)`,
-`Time.minutes(x)`, and so on.
-
-The time-based window assigners also take an optional `offset` parameter that can be used to
-change the alignment of windows. For example, without an offset, hourly windows are aligned
-with the epoch; that is, you will get windows such as `1:00 - 1:59`, `2:00 - 2:59` and so on. If you
-want to change that you can give an offset. With an offset of 15 minutes you would, for example,
-get `1:15 - 2:14`, `2:15 - 3:14` etc. Another important use case for offsets is when you
-want to have daily windows and live in a timezone other than UTC-0. For example, in China
-you would have to specify an offset of `Time.hours(-8)`.
-
-This example shows how an offset can be specified for tumbling event time windows (the other
-windows work accordingly):
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<T> input = ...;
-
-// tumbling event-time windows
-input
-    .keyBy(<key selector>)
-    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[T] = ...
-
-// tumbling event-time windows
-input
-    .keyBy(<key selector>)
-    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
-    .<windowed transformation>(<window function>)
-{% endhighlight %}
-</div>
-</div>
-
-## Window Functions
-
-The *window function* is used to process the elements of each window (and key) once the system
-determines that a window is ready for processing (see [triggers](#triggers) for how the system
-determines when a window is ready).
-
-The window function can be one of `ReduceFunction`, `FoldFunction` or `WindowFunction`. The first
-two can be executed more efficiently because Flink can incrementally aggregate the elements for each
-window as they arrive. A `WindowFunction` gets an `Iterable` for all the elements contained in a
-window and additional meta information about the window to which the elements belong.
-
-A windowed transformation with a `WindowFunction` cannot be executed as efficiently as the other
-cases because Flink has to buffer *all* elements for a window internally before invoking the function.
-This can be mitigated by combining a `WindowFunction` with a `ReduceFunction` or `FoldFunction` to
-get both incremental aggregation of window elements and the additional information that the
-`WindowFunction` receives. We will look at examples for each of these variants.
-
-### ReduceFunction
-
-A reduce function specifies how two values can be combined to form one element. Flink can use this
-to incrementally aggregate the elements in a window.
-
-A `ReduceFunction` can be used in a program like this:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Tuple2<String, Long>> input = ...;
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .reduce(new ReduceFunction<Tuple2<String, Long>>() {
-      public Tuple2<String, Long> reduce(Tuple2<String, Long> v1, Tuple2<String, Long> v2) {
-        return new Tuple2<>(v1.f0, v1.f1 + v2.f1);
-      }
-    });
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[(String, Long)] = ...
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .reduce { (v1, v2) => (v1._1, v1._2 + v2._2) }
-{% endhighlight %}
-</div>
-</div>
-
-A `ReduceFunction` specifies how two elements from the input can be combined to produce
-an output element. This example will sum up the second field of the tuple for all elements
-in a window.
-
-### FoldFunction
-
-A fold function can be specified like this:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Tuple2<String, Long>> input = ...;
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .fold("", new FoldFunction<Tuple2<String, Long>, String>> {
-       public String fold(String acc, Tuple2<String, Long> value) {
-         return acc + value.f1;
-       }
-    });
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[(String, Long)] = ...
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .fold("") { (acc, v) => acc + v._2 }
-{% endhighlight %}
-</div>
-</div>
-
-A `FoldFunction` specifies how elements from the input will be added to an initial
-accumulator value (`""`, the empty string, in our example). This example will compute
-a concatenation of all the `Long` fields of the input.
-
-### WindowFunction - The Generic Case
-
-Using a `WindowFunction` provides the most flexibility, at the cost of performance. The reason for this
-is that elements cannot be incrementally aggregated for a window and instead need to be buffered
-internally until the window is considered ready for processing. A `WindowFunction` gets an
-`Iterable` containing all the elements of the window being processed. The signature of
-`WindowFunction` is this:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-public interface WindowFunction<IN, OUT, KEY, W extends Window> extends Function, Serializable {
-
-  /**
-   * Evaluates the window and outputs none or several elements.
-   *
-   * @param key The key for which this window is evaluated.
-   * @param window The window that is being evaluated.
-   * @param input The elements in the window being evaluated.
-   * @param out A collector for emitting elements.
-   *
-   * @throws Exception The function may throw exceptions to fail the program and trigger recovery.
-   */
-  void apply(KEY key, W window, Iterable<IN> input, Collector<OUT> out) throws Exception;
-}
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-trait WindowFunction[IN, OUT, KEY, W <: Window] extends Function with Serializable {
-
-  /**
-    * Evaluates the window and outputs none or several elements.
-    *
-    * @param key    The key for which this window is evaluated.
-    * @param window The window that is being evaluated.
-    * @param input  The elements in the window being evaluated.
-    * @param out    A collector for emitting elements.
-    */
-  def apply(key: KEY, window: W, input: Iterable[IN], out: Collector[OUT])
-}
-{% endhighlight %}
-</div>
-</div>
-
-Here we show an example that uses a `WindowFunction` to count the elements in a window. We do this
-because we want to access information about the window itself, to emit it along with the count.
-Buffering all elements just to count them is very inefficient, however, and counting should be
-done with a `ReduceFunction` in practice. Below, we will see an example of how a `ReduceFunction` can
-be combined with a `WindowFunction` to get both incremental aggregation and the added
-information of a `WindowFunction`.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Tuple2<String, Long>> input = ...;
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(new MyWindowFunction());
-
-/* ... */
-
-public class MyWindowFunction implements WindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {
-
-  public void apply(String key, TimeWindow window, Iterable<Tuple2<String, Long>> input, Collector<String> out) {
-    long count = 0;
-    for (Tuple2<String, Long> in: input) {
-      count++;
-    }
-    out.collect("Window: " + window + " count: " + count);
-  }
-}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[(String, Long)] = ...
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(new MyWindowFunction())
-
-/* ... */
-
-class MyWindowFunction extends WindowFunction[(String, Long), String, String, TimeWindow] {
-
-  def apply(key: String, window: TimeWindow, input: Iterable[(String, Long)], out: Collector[String]): Unit = {
-    var count = 0L
-    for (in <- input) {
-      count = count + 1
-    }
-    out.collect(s"Window $window count: $count")
-  }
-}
-{% endhighlight %}
-</div>
-</div>
-
-### WindowFunction with Incremental Aggregation
-
-A `WindowFunction` can be combined with either a `ReduceFunction` or a `FoldFunction`. When doing
-this, the `ReduceFunction`/`FoldFunction` is used to incrementally aggregate elements as they
-arrive, while the `WindowFunction` is provided with the aggregated result once the window is
-ready for processing. This gives you the benefit of incremental window computation while retaining
-the additional meta information that a `WindowFunction` provides.
-
-The following example shows how incremental aggregation functions can be combined with
-a `WindowFunction`; a sketch of the helper classes follows after the snippet.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Tuple2<String, Long>> input = ...;
-
-// for folding incremental computation
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(<initial value>, new MyFoldFunction(), new MyWindowFunction());
-
-// for reducing incremental computation
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(new MyReduceFunction(), new MyWindowFunction());
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[(String, Long)] = ...
-
-// for folding incremental computation
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(<initial value>, new MyFoldFunction(), new MyWindowFunction())
-
-// for reducing incremental computation
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(new MyReduceFunction(), new MyWindowFunction())
-{% endhighlight %}
-</div>
-</div>
-
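-For illustration, here is a minimal sketch of what `MyReduceFunction` and `MyWindowFunction` from
-the snippets above could look like. The class names match the placeholders above, but the
-max-aggregation is an illustrative assumption, not a fixed API:
-
-{% highlight java %}
-// A ReduceFunction that keeps the element with the largest second field.
-public class MyReduceFunction implements ReduceFunction<Tuple2<String, Long>> {
-  public Tuple2<String, Long> reduce(Tuple2<String, Long> v1, Tuple2<String, Long> v2) {
-    return v1.f1 >= v2.f1 ? v1 : v2;
-  }
-}
-
-// Thanks to the incremental aggregation, the Iterable contains exactly one
-// element: the pre-aggregated result for the window.
-public class MyWindowFunction implements WindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {
-  public void apply(String key, TimeWindow window, Iterable<Tuple2<String, Long>> input, Collector<String> out) {
-    Tuple2<String, Long> max = input.iterator().next();
-    out.collect("Max for key " + key + " in window " + window + ": " + max.f1);
-  }
-}
-{% endhighlight %}
-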
-## Dealing with Late Data
-
-When working with event-time windowing it can happen that elements arrive late, i.e., the
-watermark that Flink uses to keep track of the progress of event time is already past the
-end timestamp of the window to which an element belongs. Please
-see [event time](/apis/streaming/event_time.html) and especially
-[late elements](/apis/streaming/event_time.html#late-elements) for a more thorough discussion of
-how Flink deals with event time.
-
-You can specify how a windowed transformation should deal with late elements and how much lateness
-is allowed. The parameter for this is called *allowed lateness*. This specifies by how much time
-elements can be late. Elements that arrive within the allowed lateness are still put into windows
-and are considered when computing window results. Elements that arrive after the allowed lateness
-are dropped. Flink will also make sure that any state held by the windowing operation is garbage
-collected once the watermark passes the end of a window plus the allowed lateness.
-
-<span class="label label-info">Default</span> By default, the allowed lateness is set to
-`0`. That is, elements that arrive behind the watermark will be dropped.
-
-You can specify an allowed lateness like this:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<T> input = ...;
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .allowedLateness(<time>)
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[T] = ...
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .allowedLateness(<time>)
-    .<windowed transformation>(<window function>)
-{% endhighlight %}
-</div>
-</div>
-
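-For example, to keep windows open for elements that are up to one minute late (the one minute is
-an arbitrary choice for this sketch):
-
-{% highlight java %}
-input
-    .keyBy(<key selector>)
-    .window(TumblingEventTimeWindows.of(Time.seconds(5)))
-    .allowedLateness(Time.minutes(1))
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-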
-<span class="label label-info">Note</span> When using the `GlobalWindows` window assigner no
-data is ever considered late because the end timestamp of the global window is `Long.MAX_VALUE`.
-
-## Triggers
-
-A `Trigger` determines when a window (as assigned by the `WindowAssigner`) is ready to be
-processed by the *window function*. The trigger observes how elements are added to windows
-and can also keep track of the progress of processing time and event time. Once a trigger
-determines that a window is ready for processing, it fires. This is the signal for the
-window operation to take the elements that are currently in the window and pass them along to
-the window function to produce output for the firing window.
-
-Each `WindowAssigner` (except `GlobalWindows`) comes with a default trigger that should be
-appropriate for most use cases. For example, `TumblingEventTimeWindows` has an `EventTimeTrigger` as
-default trigger. This trigger simply fires once the watermark passes the end of a window.
-
-You can specify the trigger to be used by calling `trigger()` with a given `Trigger`. The
-whole specification of the windowed transformation would then look like this:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<T> input = ...;
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .trigger(<trigger>)
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[T] = ...
-
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .trigger(<trigger>)
-    .<windowed transformation>(<window function>)
-{% endhighlight %}
-</div>
-</div>
-
-Flink comes with a few triggers out of the box: the already mentioned `EventTimeTrigger` fires
-based on the progress of event time as measured by the watermark; the `ProcessingTimeTrigger` does
-the same but based on processing time; and the `CountTrigger` fires once the number of elements
-in a window exceeds a given limit.
-
-<span class="label label-danger">Attention</span> By specifying a trigger using `trigger()` you
-are overwriting the default trigger of a `WindowAssigner`. For example, if you specify a
-`CountTrigger` for `TumblingEventTimeWindows` you will no longer get window firings based on the
-progress of time but only by count. Right now, you have to write your own custom trigger if
-you want to react based on both time and count.
-
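-A minimal sketch of the scenario described in the note above (the count of 100 is an arbitrary
-choice):
-
-{% highlight java %}
-DataStream<T> input = ...;
-
-// the CountTrigger replaces the default EventTimeTrigger, so windows now
-// fire after every 100 elements per key and window, never based on time
-input
-    .keyBy(<key selector>)
-    .window(TumblingEventTimeWindows.of(Time.seconds(5)))
-    .trigger(CountTrigger.of(100))
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-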
-The internal `Trigger` API is still considered experimental but you can check out the code
-if you want to write your own custom trigger:
-{% gh_link /flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/Trigger.java "Trigger.java" %}.
-
-## Non-keyed Windowing
-
-You can also leave out the `keyBy()` when specifying a windowed transformation. This means, however,
-that Flink cannot process windows for different keys in parallel, essentially turning the
-transformation into a non-parallel operation.
-
-<span class="label label-danger">Warning</span> As mentioned in the introduction, non-keyed
-windows have the disadvantage that work cannot be distributed in the cluster because
-windows cannot be computed independently per key. This can have severe performance implications.
-
-
-The basic structure of a non-keyed windowed transformation is as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<T> input = ...;
-
-input
-    .windowAll(<window assigner>)
-    .<windowed transformation>(<window function>);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[T] = ...
-
-input
-    .windowAll(<window assigner>)
-    .<windowed transformation>(<window function>)
-{% endhighlight %}
-</div>
-</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming_guide.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming_guide.md b/docs/apis/streaming_guide.md
deleted file mode 100644
index a09cf64..0000000
--- a/docs/apis/streaming_guide.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: DataStream API
----
-
-<meta http-equiv="refresh" content="1; url={{ site.baseurl }}/apis/streaming/index.html" />
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The *DataStream API guide* has been moved. Redirecting to [{{ site.baseurl }}/apis/streaming/index.html]({{ site.baseurl }}/apis/streaming/index.html) in 1 second.


[78/89] [abbrv] flink git commit: [FLINK-4392] [rpc] Make RPC Service thread-safe

Posted by se...@apache.org.
[FLINK-4392] [rpc] Make RPC Service thread-safe


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/0d38da04
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/0d38da04
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/0d38da04

Branch: refs/heads/flip-6
Commit: 0d38da040e1d51ce5251da0fd43c19269d7c7b38
Parents: df0bf94
Author: Stephan Ewen <se...@apache.org>
Authored: Sat Aug 13 19:11:47 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:03 2016 +0200

----------------------------------------------------------------------
 .../flink/runtime/rpc/akka/AkkaGateway.java     |  3 +-
 .../flink/runtime/rpc/akka/AkkaRpcService.java  | 92 +++++++++++++++-----
 2 files changed, 70 insertions(+), 25 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/0d38da04/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
index a826e7d..ec3091c 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
@@ -19,11 +19,12 @@
 package org.apache.flink.runtime.rpc.akka;
 
 import akka.actor.ActorRef;
+import org.apache.flink.runtime.rpc.RpcGateway;
 
 /**
  * Interface for Akka based rpc gateways
  */
-interface AkkaGateway {
+interface AkkaGateway extends RpcGateway {
 
 	ActorRef getRpcServer();
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/0d38da04/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
index 17983d0..448216c 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -28,47 +28,61 @@ import akka.actor.Props;
 import akka.dispatch.Mapper;
 import akka.pattern.AskableActorSelection;
 import akka.util.Timeout;
+
 import org.apache.flink.runtime.akka.AkkaUtils;
 import org.apache.flink.runtime.rpc.MainThreadExecutor;
 import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcService;
-import org.apache.flink.util.Preconditions;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
 import scala.concurrent.Future;
 
+import javax.annotation.concurrent.ThreadSafe;
 import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.Proxy;
-import java.util.Collection;
 import java.util.HashSet;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
 
 /**
- * Akka based {@link RpcService} implementation. The rpc service starts an Akka actor to receive
- * rpcs from a {@link RpcGateway}.
+ * Akka based {@link RpcService} implementation. The RPC service starts an Akka actor to receive
+ * RPC invocations from a {@link RpcGateway}.
  */
+@ThreadSafe
 public class AkkaRpcService implements RpcService {
+
 	private static final Logger LOG = LoggerFactory.getLogger(AkkaRpcService.class);
 
+	private final Object lock = new Object();
+
 	private final ActorSystem actorSystem;
 	private final Timeout timeout;
-	private final Collection<ActorRef> actors = new HashSet<>(4);
+	private final Set<ActorRef> actors = new HashSet<>(4);
+
+	private volatile boolean stopped;
 
 	public AkkaRpcService(final ActorSystem actorSystem, final Timeout timeout) {
-		this.actorSystem = Preconditions.checkNotNull(actorSystem, "actor system");
-		this.timeout = Preconditions.checkNotNull(timeout, "timeout");
+		this.actorSystem = checkNotNull(actorSystem, "actor system");
+		this.timeout = checkNotNull(timeout, "timeout");
 	}
 
+	// this method does not mutate state and is thus thread-safe
 	@Override
 	public <C extends RpcGateway> Future<C> connect(final String address, final Class<C> clazz) {
-		LOG.info("Try to connect to remote rpc server with address {}. Returning a {} gateway.", address, clazz.getName());
+		checkState(!stopped, "RpcService is stopped");
 
-		final ActorSelection actorSel = actorSystem.actorSelection(address);
+		LOG.debug("Try to connect to remote RPC endpoint with address {}. Returning a {} gateway.",
+				address, clazz.getName());
 
+		final ActorSelection actorSel = actorSystem.actorSelection(address);
 		final AskableActorSelection asker = new AskableActorSelection(actorSel);
 
 		final Future<Object> identify = asker.ask(new Identify(42), timeout);
-
 		return identify.map(new Mapper<Object, C>(){
 			@Override
 			public C apply(Object obj) {
@@ -89,20 +103,29 @@ public class AkkaRpcService implements RpcService {
 
 	@Override
 	public <C extends RpcGateway, S extends RpcEndpoint<C>> C startServer(S rpcEndpoint) {
-		Preconditions.checkNotNull(rpcEndpoint, "rpc endpoint");
-
-		LOG.info("Start Akka rpc actor to handle rpcs for {}.", rpcEndpoint.getClass().getName());
+		checkNotNull(rpcEndpoint, "rpc endpoint");
 
 		Props akkaRpcActorProps = Props.create(AkkaRpcActor.class, rpcEndpoint);
+		ActorRef actorRef;
+
+		synchronized (lock) {
+			checkState(!stopped, "RpcService is stopped");
+			actorRef = actorSystem.actorOf(akkaRpcActorProps);
+			actors.add(actorRef);
+		}
 
-		ActorRef actorRef = actorSystem.actorOf(akkaRpcActorProps);
-		actors.add(actorRef);
+		LOG.info("Starting RPC endpoint for {} at {} .", rpcEndpoint.getClass().getName(), actorRef.path());
 
 		InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout);
 
+		// Rather than using the System ClassLoader directly, we derive the ClassLoader
+		// from this class . That works better in cases where Flink runs embedded and all Flink
+		// code is loaded dynamically (for example from an OSGI bundle) through a custom ClassLoader
+		ClassLoader classLoader = getClass().getClassLoader();
+
 		@SuppressWarnings("unchecked")
 		C self = (C) Proxy.newProxyInstance(
-			ClassLoader.getSystemClassLoader(),
+			classLoader,
 			new Class<?>[]{rpcEndpoint.getSelfGatewayType(), MainThreadExecutor.class, AkkaGateway.class},
 			akkaInvocationHandler);
 
@@ -110,35 +133,56 @@ public class AkkaRpcService implements RpcService {
 	}
 
 	@Override
-	public <C extends RpcGateway> void stopServer(C selfGateway) {
+	public void stopServer(RpcGateway selfGateway) {
 		if (selfGateway instanceof AkkaGateway) {
 			AkkaGateway akkaClient = (AkkaGateway) selfGateway;
 
-			if (actors.contains(akkaClient.getRpcServer())) {
-				ActorRef selfActorRef = akkaClient.getRpcServer();
-
-				LOG.info("Stop Akka rpc actor {}.", selfActorRef.path());
+			boolean fromThisService;
+			synchronized (lock) {
+				if (stopped) {
+					return;
+				} else {
+					fromThisService = actors.remove(akkaClient.getRpcServer());
+				}
+			}
 
+			if (fromThisService) {
+				ActorRef selfActorRef = akkaClient.getRpcServer();
+				LOG.info("Stopping RPC endpoint {}.", selfActorRef.path());
 				selfActorRef.tell(PoisonPill.getInstance(), ActorRef.noSender());
+			} else {
+				LOG.debug("RPC endpoint {} already stopped or from different RPC service");
 			}
 		}
 	}
 
 	@Override
 	public void stopService() {
-		LOG.info("Stop Akka rpc service.");
-		actorSystem.shutdown();
+		LOG.info("Stopping Akka RPC service.");
+
+		synchronized (lock) {
+			if (stopped) {
+				return;
+			}
+
+			stopped = true;
+			actorSystem.shutdown();
+			actors.clear();
+		}
+
 		actorSystem.awaitTermination();
 	}
 
 	@Override
 	public <C extends RpcGateway> String getAddress(C selfGateway) {
+		checkState(!stopped, "RpcService is stopped");
+
 		if (selfGateway instanceof AkkaGateway) {
 			ActorRef actorRef = ((AkkaGateway) selfGateway).getRpcServer();
 			return AkkaUtils.getAkkaURL(actorSystem, actorRef);
 		} else {
 			String className = AkkaGateway.class.getName();
-			throw new RuntimeException("Cannot get address for non " + className + '.');
+			throw new IllegalArgumentException("Cannot get address for non " + className + '.');
 		}
 	}
 }


[67/89] [abbrv] flink git commit: [FLINK-3823] Fix travis log upload & upload logs to transfer.sh (14 days)

Posted by se...@apache.org.
[FLINK-3823] Fix travis log upload & upload logs to transfer.sh (14 days)


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/444315a1
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/444315a1
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/444315a1

Branch: refs/heads/flip-6
Commit: 444315a12ca2b1d3de44ea50dda9b8bb5a36bb9e
Parents: d16dcd2
Author: Robert Metzger <rm...@apache.org>
Authored: Thu Aug 25 10:34:57 2016 +0200
Committer: Robert Metzger <rm...@apache.org>
Committed: Thu Aug 25 15:19:01 2016 +0200

----------------------------------------------------------------------
 .travis.yml                  | 14 +++-----------
 tools/travis_mvn_watchdog.sh | 14 ++++++++++----
 2 files changed, 13 insertions(+), 15 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/444315a1/.travis.yml
----------------------------------------------------------------------
diff --git a/.travis.yml b/.travis.yml
index b8b39f1..e15673e 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -44,23 +44,15 @@ matrix:
 git:
   depth: 100
 
-notifications:
-  webhooks:
-    urls:
-      - https://webhooks.gitter.im/e/d70a7e674cb9354c77b2
-    on_success: always  # options: [always|never|change] default: always
-    on_failure: always  # options: [always|never|change] default: always
-  slack:
-    secure: iYjxJn8OkCRslJ30/PcE+EbMiqfKwsvUJiVUEQAEXqCEwZg+wYDsN0ilPQQT0zU16mYWKoMTx71zrOZpjirGq7ww0XZ0wAfXDjgmTxX/DaEdp87uNgTRdQzLV7mQouMKZni28eoa08Rb2NIoLLQ39q7uCu0W/p7vAD2e9xHlBBE=
 
 env:
     global:
         # Global variable to avoid hanging travis builds when downloading cache archives.
         - MALLOC_ARENA_MAX=2
         # Build artifacts like logs (variables for apache/flink repo)
-        - secure: "Fm3NK28qN8yLtpJl4VI58biBECpOodMYbYXPVWwa62R7jkhHl2U1s4Xa5ujEgNIDcsUsY66z0V4pU0Es0XLNOY2ajlaFOHTmngzFIXul1r4vuNy0H8okEBjs9Ks0TOWYrE6ndAv1J4/oUsRtehayrriaehn31emXL9c4RSKgaiQ="
-        - secure: "CGcWDpoPLKVPVxFCa+rh5svyrSy7tWTsydsFuLlw5BH5QR57FWH0P5ZBZ31MPppoNNpKEp1V5PBxOH0dUAx8SVNWQFNCsQrOwVpTnTlyl3Cd1udj2hahbB3l+IGf0+O3v2vv6blYm6vJb98NqzZknjdIefDDBfu52ndJy1UqHQw="
-        - secure: "J4IJ7ZG5X+x/2K00kCpj6N/j3wEc6vG59KdDFVZp1WnKH8H0cty2wujZvDhBV+krbqja2MHhXQt/2mDjqm7pkdk1YElDOWsx909aw29wUdDN4yOsxFekIa5jMCrcQxbwzDRal6JmAzCakk51qIEgCYuAKquT0N+oETmnOhmcQe0="
+        - secure: "c8AY4ucfq3eWpw1fzFqIoXg0B2JyBYFPruje6OJNN+eYZ/TEkXgoFXTXBYvx0Ovuy6T+nxokPyx+s+wFphVssEkJMhWZk7tYuWkOxM/ZeZ1tZpkrCUgeb2jFpmV0dbfOTeTW9ZSSSXUWCVIHfdDwm0BAoabsEwG2WcPZvnO9/js="
+        - secure: "Y1VnJbGPSC2trnV0RMN1NQtYQd97/WiFGuqHsoN3G778rPyX2NN9lPg9ZkWp4SZQrJewIR+te4TWgpmckDhMSxHFjQWlj6NBGdC9wrg13Tgll1Lh5ypg7QWhlMcob32K6xWmFaDYKf0RFx5PHnlKAZN4o9EyFHZoZXanoY/PS4w="
+        - secure: "Hl4fDGRUaV1YG8tWKamOZMgbmhy/NuzYRhyJI9arFkhoY5WD2waOEb+jIuEYiS6mNqgjed/Wimurpab2J5eIrHjeWZspqks0ROdCtlZCVXbXjsnado5bFOYXrrb7X3SPhm+0O99uKXdYkPyCn/WQ9Zj00Gz8urap05IzCT2JXjg="
 
 before_script:
    - "gem install --no-document --version 0.8.9 faraday "

http://git-wip-us.apache.org/repos/asf/flink/blob/444315a1/tools/travis_mvn_watchdog.sh
----------------------------------------------------------------------
diff --git a/tools/travis_mvn_watchdog.sh b/tools/travis_mvn_watchdog.sh
index 5be39ed..d1b396c 100755
--- a/tools/travis_mvn_watchdog.sh
+++ b/tools/travis_mvn_watchdog.sh
@@ -72,11 +72,13 @@ upload_artifacts_s3() {
 
 	ls $ARTIFACTS_DIR
 
-	if [ -n "$UPLOAD_BUCKET" ] && [ -n "$UPLOAD_ACCESS_KEY" ] && [ -n "$UPLOAD_SECRET_KEY" ]; then
-		echo "COMPRESSING build artifacts."
+	echo "COMPRESSING build artifacts."
+
+	cd $ARTIFACTS_DIR
+	tar -zcvf $ARTIFACTS_FILE *
 
-		cd $ARTIFACTS_DIR
-		tar -zcvf $ARTIFACTS_FILE *
+	# Upload to secured S3
+	if [ -n "$UPLOAD_BUCKET" ] && [ -n "$UPLOAD_ACCESS_KEY" ] && [ -n "$UPLOAD_SECRET_KEY" ]; then
 
 		# Install artifacts tool
 		curl -sL https://raw.githubusercontent.com/travis-ci/artifacts/master/install | bash
@@ -89,6 +91,10 @@ upload_artifacts_s3() {
 		# re-creates the whole directory structure from root.
 		artifacts upload --bucket $UPLOAD_BUCKET --key $UPLOAD_ACCESS_KEY --secret $UPLOAD_SECRET_KEY --target-paths $UPLOAD_TARGET_PATH $ARTIFACTS_FILE
 	fi
+
+	# upload to http://transfer.sh
+	echo "Uploading to transfer.sh"
+	curl --upload-file $ARTIFACTS_FILE http://transfer.sh
 }
 
 print_stacktraces () {


[05/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/job_status.svg
----------------------------------------------------------------------
diff --git a/docs/fig/job_status.svg b/docs/fig/job_status.svg
new file mode 100644
index 0000000..c259db4
--- /dev/null
+++ b/docs/fig/job_status.svg
@@ -0,0 +1,1049 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   width="240mm"
+   height="220mm"
+   viewBox="0 0 850.3937 779.52755"
+   id="svg2"
+   version="1.1"
+   inkscape:version="0.91 r13725"
+   sodipodi:docname="job_status.svg">
+  <defs
+     id="defs4">
+    <marker
+       inkscape:stockid="Arrow2Mstart"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="Arrow2Mstart"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path4577"
+         style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#000000;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+         transform="scale(0.6) translate(0,0)" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Mstart"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="Arrow1Mstart"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path4559"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         transform="scale(0.4) translate(10,0)" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker4407"
+       refX="0"
+       refY="0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend">
+      <path
+         transform="scale(-0.6,-0.6)"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         id="path4409"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker4534"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         inkscape:connector-curvature="0"
+         id="path4536"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker7718"
+       refX="0"
+       refY="0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend"
+       inkscape:collect="always">
+      <path
+         transform="scale(-0.6,-0.6)"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         id="path7720"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker6764"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path6766"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker6634"
+       refX="0"
+       refY="0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend"
+       inkscape:collect="always">
+      <path
+         transform="scale(-0.6,-0.6)"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         id="path6636"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker6510"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path6512"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker6392"
+       refX="0"
+       refY="0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend"
+       inkscape:collect="always">
+      <path
+         transform="scale(-0.6,-0.6)"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         id="path6394"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker6280"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path6282"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker6174"
+       refX="0"
+       refY="0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend"
+       inkscape:collect="always">
+      <path
+         transform="scale(-0.6,-0.6)"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         id="path6176"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker6074"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path6076"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker5892"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path5894"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker5237"
+       refX="0"
+       refY="0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend"
+       inkscape:collect="always">
+      <path
+         transform="scale(-0.6,-0.6)"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         id="path5239"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker5157"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path5159"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker5075"
+       refX="0"
+       refY="0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend"
+       inkscape:collect="always">
+      <path
+         transform="scale(-0.6,-0.6)"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         id="path5077"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker5005"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path5007"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible"
+       id="marker4947"
+       refX="0"
+       refY="0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend"
+       inkscape:collect="always">
+      <path
+         transform="scale(-0.6,-0.6)"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         id="path4949"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="marker4831"
+       style="overflow:visible"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path4833"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="Arrow2Mend"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path4486"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="scale(-0.6,-0.6)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Lend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="Arrow2Lend"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path4480"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+         d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+         transform="matrix(-1.1,0,0,-1.1,-1.1,0)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Lend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="Arrow1Lend"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path4462"
+         d="M 0,0 5,-5 -12.5,0 5,5 0,0 Z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1"
+         transform="matrix(-0.8,0,0,-0.8,-10,0)"
+         inkscape:connector-curvature="0" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Mend"
+       orient="auto"
+       refY="0"
+       refX="0"
+       id="Arrow1Mend"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path4468"
+         d="M 0,0 5,-5 -12.5,0 5,5 0,0 Z"
+         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1"
+         transform="matrix(-0.4,0,0,-0.4,-4,0)"
+         inkscape:connector-curvature="0" />
+    </marker>
+  </defs>
+  <sodipodi:namedview
+     id="base"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageopacity="0.0"
+     inkscape:pageshadow="2"
+     inkscape:zoom="1.4"
+     inkscape:cx="366.44711"
+     inkscape:cy="435.59833"
+     inkscape:document-units="px"
+     inkscape:current-layer="layer1"
+     showgrid="true"
+     inkscape:window-width="1402"
+     inkscape:window-height="855"
+     inkscape:window-x="38"
+     inkscape:window-y="1"
+     inkscape:window-maximized="1">
+    <inkscape:grid
+       type="xygrid"
+       id="grid4136" />
+  </sodipodi:namedview>
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     inkscape:label="Layer 1"
+     inkscape:groupmode="layer"
+     id="layer1"
+     transform="translate(0,-272.83465)">
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow2Mstart);marker-end:url(#marker4407)"
+       d="M 369.28571,490.93361 C 340,572.36218 330,712.36218 340.71429,802.36218"
+       id="path3473"
+       inkscape:connector-curvature="0"
+       sodipodi:nodetypes="cc" />
+    <g
+       id="g4324"
+       transform="translate(-30.285714,162.34191)">
+      <rect
+         ry="22.587013"
+         rx="21.337021"
+         y="462.81091"
+         x="79.020126"
+         height="45.174026"
+         width="126.60261"
+         id="rect4140"
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:1.5;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4142"
+         y="494.71799"
+         x="93.243065"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="494.71799"
+           x="93.243065"
+           id="tspan4144"
+           sodipodi:role="line">Created</tspan></text>
+    </g>
+    <g
+       id="g4286"
+       transform="translate(-39.560883,231.66354)">
+      <rect
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:1.5;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4254"
+         width="126.60261"
+         height="45.174026"
+         x="336.96799"
+         y="393.48929"
+         rx="21.337021"
+         ry="22.587013" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         x="348.88379"
+         y="422.97327"
+         id="text4182"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4184"
+           x="348.88379"
+           y="422.97327">Running</tspan></text>
+    </g>
+    <g
+       id="g4426"
+       transform="translate(38,166)">
+      <rect
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:4;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:24, 4;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4260"
+         width="126.60261"
+         height="45.174026"
+         x="532.07977"
+         y="459.15283"
+         rx="21.337021"
+         ry="22.587013" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4199"
+         y="491.05991"
+         x="544.06885"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="491.05991"
+           x="544.06885"
+           id="tspan4201"
+           sodipodi:role="line">Finished</tspan></text>
+    </g>
+    <g
+       id="g4276"
+       transform="translate(-8.802002,175.91335)">
+      <rect
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:1.5;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4256"
+         width="126.60261"
+         height="45.174026"
+         x="319.79538"
+         y="264.69485"
+         rx="21.337021"
+         ry="22.587013" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4216"
+         y="294.17883"
+         x="342.99048"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="294.17883"
+           x="342.99048"
+           id="tspan4218"
+           sodipodi:role="line">Failing</tspan></text>
+    </g>
+    <g
+       id="g4421"
+       transform="translate(40,166)">
+      <rect
+         ry="22.587013"
+         rx="21.337021"
+         y="274.60822"
+         x="529.78455"
+         height="45.174026"
+         width="126.60261"
+         id="rect4258"
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:4;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:16, 4;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         x="556.71503"
+         y="306.51529"
+         id="text4222"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4224"
+           x="556.71503"
+           y="306.51529">Failed</tspan></text>
+    </g>
+    <g
+       id="g4416"
+       transform="translate(14,166)">
+      <rect
+         ry="22.500114"
+         rx="26.670492"
+         y="639.99316"
+         x="238.02916"
+         height="45.000229"
+         width="158.24863"
+         id="rect4252"
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:1.5;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4228"
+         y="669.39026"
+         x="252.4257"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="669.39026"
+           x="252.4257"
+           id="tspan4230"
+           sodipodi:role="line">Cancelling</tspan></text>
+    </g>
+    <g
+       id="g4431"
+       transform="translate(38,166)">
+      <rect
+         ry="22.551325"
+         rx="23.453072"
+         y="639.94202"
+         x="519.86011"
+         height="45.10265"
+         width="139.15814"
+         id="rect4262"
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:4;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:24, 4;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         x="531.91351"
+         y="671.81342"
+         id="text4234"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4236"
+           x="531.91351"
+           y="671.81342">Canceled</tspan></text>
+    </g>
+    <g
+       id="g4411"
+       transform="translate(14,166)">
+      <rect
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:1.5;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4250"
+         width="158.24863"
+         height="45.000229"
+         x="32.46925"
+         y="274.6951"
+         rx="26.670492"
+         ry="22.500114" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4240"
+         y="304.09219"
+         x="47.170963"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="304.09219"
+           x="47.170963"
+           id="tspan4242"
+           sodipodi:role="line">Restarting</tspan></text>
+    </g>
+    <g
+       id="g4436"
+       transform="translate(11.142857,169.57143)">
+      <rect
+         style="opacity:1;fill:#5599ff;fill-opacity:1;stroke:#000000;stroke-width:4;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:24, 4;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4264"
+         width="167.29372"
+         height="44.953941"
+         x="492.55664"
+         y="790.35205"
+         rx="28.194908"
+         ry="22.476971" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+         x="507.03235"
+         y="819.72601"
+         id="text4246"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4248"
+           x="507.03235"
+           y="819.72601">Suspended</tspan></text>
+    </g>
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Mend)"
+       d="m 175.31595,646.72195 122.27415,0.17603"
+       id="path4441"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4324"
+       inkscape:connection-end="#g4286"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker4831)"
+       d="m 423.67902,643.73984 146.73144,0"
+       id="path4443"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4286"
+       inkscape:connection-end="#g4426" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker5157)"
+       d="M 143.45346,625.15282 C 240.11678,592.2528 289.05237,539.16028 337.31308,485.78223"
+       id="path4445"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4324"
+       inkscape:connection-end="#g4276"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker5075)"
+       d="m 437.2653,459.19522 132.84994,10e-6"
+       id="path4447"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4276"
+       inkscape:connection-end="#g4421" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker4947)"
+       d="M 140.03639,670.32685 308.21925,805.99316"
+       id="path4449"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4324"
+       inkscape:connection-end="#g4416" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker5005)"
+       d="m 409.8612,824.4933 148.36356,3e-5"
+       id="path4451"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4416"
+       inkscape:connection-end="#g4431" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker5237)"
+       d="M 119.50359,668.89828 C 120,902.36221 317.44733,915.62541 504.35792,974.20919"
+       id="path4453"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4324"
+       inkscape:connection-end="#g4436"
+       sodipodi:nodetypes="cc" />
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="182.57143"
+       y="643.07654"
+       id="text4913"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan4915"
+         x="182.57143"
+         y="643.07654"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">Schedule job</tspan></text>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="432"
+       y="638.36218"
+       id="text4929"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan4931"
+         x="432"
+         y="638.36218"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">All job vertices </tspan><tspan
+         sodipodi:role="line"
+         x="432"
+         y="657.11218"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         id="tspan4933">in final state</tspan></text>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="424"
+       y="820.36218"
+       id="text5063"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan5065"
+         x="424"
+         y="820.36218"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">All job vertices </tspan><tspan
+         sodipodi:role="line"
+         x="424"
+         y="839.11218"
+         id="tspan5067"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">in final state</tspan></text>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="442"
+       y="456.36221"
+       id="text5139"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan5141"
+         x="442"
+         y="456.36221"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">All job vertices </tspan><tspan
+         sodipodi:role="line"
+         x="442"
+         y="475.11221"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         id="tspan5143">in final state &amp; </tspan><tspan
+         sodipodi:role="line"
+         x="442"
+         y="493.86221"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         id="tspan5145">not restartable</tspan></text>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="219"
+       y="606.93359"
+       id="text5227"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan5229"
+         x="219"
+         y="606.93359"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">Fail job</tspan></text>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="156.57143"
+       y="764.07648"
+       id="text5565"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan5567"
+         x="156.57143"
+         y="764.07648"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">Cancel job</tspan></text>
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker5892)"
+       d="m 314.18121,477.05236 c -47.69818,20.03987 -84.94599,9.32911 -116.30849,2.14285"
+       id="path5569"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4276"
+       inkscape:connection-end="#g4411"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker6174)"
+       d="M 419.49016,485.78223 C 974.28571,652.36221 835.65722,822.42397 665.47877,968.49491"
+       id="path5571"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4276"
+       inkscape:connection-end="#g4436"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker6280)"
+       d="m 413.71359,666.04114 c 466.9675,42.03536 351.85357,186.4168 228.2627,292.45377"
+       id="path5573"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4286"
+       inkscape:connection-end="#g4436"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker6392)"
+       d="M 375.26539,850.99339 556.58971,959.92348"
+       id="path5575"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4416"
+       inkscape:connection-end="#g4436" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker6074)"
+       d="M 173.42792,441.40962 C 790,62.362207 855,633.79078 686.95342,807.37059"
+       id="path5579"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4411"
+       inkscape:connection-end="#g4431"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker6764)"
+       d="M 123.688,485.69533 113.6599,625.15282"
+       id="path5581"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4411"
+       inkscape:connection-end="#g4324" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker6510)"
+       d="m 400.64822,624.43854 c 14.16495,-34.7353 13.47368,-78.92413 6.68911,-136.51345"
+       id="path5585"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       sodipodi:nodetypes="cc"
+       inkscape:connection-end="#g4276"
+       inkscape:connection-start="#g4286" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker6634)"
+       d="m 394.21832,671.04114 c 18.43152,58.30256 7.7951,100.67644 -12.18276,132.80916"
+       id="path5587"
+       inkscape:connector-type="polyline"
+       inkscape:connector-curvature="0"
+       inkscape:connection-start="#g4286"
+       inkscape:connection-end="#g4416"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker7718)"
+       d="M 194.71429,444.79077 C 295.26058,393.86555 426.46327,380.03465 584,438.3622"
+       id="path7710"
+       inkscape:connector-curvature="0"
+       sodipodi:nodetypes="cc" />
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="10"
+       y="556.36218"
+       id="text8166"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan8168"
+         x="10"
+         y="556.36218"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">Restarted job</tspan></text>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="156"
+       y="906.36218"
+       id="text8170"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan8172"
+         x="156"
+         y="906.36218"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">Suspend job</tspan></text>
+    <text
+       sodipodi:linespacing="125%"
+       id="text8174"
+       y="906.93359"
+       x="468.57144"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       xml:space="preserve"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         y="906.93359"
+         x="468.57144"
+         id="tspan8176"
+         sodipodi:role="line">Suspend job</tspan></text>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="746.42859"
+       y="906.2193"
+       id="text8178"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan8180"
+         x="746.42859"
+         y="906.2193"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">Suspend job</tspan></text>
+    <text
+       sodipodi:linespacing="125%"
+       id="text8182"
+       y="717.64789"
+       x="482.14288"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       xml:space="preserve"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         y="717.64789"
+         x="482.14288"
+         id="tspan8184"
+         sodipodi:role="line">Suspend job</tspan></text>
+    <text
+       sodipodi:linespacing="125%"
+       id="text8186"
+       y="752.64789"
+       x="409.42856"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       xml:space="preserve"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         y="752.64789"
+         x="409.42856"
+         id="tspan8188"
+         sodipodi:role="line">Cancel job</tspan></text>
+    <text
+       sodipodi:linespacing="125%"
+       id="text8190"
+       y="390.50507"
+       x="361.14285"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       xml:space="preserve"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         y="390.50507"
+         x="361.14285"
+         id="tspan8192"
+         sodipodi:role="line">Fail job</tspan></text>
+    <text
+       sodipodi:linespacing="125%"
+       id="text8194"
+       y="306.21933"
+       x="487.28571"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       xml:space="preserve"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         y="306.21933"
+         x="487.28571"
+         id="tspan8196"
+         sodipodi:role="line">Cancel job</tspan></text>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="174"
+       y="510.36221"
+       id="text8198"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan8200"
+         x="174"
+         y="510.36221"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">All job vertices</tspan><tspan
+         sodipodi:role="line"
+         x="174"
+         y="529.11218"
+         id="tspan8202"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">in final state &amp;</tspan><tspan
+         sodipodi:role="line"
+         x="174"
+         y="547.86218"
+         id="tspan8204"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start">restartable</tspan></text>
+    <text
+       sodipodi:linespacing="125%"
+       id="text8206"
+       y="566.93372"
+       x="418.28571"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       xml:space="preserve"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         y="566.93372"
+         x="418.28571"
+         id="tspan8208"
+         sodipodi:role="line">Fail job</tspan></text>
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:2.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker4534)"
+       d="M 9.5714286,648.36221 46,648.36221"
+       id="path3470"
+       inkscape:connector-curvature="0"
+       sodipodi:nodetypes="cc" />
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-weight:normal;font-size:25px;line-height:125%;font-family:Sans;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       x="250.71428"
+       y="710.93359"
+       id="text7267"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan7269"
+         x="250.71428"
+         y="710.93359"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;font-family:Sans-serif;-inkscape-font-specification:Sans-serif">Cancel job</tspan></text>
+    <text
+       sodipodi:linespacing="125%"
+       id="text7271"
+       y="565.505"
+       x="293.28571"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:25px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       xml:space="preserve"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start"
+         y="565.505"
+         x="293.28571"
+         id="tspan7273"
+         sodipodi:role="line">Fail job</tspan></text>
+  </g>
+</svg>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/jobmanager_ha_overview.png
----------------------------------------------------------------------
diff --git a/docs/fig/jobmanager_ha_overview.png b/docs/fig/jobmanager_ha_overview.png
new file mode 100644
index 0000000..ff82cae
Binary files /dev/null and b/docs/fig/jobmanager_ha_overview.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/non-windowed.svg
----------------------------------------------------------------------
diff --git a/docs/fig/non-windowed.svg b/docs/fig/non-windowed.svg
new file mode 100644
index 0000000..3c1cdaa
--- /dev/null
+++ b/docs/fig/non-windowed.svg
@@ -0,0 +1,22 @@
+<?xml version="1.0" standalone="yes"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg version="1.1" viewBox="0.0 0.0 800.0 600.0" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><clipPath id="p.0"><path d="m0 0l800.0 0l0 600.0l-800.0 0l0 -600.0z" clip-rule="nonzero"></path></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l800.0 0l0 600.0l-800.0 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m145.49606 485.0l509.0079 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m145.49606 485.0l503.0079 0" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m648.50397 486.65173l4.538086 -1.6517334l-4.538086 -1.6517334z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m145.49606 485.0l0 -394.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m145.49606 485.0l0 -388.99213" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m147.1478 96.00787l-1.6517334 -4.5380936l-1.6517334 4.5380936z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m587.0 477.0l60.0 0l0 42.992126l-60.0 0z" fill-rule="nonzero"></path><path fill="#000000" d="m600.90625 502.41998l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5426636 -10.1875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1292114 0l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290771 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 133.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 159.92l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm17.23973 0l-1.671875 0l0 -10.640625q-0.59375 0.578125 -1.578125 1.15625q-0.984375 0.5625 -1.765625 0.859375l0 -1.625q1.40625 -0.65625 2.453125 -1.59375q1.046875 -0.9375 1.484375 -1.8125l1.078125 0l0 13.65625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 254.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 280.91998l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm19.724106 -1.609375l0 1.609375l-8.984375 0q-0.015625 -0.609375 0.1875 -1.15625q0.34375 -0.921875 1.09375 -1.8125q0.765625 -0.890625 2.1875 -2.0625q2.21875 -1.8125 3.0 -2.875q0.78125 -1.0625 0.78125 -2.015625q0 -0.984375 -0.71875 -1.671875q-0.703125 -0.6875 -1.84375 -0.6875q-1.203125 0 -1.9375 0.734375q-0.71875 0.71875 -0.71875 2.0l-1.71875 -0.171875q0.171875 -1.921875 1.328125 -2.921875q1.15625 -1.015625 3.09375 -1.015625q1.953125 0 3.09375 1.09375q1.140625 1.078125 1.140625 2.6875q0 0.8125 -0.34375 1.609375q-0.328125 0.78125 -1.109375 1.65625q-0.765625 0.859375 -2.5625 2.390625q-1.5 1.265625 -1.9375 1.71875q-0.421875 0.4375 -0.703125 0.890625l6.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 375.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 401.91998l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm10.958481 -3.59375l1.671875 -0.21875q0.28125 1.421875 0.96875 2.046875q0.703125 0.625 1.6875 0.625q1.1875 0 2.0 -0.8125q0.8125 -0.828125 0.8125 -2.03125q0 -1.140625 -0.765625 -1.890625q-0.75 -0.75 -1.90625 -0.75q-0.46875 0 -1.171875 0.1875l0.1875 -1.46875q0.15625 0.015625 0.265625 0.015625q1.0625 0 1.90625 -0.546875q0.859375 -0.5625 0.859375 -1.71875q0 -0.921875 -0.625 -1.515625q-0.609375 -0.609375 -1.59375 -0.609375q-0.96875 0 -1.625 0.609375q-0.640625 0.609375 -0.828125 1.84375l-1.671875 -0.296875q0.296875 -1.6875 1.375 -2.609375q1.09375 -0.921875 2.71875 -0.921875q1.109375 0 2.046875 0.484375q0.9375 0.46875 1.421875 1.296875q0.5 0.828125 0.5 1.75q0 0.890625 -0.46875 1.609375q-0.46875 0.71875 -1.40625 1.15625q1.21875 0.265625 1.875 1.15625q0.671875 0.875 0.671875 2.1875q0 1.78125 -1.296875 3.015625q-1.296875 1.234375 -3.28125 1.234375q-1.796875 0 -2.984375 -1.0625q-1.171875 -1.0625 -1.34375 -2.765625z" fill-rule="nonzero"></path><path fill="#9900ff" d="m177.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.000473 6.714737 2.7813263c1.7808533 1.7808685 2.7813263 4.196228 2.7813263 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m203.49606 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.000473 6.714737 2.7813263c1.7808533 1.7808685 2.7813263 4.196228 2.7813263 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m290.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m323.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m348.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m373.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m442.50394 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m469.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m492.50394 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m524.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496033 -9.496063l0 0c2.5185547 0 4.933899 1.000473 6.7147827 2.7813263c1.7808228 1.7808685 2.781311 4.196228 2.781311 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496094 9.496063l0 0c-5.244507 0 -9.496033 -4.251526 -9.496033 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m603.0079 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933838 1.000473 6.7147217 2.7813263c1.7808228 1.7808685 2.781311 4.196228 2.781311 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244568 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m374.97638 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m401.47244 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m209.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.0004883 6.714737 2.7813416c1.7808533 1.7808533 2.7813263 4.1961975 2.7813263 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m242.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m267.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m292.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m568.48
 03 275.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m594.9764 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496033 -9.496063l0 0c2.5185547 0 4.933899 1.0004883 6.7147827 2.7813416c1.7808228 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496094 9.496063l0 0c-5.244507 0 -9.496033 -4.251526 -9.496033 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m618.4803 275.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fi
 ll-rule="nonzero"></path><path fill="#9900ff" d="m477.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m487.99213 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m514.48816 396.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.4960
 63l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m185.76378 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.0004883 6.714737 2.7813416c1.7808533 1.7808533 2.7813263 4.1961975 2.7813263 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m265.0 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m291.49606 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.78134
 16 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m315.0 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m558.01575 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933838 1.0004883 6.7147217 2.7813416c1.7808228 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244568 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path></g></svg>
+


[08/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/checkpoints.svg
----------------------------------------------------------------------
diff --git a/docs/fig/checkpoints.svg b/docs/fig/checkpoints.svg
new file mode 100644
index 0000000..c824296
--- /dev/null
+++ b/docs/fig/checkpoints.svg
@@ -0,0 +1,249 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   width="481.59604"
+   height="368.51669"
+   id="svg2"
+   version="1.1"
+   inkscape:version="0.48.5 r10040"
+   sodipodi:docname="state_partitioning.svg">
+  <defs
+     id="defs4" />
+  <sodipodi:namedview
+     id="base"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageopacity="0.0"
+     inkscape:pageshadow="2"
+     inkscape:zoom="2.8"
+     inkscape:cx="354.96251"
+     inkscape:cy="137.95685"
+     inkscape:document-units="px"
+     inkscape:current-layer="layer1"
+     showgrid="false"
+     fit-margin-right="0.5"
+     fit-margin-bottom="0.3"
+     fit-margin-top="0.3"
+     fit-margin-left="0"
+     inkscape:window-width="2560"
+     inkscape:window-height="1418"
+     inkscape:window-x="1592"
+     inkscape:window-y="-8"
+     inkscape:window-maximized="1" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     inkscape:label="Layer 1"
+     inkscape:groupmode="layer"
+     id="layer1"
+     transform="translate(-130.78007,-350.87488)">
+    <g
+       id="g3138"
+       transform="translate(116.16121,190.10975)">
+      <path
+         id="path3140"
+         d="m 78.39453,344.07322 0,74.86865 95.01117,0 0,-74.86865 -95.01117,0 z"
+         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3142"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="377.96512"
+         x="106.64163"
+         xml:space="preserve">Task</text>
+      <text
+         id="text3144"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="398.97034"
+         x="88.036995"
+         xml:space="preserve">Manager</text>
+      <path
+         id="path3146"
+         d="m 207.48294,344.07322 0,74.86865 95.02992,0 0,-74.86865 -95.02992,0 z"
+         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3148"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="377.96512"
+         x="235.75273"
+         xml:space="preserve">Task</text>
+      <text
+         id="text3150"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="398.97034"
+         x="217.1481"
+         xml:space="preserve">Manager</text>
+      <path
+         id="path3152"
+         d="m 336.57135,344.07322 0,74.86865 95.17996,0 0,-74.86865 -95.17996,0 z"
+         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3154"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="377.96512"
+         x="364.86383"
+         xml:space="preserve">Task</text>
+      <text
+         id="text3156"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="398.97034"
+         x="346.25919"
+         xml:space="preserve">Manager</text>
+      <path
+         id="path3158"
+         d="m 93.079438,161.06513 0,74.85927 95.179962,0 0,-74.85927 -95.179962,0 z"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3160"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="194.95898"
+         x="125.94909"
+         xml:space="preserve">Job</text>
+      <text
+         id="text3162"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="215.96423"
+         x="102.84333"
+         xml:space="preserve">Manager</text>
+      <text
+         id="text3164"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="202.80112"
+         x="33.991787"
+         xml:space="preserve">(master)</text>
+      <text
+         id="text3166"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="385.80722"
+         x="13.838635"
+         xml:space="preserve">(workers)</text>
+      <path
+         id="path3168"
+         d="m 106.5828,243.0418 -2.25056,5.53263 0,0 -2.15679,5.53262 0,0 -1.0315,2.7757 0,0 -0.994,2.79444 0.0187,-0.0188 -0.918974,2.79444 0,0 -0.843961,2.8132 0,-0.0188 -0.768941,2.83196 0,-0.0188 -0.675168,2.85071 0,-0.0188 -0.581395,2.86946 0,-0.0188 -0.468867,2.88822 0.01875,-0.0188 -0.356339,2.90698 0,-0.0188 -0.225056,2.94448 0,-0.0375 -0.07502,2.96324 0,-0.0187 0.07502,3.00075 0,-0.0375 0.225056,3.0195 0,-0.0188 0.375093,3.05702 -0.01875,-0.0375 0.52513,3.09452 0,-0.0188 0.637659,3.11328 0,-0.0188 0.768942,3.13203 0,-0.0188 0.881469,3.15078 -0.01875,-0.0188 0.975243,3.16954 0,-0.0188 1.069009,3.20705 -0.0187,-0.0187 1.16279,3.20705 0,0 1.21905,3.20705 -0.0187,0 1.27532,3.2258 0,0 1.33158,3.24456 0,-0.0188 2.75693,6.50788 0,0 2.34434,5.40134 -1.14404,0.48762 -2.34433,-5.38259 -2.77569,-6.52662 -1.31283,-3.24456 -1.29407,-3.24456 -1.21906,-3.2258 -1.162782,-3.22581 -1.050262,-3.20705 -0.993997,-3.18829 -0.88147,-3.16954 -0.768941,-3.15079 -0.656414,-3.13203 -0.525131,-3.11327
  -0.375093,-3.09452 -0.225056,-3.05701 -0.05626,-3.03826 0.07502,-2.98199 0.225056,-2.982 0.337585,-2.94448 0.487621,-2.92573 0.581395,-2.88822 0.675168,-2.86946 0.787696,-2.85071 0.843961,-2.83196 0.918979,-2.8132 0.993997,-2.8132 1.031508,-2.79445 2.17554,-5.55138 2.25056,-5.53263 z m 5.30757,86.5153 -1.14403,9.69617 -7.87696,-5.77644 c -0.28132,-0.2063 -0.33759,-0.60015 -0.13129,-0.86272 0.20631,-0.28132 0.60015,-0.33758 0.88147,-0.15003 l 6.9955,5.13878 -0.97525,0.43135 1.01276,-8.62715 c 0.0375,-0.33758 0.35634,-0.58139 0.69392,-0.54388 0.33758,0.0375 0.58139,0.35634 0.54388,0.69392 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3170"
+         d="m 115.3975,337.34029 1.70667,-6.07651 0,0 1.65041,-6.07652 0,0.0188 1.53789,-6.07652 0,0.0188 1.36909,-6.02025 0,0 0.60015,-3.00075 0,0.0188 0.54388,-3.00075 0,0.0188 0.46887,-3.00075 0,0.0188 0.39385,-2.96324 0,0 0.30007,-2.94448 0,0.0188 0.22506,-2.96324 0,0.0375 0.13128,-2.92573 0,0.0188 0.0188,-2.92573 0,0.0375 -0.0938,-2.90697 0,0.0188 -0.2063,-2.86946 0,0.0188 -0.28132,-2.86946 0,0.0188 -0.39385,-2.85071 0.0188,0.0187 -0.48762,-2.85071 0,0.0188 -0.54389,-2.8132 0,0.0188 -0.6189,-2.8132 0.0187,0 -0.67517,-2.8132 0,0.0188 -1.53788,-5.57014 0.0188,0.0188 -1.68792,-5.57014 0,0.0188 -1.80045,-5.55138 0,0 -1.46287,-4.35109 1.18155,-0.4126 1.46286,4.36984 1.80045,5.55138 1.68792,5.58889 1.53788,5.5889 0.67517,2.83195 0.63766,2.83196 0.54389,2.83195 0.46886,2.85071 0.39385,2.88822 0.30008,2.88822 0.18754,2.88822 0.0938,2.92573 -0.0188,2.94448 -0.11253,2.94449 -0.22505,2.96324 -0.31883,2.98199 -0.39385,2.98199 -0.46887,3.0195 -0.54388,3.00075 -0.61891,3.0195 -1.36909,6.0390
 1 -1.53788,6.07651 -1.65041,6.09527 -1.70668,6.07651 z m -2.56939,-83.25199 1.96924,-9.56488 7.35183,6.43285 c 0.26257,0.22506 0.28132,0.6189 0.0563,0.88147 -0.22505,0.26256 -0.6189,0.28132 -0.88147,0.0563 l -6.52662,-5.72017 1.01275,-0.33759 -1.76294,8.49587 c -0.075,0.33758 -0.39385,0.56264 -0.73143,0.48762 -0.33758,-0.075 -0.56264,-0.39385 -0.48762,-0.73143 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3172"
+         d="m 127.0629,243.41689 10.12753,7.78319 5.02625,3.88222 4.98874,3.84471 4.91373,3.82595 4.85746,3.76969 4.76368,3.73218 4.65116,3.67591 4.53863,3.6009 4.40735,3.54463 4.27607,3.46962 4.10727,3.35708 3.91973,3.28207 3.73218,3.16954 3.52587,3.05701 1.68792,1.50038 1.61291,1.44411 1.57539,1.42535 1.50037,1.38785 2.86947,2.70067 2.62565,2.58814 2.47562,2.47562 2.26931,2.38184 2.11928,2.28807 1.96924,2.23181 1.83796,2.11928 1.72543,2.08177 1.6129,2.0255 1.51913,1.96924 1.44411,1.95049 1.38785,1.87547 1.36909,1.89422 1.89422,2.68192 -1.03151,0.71267 -1.89422,-2.68191 0,0 -1.35034,-1.87547 0.0188,0.0188 -1.38785,-1.89422 0,0 -1.44411,-1.93173 0,0.0188 -1.51913,-1.96924 0,0.0188 -1.59414,-2.02551 0,0.0188 -1.70668,-2.08177 0,0.0187 -1.8192,-2.13803 0,0.0188 -1.95049,-2.21305 0,0 -2.10052,-2.26932 0,0 -2.26932,-2.38184 0,0 -2.45686,-2.45687 0.0188,0 -2.64441,-2.56939 0.0188,0 -2.85071,-2.68191 0,0 -1.50037,-1.38785 0,0 -1.55664,-1.4066 0,0 -1.63166,-1.46286 0,0 -1.66916,-1.48162 0,
 0.0188 -3.52588,-3.05701 0,0 -3.71342,-3.16954 0,0 -3.91973,-3.26331 0,0 -4.08852,-3.37584 0,0 -4.27607,-3.45086 0.0188,0 -4.40735,-3.54464 0,0.0188 -4.53863,-3.61965 -4.65116,-3.67592 0,0 -4.76368,-3.73218 0,0 -4.83871,-3.76969 -4.93248,-3.82595 -4.96999,-3.84471 -5.02625,-3.86346 -10.12752,-7.80195 z m 100.69384,82.6706 0.84396,9.71492 -8.88971,-4.05101 c -0.30008,-0.15004 -0.45012,-0.50638 -0.30008,-0.82521 0.13128,-0.31883 0.50638,-0.45011 0.82521,-0.31883 l 7.89571,3.61965 -0.88147,0.61891 -0.75018,-8.64591 c -0.0188,-0.35633 0.22505,-0.65641 0.58139,-0.69392 0.33759,-0.0187 0.63766,0.22506 0.67517,0.5814 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3174"
+         d="m 242.72297,335.48358 -4.57614,-8.25206 0,0 -4.5949,-8.15828 0,0 -2.32558,-4.03226 0,0 -2.36308,-3.97599 0,0.0188 -2.38185,-3.91973 0.0188,0 -2.41936,-3.84471 0.0188,0 -2.45686,-3.76968 0,0.0188 -2.47562,-3.67592 0,0 -2.53188,-3.58214 0,0.0188 -2.56939,-3.46962 0,0.0188 -2.64441,-3.35709 0.0188,0.0188 -2.70068,-3.22581 0.0188,0.0188 -2.75694,-3.07577 0.0188,0.0188 -2.83195,-2.92572 0,0.0188 -2.90697,-2.77569 0.0188,0 -2.96324,-2.62566 0.0188,0.0188 -3.03826,-2.47562 0.0188,0.0188 -3.09452,-2.36309 0.0188,0.0188 -3.15079,-2.2318 0.0188,0 -3.1883,-2.11928 0,0.0188 -3.24456,-2.0255 0.0188,0 -3.28207,-1.93173 0.0188,0 -3.33834,-1.85671 0.0188,0 -3.35709,-1.7817 0.0188,0.0188 -3.3946,-1.72543 0.0188,0 -3.41335,-1.68792 0.0187,0.0187 -6.86421,-3.24456 0.0188,0 -5.77644,-2.64441 0.52513,-1.14403 5.77644,2.64441 6.84545,3.24456 3.41335,1.68792 3.3946,1.72543 3.35708,1.80044 3.35709,1.85672 3.28207,1.95048 3.26331,2.02551 3.20705,2.13803 3.16954,2.25056 3.11328,2.36309 3.05701,2.
 49437 2.98199,2.64441 2.92573,2.79445 2.85071,2.94448 2.77569,3.09452 2.70067,3.2258 2.66317,3.37585 2.58814,3.46961 2.55064,3.6009 2.49437,3.69467 2.45686,3.76969 2.4006,3.86346 2.4006,3.91973 2.34433,3.99474 2.34433,4.03225 4.61365,8.17704 4.57614,8.2333 z m -85.57757,-83.60833 -5.60765,-7.97074 9.71492,-0.95649 c 0.33759,-0.0375 0.63766,0.20631 0.67517,0.56264 0.0375,0.33759 -0.22506,0.63766 -0.56264,0.67517 l -8.6459,0.86272 0.45011,-0.994 5.0075,7.10802 c 0.18754,0.28132 0.13128,0.67517 -0.1688,0.88147 -0.28132,0.18755 -0.65641,0.13128 -0.86271,-0.16879 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3176"
+         d="m 341.37254,343.34179 -123.2557,-87.0967 -2.25056,-1.63166 -2.08177,-1.51913 -1.93173,-1.42535 -1.80045,-1.33158 -1.66917,-1.21906 -1.53788,-1.14403 -1.4066,-1.05027 -1.29407,-0.97524 -1.18155,-0.91898 -1.06901,-0.8252 -0.97525,-0.75019 -0.88147,-0.69392 -0.80645,-0.65642 -0.71267,-0.58139 -0.63766,-0.52513 -0.56264,-0.48762 -0.50638,-0.45012 -0.45011,-0.4126 -0.75019,-0.75018 -0.58139,-0.65642 -0.45012,-0.58139 -0.39384,-0.56264 -0.33759,-0.54389 -0.13128,-0.18755 1.06902,-0.67516 0.11252,0.2063 0.33759,0.52513 -0.0188,-0.0188 0.3751,0.54389 -0.0188,-0.0375 0.43136,0.54389 -0.0187,-0.0188 0.54388,0.6189 -0.0188,-0.0375 0.73143,0.73143 -0.0375,-0.0188 0.45011,0.39385 0,0 0.48762,0.45011 0,-0.0188 0.56264,0.48762 -0.0187,0 0.63766,0.52513 0,0 0.71267,0.5814 -0.0188,0 0.7877,0.63765 0,-0.0188 0.88147,0.69392 0,0 0.97524,0.76894 0,0 1.06901,0.82521 0,-0.0188 1.18155,0.90022 0,0 1.29407,0.97524 -0.0187,0 1.4066,1.05026 1.53788,1.14404 0,0 1.65041,1.23781 0,0 1.80045,1.31282 
 1.93173,1.42536 0,0 2.10052,1.51913 2.23181,1.63165 -0.0188,0 123.27446,87.0967 z m -147.31795,-98.61207 -0.48762,-9.73368 8.72092,4.36984 c 0.31883,0.15004 0.45012,0.52513 0.28133,0.84396 -0.15004,0.30008 -0.52514,0.43136 -0.82521,0.28132 l -7.76443,-3.90097 0.90022,-0.58139 0.43136,8.66465 c 0.0188,0.33759 -0.24381,0.63766 -0.60015,0.65642 -0.33759,0.0188 -0.63766,-0.24381 -0.65642,-0.60015 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3178"
+         d="m 200.99382,225.67497 15.21004,6.07652 7.55813,3.05701 7.50187,3.05701 7.42685,3.07576 7.35184,3.09453 7.2393,3.11327 7.10802,3.15079 6.95798,3.15078 6.80795,3.1883 6.63915,3.2258 6.4141,3.26331 3.15079,1.65041 3.07576,1.65041 3.0195,1.66917 2.96324,1.68792 2.88822,1.68792 2.85071,1.68792 2.75694,1.72543 2.70067,1.72543 2.62566,1.74419 2.56939,1.74418 2.47561,1.76294 2.43811,1.78169 2.36309,1.80045 2.28807,1.7817 2.25056,1.8192 2.1943,1.8192 4.20104,3.67592 4.0135,3.69467 3.82596,3.75093 3.67591,3.76969 3.50713,3.78845 3.39459,3.80719 3.28207,3.82596 3.20705,3.86346 3.11327,3.86346 3.07577,3.86346 5.27006,6.77044 -0.97524,0.76894 -5.28882,-6.77043 0,0 -3.05701,-3.86347 0.0188,0 -3.13204,-3.8447 0,0 -3.18829,-3.84471 0.0188,0.0187 -3.28207,-3.82595 0,0 -3.37584,-3.78844 0,0 -3.50713,-3.78845 0.0188,0.0188 -3.65716,-3.75094 0,0 -3.8072,-3.73218 0,0.0188 -3.99475,-3.69467 0.0188,0.0187 -4.20105,-3.65716 0.0188,0 -2.1943,-1.80045 0,0 -2.23181,-1.8192 0.0188,0.0187 -2.28807,-
 1.80045 0,0.0188 -2.34434,-1.78169 0,0 -2.41935,-1.7817 0.0188,0.0188 -2.49438,-1.76294 0.0188,0 -2.55064,-1.74419 0,0 -2.6069,-1.72543 0,0 -2.70067,-1.72543 0.0188,0 -2.75694,-1.70667 0,0.0188 -2.83196,-1.70667 0.0188,0 -2.88822,-1.68792 0,0 -2.96324,-1.66917 0.0187,0 -3.0195,-1.65041 0,0 -3.07576,-1.65041 0,0 -3.13203,-1.65041 0.0188,0 -6.4141,-3.24456 0,0 -6.6204,-3.2258 0.0188,0.0187 -6.80795,-3.18829 0.0188,0 -6.97674,-3.16954 0,0 -7.08927,-3.13203 0,0 -7.2393,-3.09452 0.0188,0 -7.33307,-3.09453 0,0 -7.44561,-3.07576 0,0 -7.48311,-3.05701 -7.55814,-3.05702 0,0 -15.19128,-6.09526 z m 168.3607,101.55655 1.31282,9.67741 -9.0585,-3.6384 c -0.33759,-0.11253 -0.48763,-0.48763 -0.35634,-0.80645 0.13128,-0.31883 0.48762,-0.46887 0.80645,-0.35634 l 8.06451,3.2258 -0.84396,0.67517 -1.16279,-8.6084 c -0.0563,-0.33758 0.18754,-0.65641 0.52513,-0.71267 0.35634,-0.0375 0.67517,0.2063 0.71268,0.54388 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3180"
+         d="m 270.9675,241.59769 -2.58814,-1.33158 -4.8012,5.70142 -0.97524,-0.48762 4.8012,-5.72018 -2.6069,-1.31283 0.60015,-0.73143 6.18904,3.15079 z m 2.79445,3.90097 -0.0375,-0.0375 c -0.11253,-0.0938 -0.22505,-0.16879 -0.33758,-0.24381 -0.0938,-0.075 -0.22506,-0.15004 -0.39385,-0.24381 -0.30008,-0.15004 -0.63766,-0.22506 -1.01275,-0.24381 -0.3751,-0.0188 -0.75019,0 -1.12528,0.0563 l -2.86947,3.43211 -0.91898,-0.46887 4.03226,-4.8387 0.91898,0.46886 -0.5814,0.71268 c 0.5814,-0.075 1.05026,-0.0938 1.42536,-0.0563 0.35634,0.0375 0.67516,0.11252 0.95648,0.26256 0.15004,0.075 0.26257,0.15004 0.33759,0.18755 0.075,0.0375 0.16879,0.11253 0.28132,0.2063 z m 4.0135,-1.53788 -0.71268,0.84396 -1.05026,-0.54389 0.71268,-0.84396 z m -1.46286,1.6129 -4.05101,4.81995 -0.91898,-0.46887 4.05101,-4.81995 z m 6.30157,3.20705 -3.6009,4.29482 c -0.67517,0.80645 -1.36909,1.31282 -2.08177,1.50037 -0.71268,0.16879 -1.50037,0.0375 -2.38184,-0.4126 -0.28132,-0.13128 -0.54389,-0.30008 -0.80645,-0.48762 
 -0.26257,-0.18755 -0.50638,-0.3751 -0.71268,-0.5814 l 0.69392,-0.80645 0.0375,0.0375 c 0.15004,0.16879 0.35634,0.37509 0.61891,0.60015 0.28132,0.24381 0.54388,0.43136 0.8252,0.56264 0.30008,0.15004 0.5814,0.26257 0.84396,0.28132 0.26257,0.0375 0.48762,0.0188 0.71268,-0.0563 0.2063,-0.0563 0.39385,-0.1688 0.58139,-0.30008 0.1688,-0.13128 0.33759,-0.30007 0.50638,-0.50637 l 0.35634,-0.43136 c -0.45011,0.0563 -0.82521,0.075 -1.12528,0.0563 -0.30008,-0.0187 -0.65642,-0.13128 -1.05026,-0.31883 -0.52513,-0.28132 -0.86272,-0.65641 -0.994,-1.12528 -0.13128,-0.48762 -0.0563,-1.01275 0.24381,-1.59414 0.24381,-0.48763 0.56264,-0.90023 0.93773,-1.27532 0.3751,-0.35634 0.80645,-0.63766 1.25657,-0.84396 0.43135,-0.2063 0.90022,-0.30008 1.38784,-0.31883 0.48762,0 0.93774,0.0938 1.36909,0.31883 0.31883,0.16879 0.5814,0.33758 0.7877,0.54388 0.2063,0.18755 0.37509,0.39385 0.50638,0.60015 l 0.22505,-0.16879 z m -1.65041,0.39385 c -0.15004,-0.20631 -0.31883,-0.39385 -0.50638,-0.5814 -0.18755,-0.16879 -
 0.39385,-0.30007 -0.61891,-0.43136 -0.33758,-0.15003 -0.65641,-0.22505 -0.99399,-0.22505 -0.33759,0.0188 -0.67517,0.0938 -0.994,0.26256 -0.30008,0.15004 -0.5814,0.3751 -0.86272,0.65642 -0.26256,0.30007 -0.48762,0.60015 -0.65641,0.95648 -0.2063,0.3751 -0.26256,0.71268 -0.2063,1.01276 0.075,0.30007 0.30007,0.56264 0.69392,0.75018 0.26257,0.15004 0.5814,0.22506 0.91898,0.26257 0.33759,0.0188 0.67517,0.0188 1.03151,-0.0188 z m 7.93322,2.8132 -3.6009,4.29482 c -0.67516,0.80645 -1.36909,1.31282 -2.08176,1.48162 -0.71268,0.18754 -1.50038,0.0563 -2.38185,-0.39385 -0.28132,-0.15004 -0.54388,-0.30008 -0.80645,-0.48762 -0.26256,-0.18755 -0.50637,-0.39385 -0.71268,-0.5814 l 0.69393,-0.80645 0.0375,0.0188 c 0.15003,0.18755 0.35633,0.39385 0.6189,0.61891 0.28132,0.22506 0.54389,0.4126 0.82521,0.56264 0.30007,0.15004 0.58139,0.24381 0.84396,0.28132 0.26256,0.0375 0.48762,0.0188 0.71267,-0.0563 0.20631,-0.0563 0.39385,-0.1688 0.5814,-0.30008 0.16879,-0.15004 0.33758,-0.31883 0.50637,-0.50638 l 0.35
 634,-0.43135 c -0.45011,0.0563 -0.8252,0.075 -1.12528,0.0563 -0.30007,-0.0188 -0.65641,-0.13128 -1.05026,-0.31883 -0.52513,-0.28132 -0.86271,-0.65641 -0.994,-1.12528 -0.13128,-0.48762 -0.0563,-1.01275 0.24381,-1.6129 0.24382,-0.46887 0.56264,-0.88147 0.93774,-1.25656 0.37509,-0.35634 0.80645,-0.65642 1.25656,-0.86272 0.43136,-0.18755 0.90023,-0.30007 1.38785,-0.30007 0.48762,-0.0188 0.93773,0.0938 1.36909,0.31883 0.31883,0.15003 0.58139,0.33758 0.78769,0.54388 0.20631,0.18755 0.3751,0.39385 0.50638,0.60015 l 0.22506,-0.18755 z m -1.65041,0.39384 c -0.15004,-0.2063 -0.31883,-0.4126 -0.50638,-0.58139 -0.18754,-0.16879 -0.39384,-0.31883 -0.6189,-0.43136 -0.33758,-0.16879 -0.65641,-0.24381 -0.994,-0.22505 -0.33758,0.0187 -0.67517,0.0938 -0.994,0.26256 -0.30007,0.15004 -0.58139,0.3751 -0.86271,0.65642 -0.26257,0.28132 -0.48762,0.60014 -0.65641,0.93773 -0.20631,0.39385 -0.26257,0.73143 -0.20631,1.03151 0.075,0.30007 0.30008,0.54388 0.69393,0.75018 0.26256,0.13129 0.58139,0.22506 0.91898,0
 .24381 0.33758,0.0375 0.67516,0.0375 1.0315,0 z m 5.58889,4.29482 c 0.0563,-0.075 0.11253,-0.15003 0.15004,-0.2063 0.0375,-0.075 0.075,-0.13128 0.13128,-0.22505 0.16879,-0.33759 0.2063,-0.65642 0.13129,-0.95649 -0.0938,-0.30008 -0.35634,-0.54389 -0.76895,-0.76894 -0.46886,-0.22506 -0.95648,-0.30008 -1.46286,-0.18755 -0.50638,0.0938 -0.93773,0.35634 -1.31283,0.76894 z m -3.60089,2.53189 c -0.75019,-0.39385 -1.25657,-0.86272 -1.51913,-1.44411 -0.24381,-0.5814 -0.18755,-1.2003 0.16879,-1.89423 0.50638,-0.99399 1.25656,-1.70667 2.25056,-2.10052 0.994,-0.39385 1.96924,-0.33758 2.90697,0.13128 0.61891,0.31883 1.01276,0.71268 1.18155,1.18155 0.16879,0.48762 0.11253,0.994 -0.16879,1.55664 -0.0563,0.0938 -0.15004,0.24381 -0.28132,0.45011 -0.13129,0.18754 -0.30008,0.4126 -0.50638,0.65641 l -4.03225,-2.04426 c -0.075,0.075 -0.13129,0.16879 -0.18755,0.26257 -0.0563,0.075 -0.0938,0.16879 -0.13128,0.24381 -0.24381,0.45011 -0.28132,0.90022 -0.13129,1.29407 0.13129,0.4126 0.46887,0.75019 0.994,1.01
 275 0.35634,0.16879 0.75019,0.30008 1.2003,0.35634 0.45011,0.0375 0.86272,0.0563 1.2003,0 l 0.0563,0.0375 -0.71267,0.88147 c -0.18755,-0.0188 -0.35634,-0.0375 -0.50638,-0.0563 -0.16879,-0.0188 -0.35634,-0.0563 -0.5814,-0.11253 -0.22505,-0.0563 -0.43135,-0.11253 -0.60014,-0.15004 -0.1688,-0.0563 -0.3751,-0.15004 -0.60015,-0.26256 z m 10.20254,-0.63766 -0.0563,-0.0188 c -0.11252,-0.0938 -0.22505,-0.18754 -0.31883,-0.26256 -0.11252,-0.075 -0.24381,-0.15004 -0.39384,-0.22506 -0.31883,-0.15004 -0.65642,-0.24381 -1.03151,-0.26257 -0.35634,0 -0.73143,0.0188 -1.10653,0.0563 l -2.86946,3.4321 -0.93774,-0.46886 4.03226,-4.83871 0.93773,0.48762 -0.60015,0.71268 c 0.5814,-0.0938 1.06902,-0.11253 1.42536,-0.075 0.35634,0.0375 0.67517,0.13128 0.95649,0.26257 0.16879,0.0938 0.28132,0.15003 0.33758,0.18754 0.075,0.0563 0.16879,0.11253 0.30008,0.20631 z m 4.31357,8.04575 c -0.91898,-0.46887 -1.51913,-1.08777 -1.80045,-1.83796 -0.28132,-0.73143 -0.2063,-1.55663 0.24381,-2.41935 0.33759,-0.67517 0.768
 95,-1.25656 1.25657,-1.72543 0.50637,-0.48762 1.05026,-0.84396 1.65041,-1.08777 0.60015,-0.24381 1.23781,-0.35634 1.91297,-0.31883 0.67517,0.0375 1.35034,0.22506 2.02551,0.56264 0.4126,0.2063 0.7877,0.46887 1.10653,0.76894 0.31882,0.28132 0.63765,0.63766 0.91897,1.06902 l -0.78769,0.97524 -0.075,-0.0375 c -0.0563,-0.15004 -0.11253,-0.31883 -0.16879,-0.45011 -0.0563,-0.15004 -0.16879,-0.33758 -0.33759,-0.56264 -0.13128,-0.18755 -0.30007,-0.37509 -0.52513,-0.56264 -0.2063,-0.18755 -0.46886,-0.35634 -0.76894,-0.52513 -0.45011,-0.22506 -0.91898,-0.33759 -1.4066,-0.35634 -0.46887,0 -0.93773,0.11253 -1.38785,0.30007 -0.45011,0.20631 -0.86271,0.48763 -1.27531,0.88147 -0.39385,0.41261 -0.73144,0.88147 -1.01276,1.4066 -0.33758,0.67517 -0.43135,1.29408 -0.28132,1.83796 0.1688,0.56264 0.5814,0.994 1.23781,1.33158 0.30008,0.1688 0.61891,0.28132 0.93774,0.35634 0.31883,0.0563 0.60015,0.11253 0.90022,0.11253 0.26257,0 0.50638,0 0.73143,-0.0375 0.22506,-0.0188 0.41261,-0.0375 0.5814,-0.075 l 0.075
 ,0.0375 -0.80645,0.994 c -0.43136,0 -0.90023,-0.0375 -1.4066,-0.13128 -0.52514,-0.075 -1.03151,-0.26257 -1.53789,-0.50638 z m 11.30907,0.28132 c -0.0375,0.075 -0.0938,0.16879 -0.18755,0.30008 -0.075,0.13128 -0.16879,0.22505 -0.24381,0.31883 l -2.62565,3.13203 -0.91898,-0.46887 2.30682,-2.73818 c 0.13129,-0.1688 0.22506,-0.30008 0.31883,-0.41261 0.0938,-0.11252 0.16879,-0.24381 0.22506,-0.37509 0.15004,-0.26257 0.18755,-0.50638 0.11253,-0.71268 -0.0563,-0.2063 -0.26257,-0.4126 -0.61891,-0.58139 -0.26256,-0.13128 -0.56264,-0.2063 -0.93773,-0.22506 -0.35634,-0.0188 -0.73143,0 -1.12528,0.0563 l -3.0195,3.60089 -0.91898,-0.46886 5.64515,-6.73293 0.91898,0.46887 -2.04426,2.4381 c 0.46887,-0.075 0.90023,-0.0938 1.27532,-0.0563 0.37509,0.0188 0.71268,0.11253 1.03151,0.28132 0.48762,0.24381 0.78769,0.54388 0.93773,0.91898 0.15004,0.37509 0.0938,0.78769 -0.13128,1.25656 z m 4.78244,3.54463 c 0.0563,-0.075 0.11253,-0.15003 0.15004,-0.22505 0.0375,-0.0563 0.075,-0.13128 0.11253,-0.2063 0.18754,
 -0.33759 0.22505,-0.67517 0.13128,-0.95649 -0.0938,-0.30008 -0.33759,-0.54389 -0.76894,-0.76894 -0.45011,-0.22506 -0.93774,-0.30008 -1.44411,-0.18755 -0.50638,0.0938 -0.93774,0.35634 -1.31283,0.75019 z m -3.6009,2.53188 c -0.76894,-0.39384 -1.25656,-0.88146 -1.51912,-1.44411 -0.24382,-0.58139 -0.18755,-1.20029 0.15003,-1.89422 0.52513,-0.99399 1.27532,-1.70667 2.26932,-2.10052 0.994,-0.39385 1.96924,-0.35634 2.88822,0.13128 0.6189,0.31883 1.01275,0.71268 1.2003,1.18155 0.16879,0.48762 0.11252,0.99399 -0.1688,1.55663 -0.0563,0.0938 -0.15003,0.24382 -0.28132,0.45012 -0.13128,0.18754 -0.31883,0.4126 -0.52513,0.65641 l -4.0135,-2.04426 c -0.075,0.075 -0.13128,0.16879 -0.18754,0.26257 -0.0563,0.075 -0.11253,0.16879 -0.15004,0.24381 -0.22506,0.45011 -0.28132,0.88147 -0.13128,1.29407 0.15003,0.4126 0.48762,0.75019 0.99399,1.01275 0.35634,0.16879 0.76895,0.30008 1.21906,0.33759 0.45011,0.0563 0.84396,0.075 1.2003,0.0187 l 0.0563,0.0375 -0.71268,0.88147 c -0.18754,-0.0188 -0.35634,-0.0375 -0
 .52513,-0.0563 -0.15003,-0.0188 -0.33758,-0.0563 -0.56264,-0.11253 -0.22505,-0.0563 -0.43135,-0.11253 -0.60015,-0.16879 -0.16879,-0.0375 -0.37509,-0.13129 -0.60015,-0.24382 z m 5.7952,2.94449 c -0.35634,-0.16879 -0.63766,-0.3751 -0.86272,-0.60015 -0.22505,-0.2063 -0.39385,-0.45011 -0.48762,-0.73143 -0.11253,-0.26257 -0.15004,-0.56264 -0.13128,-0.86272 0.0375,-0.31883 0.13128,-0.63766 0.30007,-0.994 0.26257,-0.50637 0.60015,-0.93773 0.97525,-1.31282 0.39384,-0.35634 0.80645,-0.63766 1.27531,-0.82521 0.45012,-0.16879 0.93774,-0.26256 1.44411,-0.22505 0.52513,0.0187 1.03151,0.15003 1.50038,0.39384 0.31883,0.1688 0.60015,0.35634 0.84396,0.60015 0.24381,0.22506 0.43135,0.45012 0.58139,0.69393 l -0.71268,0.88147 -0.0375,-0.0188 c -0.0375,-0.0938 -0.0938,-0.2063 -0.15004,-0.33758 -0.0563,-0.13129 -0.13128,-0.26257 -0.24381,-0.39385 -0.0938,-0.13128 -0.2063,-0.26257 -0.33759,-0.39385 -0.15003,-0.13128 -0.31883,-0.26257 -0.52513,-0.35634 -0.63766,-0.33758 -1.29407,-0.31883 -1.96924,0 -0.6564
 1,0.33759 -1.2003,0.88147 -1.59415,1.65041 -0.22505,0.45011 -0.28132,0.86272 -0.16879,1.23781 0.0938,0.37509 0.3751,0.65641 0.80645,0.88147 0.2063,0.11253 0.43136,0.18755 0.67517,0.22506 0.22506,0.0563 0.43136,0.075 0.61891,0.075 0.18754,0.0188 0.37509,0.0188 0.56264,0 0.18754,-0.0188 0.31883,-0.0375 0.39384,-0.0375 l 0.0563,0.0187 -0.71268,0.91898 c -0.33758,-0.0188 -0.69392,-0.075 -1.06902,-0.13128 -0.35633,-0.075 -0.71267,-0.18755 -1.0315,-0.35634 z m 7.93322,3.88222 -1.12528,-0.56264 -0.52513,-3.30083 -1.10652,0.24381 -1.31283,1.55664 -0.91898,-0.46887 5.64516,-6.73292 0.91898,0.48762 -3.61966,4.31357 4.5949,-1.12528 1.21905,0.61891 -4.44485,0.99399 z m 8.30832,-0.46887 c -0.26256,0.54388 -0.6189,1.01275 -1.0315,1.4066 -0.39385,0.39385 -0.82521,0.67517 -1.27532,0.86271 -0.45011,0.20631 -0.91898,0.30008 -1.38785,0.31883 -0.48762,0 -0.91898,-0.0938 -1.35033,-0.31883 -0.28132,-0.15003 -0.54389,-0.31883 -0.75019,-0.50637 -0.22506,-0.18755 -0.39385,-0.4126 -0.52513,-0.63766 l -1.7066
 8,2.0255 -0.91898,-0.46886 5.55139,-6.60165 0.91898,0.46887 -0.43136,0.50638 c 0.4126,-0.0563 0.7877,-0.075 1.16279,-0.0563 0.35634,0.0188 0.71268,0.11253 1.06902,0.28132 0.54388,0.28132 0.86271,0.65641 0.97524,1.12528 0.11253,0.46887 0.0188,0.994 -0.30008,1.59415 z m -1.05026,-0.31883 c 0.2063,-0.37509 0.26257,-0.73143 0.2063,-1.03151 -0.0563,-0.28132 -0.28132,-0.52513 -0.65641,-0.71268 -0.26257,-0.15003 -0.5814,-0.22505 -0.93773,-0.24381 -0.35634,0 -0.71268,0 -1.05027,0.0563 l -2.28807,2.71943 c 0.15004,0.22505 0.31883,0.4126 0.46887,0.56264 0.16879,0.16879 0.39385,0.30007 0.65641,0.45011 0.33759,0.16879 0.69393,0.24381 1.05027,0.22505 0.33758,-0.0375 0.67516,-0.13128 0.97524,-0.30007 0.33758,-0.18755 0.6189,-0.4126 0.88147,-0.71268 0.26256,-0.28132 0.50638,-0.6189 0.69392,-1.01275 z m 7.05176,3.63841 c -0.24381,0.48762 -0.56264,0.91897 -0.93773,1.29407 -0.3751,0.37509 -0.7877,0.65641 -1.23781,0.84396 -0.46887,0.2063 -0.91898,0.31883 -1.4066,0.31883 -0.46887,0 -0.93774,-0.13129 -1
 .44411,-0.3751 -0.63766,-0.33758 -1.05027,-0.76894 -1.23781,-1.33158 -0.16879,-0.56264 -0.0938,-1.16279 0.24381,-1.8192 0.24381,-0.48762 0.56264,-0.91898 0.93773,-1.27532 0.3751,-0.37509 0.7877,-0.65641 1.23781,-0.86271 0.45011,-0.18755 0.91898,-0.30008 1.4066,-0.28132 0.48762,0 0.97524,0.11252 1.44411,0.35634 0.63766,0.31882 1.03151,0.75018 1.21905,1.29407 0.20631,0.56264 0.11253,1.16279 -0.22505,1.83796 z m -2.55064,1.29407 c 0.31883,-0.16879 0.60015,-0.39385 0.86272,-0.69392 0.28132,-0.30008 0.50637,-0.63766 0.69392,-1.01276 0.24381,-0.45011 0.30008,-0.86271 0.2063,-1.2003 -0.0938,-0.35633 -0.35634,-0.6189 -0.75018,-0.8252 -0.31883,-0.16879 -0.63766,-0.24381 -0.95649,-0.2063 -0.33759,0.0188 -0.65642,0.11252 -0.97525,0.28132 -0.30007,0.16879 -0.60015,0.4126 -0.86271,0.71267 -0.28132,0.28132 -0.50638,0.61891 -0.69392,0.994 -0.22506,0.45011 -0.30008,0.86272 -0.2063,1.2003 0.0938,0.35634 0.33758,0.63766 0.75018,0.84396 0.31883,0.15004 0.63766,0.22506 0.95649,0.2063 0.33758,-0.0188 0.
 65641,-0.11253 0.97524,-0.30007 z m 7.95198,-3.33833 -0.69392,0.84396 -1.05026,-0.52513 0.71268,-0.84396 z m -1.44411,1.6129 -4.051,4.8387 -0.91898,-0.46886 4.05101,-4.83871 z m 5.45761,4.36984 c -0.0375,0.075 -0.11252,0.16879 -0.18754,0.30007 -0.075,0.13128 -0.16879,0.22506 -0.24381,0.31883 l -2.62566,3.13203 -0.91898,-0.46887 2.30683,-2.73818 c 0.13128,-0.16879 0.22505,-0.30007 0.31883,-0.4126 0.0938,-0.11253 0.16879,-0.24381 0.22505,-0.37509 0.15004,-0.26257 0.18755,-0.50638 0.11253,-0.71268 -0.0563,-0.2063 -0.28132,-0.4126 -0.6189,-0.5814 -0.26257,-0.13128 -0.56264,-0.2063 -0.93774,-0.22505 -0.35633,-0.0188 -0.73143,0 -1.12528,0.0563 l -3.0195,3.6009 -0.91898,-0.46887 4.05101,-4.8387 0.91898,0.46886 -0.45011,0.54389 c 0.46887,-0.075 0.90022,-0.0938 1.27532,-0.0563 0.35634,0.0188 0.71267,0.11253 1.0315,0.28132 0.48762,0.24382 0.7877,0.54389 0.93774,0.91898 0.13128,0.3751 0.0938,0.7877 -0.13129,1.25657 z m 5.53263,1.23781 -0.54388,0.67516 -1.89422,-0.97524 -1.87547,2.23181 c -0.09
 38,0.0938 -0.2063,0.22505 -0.30008,0.39384 -0.11252,0.15004 -0.2063,0.26257 -0.24381,0.3751 -0.13128,0.22505 -0.15004,0.43136 -0.0938,0.6189 0.0563,0.16879 0.26256,0.33759 0.58139,0.50638 0.13129,0.075 0.30008,0.13128 0.50638,0.18755 0.2063,0.0375 0.35634,0.075 0.43136,0.075 l 0.0563,0.0188 -0.58139,0.71268 c -0.2063,-0.0563 -0.41261,-0.11253 -0.63766,-0.18755 -0.22506,-0.075 -0.41261,-0.15004 -0.5814,-0.24381 -0.45011,-0.22506 -0.75018,-0.50638 -0.88147,-0.82521 -0.15003,-0.31883 -0.11252,-0.69392 0.0938,-1.12528 0.0563,-0.11253 0.11252,-0.2063 0.18754,-0.30007 0.075,-0.0938 0.15004,-0.2063 0.24381,-0.31883 l 2.17555,-2.58815 -0.61891,-0.31883 0.54389,-0.65641 0.6189,0.31883 1.16279,-1.38785 0.93773,0.46887 -1.18154,1.38785 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3182"
+         d="m 229.5947,278.93824 -0.90023,-0.6189 1.06902,-2.04426 -2.88822,-2.00675 -2.4006,1.10652 -0.95649,-0.63766 8.66466,-3.95723 1.18155,0.80645 z m 0.60015,-3.46961 1.85671,-3.6009 -4.23856,1.95048 z m 2.36309,5.68266 c -0.31883,-0.22505 -0.5814,-0.45011 -0.76895,-0.69392 -0.2063,-0.24381 -0.33758,-0.50638 -0.39384,-0.80645 -0.075,-0.28132 -0.075,-0.56264 0,-0.88147 0.0563,-0.30008 0.2063,-0.60015 0.43135,-0.91898 0.31883,-0.48762 0.69393,-0.86272 1.12528,-1.18154 0.43136,-0.30008 0.90023,-0.52514 1.38785,-0.63766 0.46887,-0.11253 0.95649,-0.13129 1.46286,-0.0375 0.50638,0.0938 0.97525,0.28132 1.42536,0.60015 0.30007,0.2063 0.54388,0.43135 0.75019,0.69392 0.2063,0.26256 0.37509,0.50638 0.48762,0.76894 l -0.82521,0.7877 -0.0375,-0.0375 c -0.0375,-0.0938 -0.0563,-0.2063 -0.0938,-0.33759 -0.0563,-0.15003 -0.11253,-0.28132 -0.18755,-0.43135 -0.075,-0.15004 -0.18754,-0.30008 -0.30007,-0.45012 -0.11253,-0.15003 -0.28132,-0.28132 -0.46887,-0.4126 -0.58139,-0.4126 -1.23781,-0.48762 
 -1.95048,-0.26256 -0.69393,0.24381 -1.29408,0.71267 -1.7817,1.42535 -0.30007,0.4126 -0.4126,0.82521 -0.35634,1.2003 0.0563,0.37509 0.28132,0.71268 0.69393,0.994 0.18754,0.13128 0.39384,0.22505 0.6189,0.30007 0.22506,0.075 0.43136,0.13129 0.6189,0.16879 0.18755,0.0375 0.3751,0.0563 0.56264,0.0563 0.1688,0.0188 0.30008,0.0188 0.39385,0.0188 l 0.0375,0.0375 -0.80645,0.80645 c -0.35634,-0.075 -0.69392,-0.16879 -1.05026,-0.28132 -0.35634,-0.11252 -0.67517,-0.28132 -0.97524,-0.48762 z m 7.33307,4.91373 -1.0315,-0.71268 -0.075,-3.33833 -1.14404,0.0938 -1.50037,1.36909 -0.84396,-0.60015 6.48911,-5.88897 0.84397,0.5814 -4.16354,3.78844 4.70742,-0.46886 1.12528,0.76894 -4.53863,0.39385 z m 2.17554,1.51912 -1.08777,-0.75018 1.2003,-1.08777 1.08777,0.75018 z m 8.15829,5.81395 c -0.86272,-0.58139 -1.36909,-1.25656 -1.55664,-2.04426 -0.16879,-0.76894 0.0187,-1.57539 0.58139,-2.38184 0.43136,-0.6189 0.91898,-1.12528 1.48162,-1.53788 0.54389,-0.41261 1.14404,-0.69393 1.7817,-0.84396 0.6189,-0.1688 
 1.25656,-0.18755 1.93173,-0.0563 0.65641,0.11253 1.29407,0.39385 1.93173,0.82521 0.37509,0.26256 0.71268,0.56264 0.994,0.90022 0.28132,0.33759 0.52513,0.73143 0.75018,1.18155 l -0.90022,0.88147 -0.075,-0.0563 c -0.0188,-0.16879 -0.0563,-0.31883 -0.0938,-0.48762 -0.0563,-0.15004 -0.13128,-0.35634 -0.26257,-0.58139 -0.11253,-0.20631 -0.26256,-0.41261 -0.45011,-0.63766 -0.18755,-0.22506 -0.4126,-0.41261 -0.69392,-0.61891 -0.41261,-0.28132 -0.86272,-0.46886 -1.33158,-0.52513 -0.48763,-0.075 -0.95649,-0.0375 -1.42536,0.0938 -0.46887,0.13128 -0.93773,0.37509 -1.38785,0.71267 -0.45011,0.33759 -0.84396,0.75019 -1.18154,1.25657 -0.43136,0.6189 -0.60015,1.21905 -0.52513,1.78169 0.075,0.56264 0.43136,1.06902 1.03151,1.48162 0.30007,0.2063 0.58139,0.35634 0.88147,0.46887 0.30007,0.11252 0.60015,0.18754 0.88147,0.24381 0.26256,0.0375 0.50637,0.0563 0.73143,0.0563 0.22505,0.0188 0.43136,0.0188 0.60015,0 l 0.0563,0.0375 -0.91898,0.90022 c -0.43135,-0.0563 -0.90022,-0.16879 -1.38784,-0.31882 -0.506
 38,-0.1688 -0.994,-0.39385 -1.44411,-0.73144 z m 11.15903,1.80045 c -0.0563,0.075 -0.13129,0.16879 -0.22506,0.28132 -0.0938,0.11253 -0.18755,0.2063 -0.28132,0.28132 l -3.0195,2.75694 -0.86272,-0.5814 2.66317,-2.41935 c 0.15003,-0.13128 0.26256,-0.26256 0.37509,-0.35634 0.0938,-0.11253 0.18755,-0.22505 0.26257,-0.33758 0.18754,-0.26257 0.24381,-0.48762 0.2063,-0.71268 -0.0188,-0.2063 -0.2063,-0.43136 -0.54389,-0.65641 -0.22505,-0.15004 -0.52513,-0.26257 -0.88147,-0.33759 -0.35634,-0.0563 -0.73143,-0.0938 -1.12528,-0.0938 l -3.46961,3.15078 -0.84396,-0.58139 6.48911,-5.90772 0.84396,0.58139 -2.34433,2.13804 c 0.46887,0 0.90022,0.0375 1.25656,0.11252 0.3751,0.075 0.69393,0.22506 0.994,0.43136 0.45011,0.30008 0.71268,0.63766 0.80645,1.03151 0.0938,0.39385 -0.0188,0.80645 -0.30007,1.21905 z m 4.25731,4.16354 c 0.075,-0.075 0.13128,-0.13128 0.18754,-0.18755 0.0375,-0.0563 0.0938,-0.13128 0.15004,-0.2063 0.2063,-0.31883 0.30008,-0.63766 0.24381,-0.93773 -0.0375,-0.30008 -0.26256,-0.5814 -0
 .65641,-0.84396 -0.43136,-0.30008 -0.90023,-0.43136 -1.4066,-0.39385 -0.52513,0.0375 -0.994,0.22505 -1.4066,0.58139 z m -3.90097,2.0255 c -0.71268,-0.48762 -1.14404,-1.0315 -1.31283,-1.63165 -0.16879,-0.61891 -0.0375,-1.21906 0.4126,-1.85672 0.63766,-0.91897 1.48162,-1.51912 2.51313,-1.78169 1.05026,-0.24381 1.98799,-0.075 2.85071,0.52513 0.58139,0.39385 0.91898,0.82521 1.03151,1.33158 0.0938,0.48762 -0.0188,0.994 -0.3751,1.51913 -0.075,0.075 -0.18755,0.22506 -0.33758,0.39385 -0.16879,0.18755 -0.3751,0.37509 -0.60015,0.60015 l -3.71343,-2.58815 c -0.0938,0.075 -0.15003,0.1688 -0.22505,0.24381 -0.0563,0.075 -0.11253,0.15004 -0.1688,0.22506 -0.30007,0.43136 -0.39384,0.84396 -0.30007,1.27532 0.075,0.4126 0.37509,0.78769 0.84396,1.12528 0.33758,0.22506 0.71268,0.39385 1.16279,0.50638 0.43136,0.11252 0.82521,0.16879 1.18154,0.16879 l 0.0563,0.0375 -0.82521,0.78769 c -0.18754,-0.0375 -0.35634,-0.0938 -0.50637,-0.13128 -0.15004,-0.0375 -0.33759,-0.0938 -0.56264,-0.18755 -0.20631,-0.075 -0.
 41261,-0.15003 -0.56264,-0.22505 -0.1688,-0.075 -0.35634,-0.18755 -0.56264,-0.33759 z m 5.34508,3.69467 c -0.31883,-0.22505 -0.5814,-0.45011 -0.76894,-0.69392 -0.18755,-0.24381 -0.31883,-0.52513 -0.39385,-0.80645 -0.075,-0.28132 -0.075,-0.58139 0,-0.88147 0.0563,-0.30007 0.2063,-0.6189 0.43136,-0.93773 0.31882,-0.46887 0.71267,-0.86272 1.12528,-1.16279 0.43135,-0.30008 0.90022,-0.52513 1.38784,-0.63766 0.46887,-0.13128 0.95649,-0.13128 1.46287,-0.0563 0.50637,0.0938 0.99399,0.30008 1.42535,0.60015 0.30008,0.20631 0.54389,0.45012 0.75019,0.71268 0.2063,0.26257 0.37509,0.50638 0.48762,0.76894 l -0.80645,0.7877 -0.0563,-0.0375 c -0.0188,-0.0938 -0.0563,-0.22506 -0.0938,-0.35634 -0.0375,-0.13128 -0.11253,-0.26256 -0.18755,-0.4126 -0.075,-0.15004 -0.16879,-0.30008 -0.30007,-0.45011 -0.11253,-0.15004 -0.26257,-0.30008 -0.46887,-0.43136 -0.58139,-0.39385 -1.21905,-0.48762 -1.93173,-0.24381 -0.71268,0.24381 -1.31283,0.71268 -1.80045,1.4066 -0.30007,0.43136 -0.4126,0.8252 -0.35634,1.21905 0.
 0563,0.3751 0.30008,0.69392 0.69393,0.97524 0.18754,0.13129 0.39385,0.24382 0.63766,0.31883 0.22505,0.075 0.4126,0.13129 0.60015,0.1688 0.18754,0.0188 0.37509,0.0375 0.56264,0.0563 0.16879,0.0188 0.30007,0.0188 0.39384,0.0188 l 0.0563,0.0375 -0.82521,0.7877 c -0.35634,-0.0563 -0.69392,-0.15004 -1.05026,-0.26256 -0.35634,-0.11253 -0.67517,-0.28132 -0.97524,-0.48763 z m 7.33307,4.91373 -1.0315,-0.71268 -0.075,-3.33833 -1.14404,0.0938 -1.50037,1.36909 -0.84396,-0.60015 6.48912,-5.90772 0.86271,0.60015 -4.18229,3.78845 4.70742,-0.48763 1.12528,0.7877 -4.53863,0.37509 z m 8.30832,0.65641 c -0.33758,0.48762 -0.75018,0.91898 -1.20029,1.23781 -0.46887,0.33758 -0.93774,0.56264 -1.40661,0.69392 -0.46886,0.15004 -0.93773,0.18755 -1.4066,0.13129 -0.46886,-0.0563 -0.90022,-0.22506 -1.29407,-0.48763 -0.26256,-0.18754 -0.48762,-0.39384 -0.67517,-0.6189 -0.18754,-0.2063 -0.33758,-0.45011 -0.43135,-0.69392 l -1.96924,1.78169 -0.84396,-0.58139 6.37658,-5.81395 0.84396,0.60015 -0.48762,0.43135 c 0.412
 61,0 0.80645,0.0375 1.16279,0.0938 0.35634,0.075 0.69393,0.2063 1.03151,0.45011 0.48762,0.33758 0.75019,0.75019 0.80645,1.21905 0.0375,0.48762 -0.13128,0.994 -0.50638,1.55664 z m -0.99399,-0.45011 c 0.24381,-0.37509 0.35634,-0.69392 0.33758,-0.994 -0.0188,-0.30007 -0.2063,-0.58139 -0.54388,-0.8252 -0.26257,-0.1688 -0.56264,-0.28132 -0.91898,-0.33759 -0.33759,-0.075 -0.69393,-0.0938 -1.03151,-0.11253 l -2.64441,2.4006 c 0.13128,0.24381 0.26257,0.45011 0.39385,0.63766 0.15004,0.16879 0.33758,0.33758 0.60015,0.52513 0.30007,0.2063 0.63766,0.31883 0.994,0.33759 0.35633,0.0375 0.69392,-0.0188 1.01275,-0.15004 0.35634,-0.13128 0.67517,-0.33759 0.97524,-0.5814 0.30008,-0.24381 0.5814,-0.56264 0.82521,-0.90022 z m 6.48911,4.53863 c -0.30007,0.45011 -0.67516,0.82521 -1.10652,1.14403 -0.43136,0.33759 -0.86272,0.56265 -1.33158,0.69393 -0.48763,0.13128 -0.95649,0.16879 -1.42536,0.11253 -0.46887,-0.0563 -0.93773,-0.24381 -1.38784,-0.56264 -0.60015,-0.41261 -0.95649,-0.91898 -1.05027,-1.48162 -0.
 0938,-0.5814 0.0563,-1.16279 0.46887,-1.76294 0.31883,-0.45011 0.69392,-0.84396 1.10653,-1.16279 0.43135,-0.30008 0.88147,-0.54389 1.35033,-0.67517 0.46887,-0.13128 0.93774,-0.16879 1.42536,-0.0938 0.50637,0.0563 0.95649,0.24381 1.38784,0.54388 0.5814,0.39385 0.93774,0.88147 1.05026,1.46287 0.11253,0.56264 -0.0563,1.16279 -0.48762,1.78169 z m -2.70067,0.93773 c 0.33758,-0.11252 0.65641,-0.31883 0.95649,-0.56264 0.30007,-0.26256 0.58139,-0.56264 0.8252,-0.91898 0.28132,-0.43135 0.41261,-0.80645 0.35634,-1.16279 -0.0563,-0.35633 -0.26256,-0.67516 -0.6189,-0.91897 -0.30008,-0.20631 -0.61891,-0.31883 -0.93773,-0.33759 -0.33759,-0.0188 -0.65642,0.0375 -0.994,0.15004 -0.33759,0.13128 -0.65642,0.31883 -0.95649,0.58139 -0.31883,0.26257 -0.5814,0.56264 -0.82521,0.90023 -0.28132,0.4126 -0.4126,0.80645 -0.35633,1.18154 0.0375,0.35634 0.26256,0.65642 0.6189,0.91898 0.30007,0.2063 0.60015,0.31883 0.93773,0.33759 0.31883,0.0188 0.65642,-0.0375 0.994,-0.1688 z m 8.34583,-2.2318 -0.82521,0.73143 -0
 .95648,-0.65641 0.80645,-0.75019 z m -1.66917,1.4066 -4.65115,4.23855 -0.86272,-0.60015 4.66991,-4.23855 z m 4.81995,5.045 c -0.0375,0.075 -0.11252,0.1688 -0.22505,0.28132 -0.0938,0.11253 -0.18755,0.20631 -0.28132,0.28132 l -3.0195,2.75694 -0.84396,-0.60015 2.6444,-2.4006 c 0.15004,-0.15003 0.28132,-0.26256 0.3751,-0.37509 0.11253,-0.0938 0.18754,-0.2063 0.28132,-0.31883 0.16879,-0.26256 0.24381,-0.48762 0.2063,-0.71268 -0.0375,-0.2063 -0.22506,-0.43135 -0.54389,-0.65641 -0.22505,-0.15004 -0.52513,-0.26257 -0.88147,-0.33758 -0.37509,-0.0563 -0.75018,-0.0938 -1.12528,-0.11253 l -3.48836,3.16954 -0.84397,-0.5814 4.66992,-4.23855 0.84396,0.58139 -0.52513,0.46887 c 0.48762,0 0.90022,0.0375 1.27532,0.11253 0.35633,0.075 0.69392,0.2063 0.99399,0.4126 0.43136,0.31883 0.69393,0.65641 0.7877,1.05026 0.0938,0.39385 0,0.80645 -0.30008,1.21905 z m 5.32633,1.988 -0.6189,0.58139 -1.76294,-1.21905 -2.15679,1.95049 c -0.11253,0.0938 -0.22506,0.2063 -0.35634,0.35634 -0.13128,0.13128 -0.22505,0.24381
  -0.28132,0.33758 -0.15004,0.2063 -0.2063,0.4126 -0.16879,0.60015 0.0375,0.18755 0.2063,0.37509 0.50638,0.58139 0.11252,0.0938 0.28132,0.1688 0.46886,0.24382 0.2063,0.075 0.33759,0.13128 0.41261,0.15003 l 0.0563,0.0188 -0.65641,0.6189 c -0.20631,-0.075 -0.41261,-0.15004 -0.61891,-0.26256 -0.2063,-0.11253 -0.39385,-0.20631 -0.54388,-0.31883 -0.41261,-0.28132 -0.67517,-0.60015 -0.76895,-0.93774 -0.0938,-0.33758 -0.0187,-0.69392 0.26257,-1.10652 0.0563,-0.0938 0.13128,-0.16879 0.2063,-0.26257 0.0938,-0.0938 0.18755,-0.18755 0.28132,-0.28132 l 2.51313,-2.26931 -0.5814,-0.41261 0.63766,-0.58139 0.5814,0.4126 1.33158,-1.21905 0.86271,0.58139 -1.35033,1.21906 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3184"
+         d="m 410.35223,475.95607 c 0,-3.86346 14.96623,-6.99549 33.43958,-6.99549 18.47335,0 33.43958,3.13203 33.43958,6.99549 l 0,28.03824 c 0,3.86346 -14.96623,6.99549 -33.43958,6.99549 -18.47335,0 -33.43958,-3.13203 -33.43958,-6.99549 z"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3186"
+         d="m 477.23139,475.95607 c 0,3.88222 -14.96623,7.01425 -33.43958,7.01425 -18.47335,0 -33.43958,-3.13203 -33.43958,-7.01425"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3188"
+         d="m 410.35223,475.95607 c 0,-3.86346 14.96623,-6.99549 33.43958,-6.99549 18.47335,0 33.43958,3.13203 33.43958,6.99549 l 0,28.03824 c 0,3.86346 -14.96623,6.99549 -33.43958,6.99549 -18.47335,0 -33.43958,-3.13203 -33.43958,-6.99549 z"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3190"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="526.71808"
+         x="402.59354"
+         xml:space="preserve">(snapshot store)</text>
+      <path
+         id="path3192"
+         d="m 358.25175,418.11666 -1.4066,8.73968 0,0 -0.67517,4.33233 0,-0.0188 -0.60015,4.27606 0,0 -0.54388,4.20105 0,0 -0.45012,4.10727 0,-0.0188 -0.33758,4.0135 0,-0.0188 -0.22506,3.88222 0,-0.0188 -0.075,3.73218 0,-0.0188 0.075,3.58214 0,-0.0375 0.26257,3.3946 -0.0188,-0.0375 0.46887,3.2258 0,-0.0375 0.30008,1.51913 -0.0188,-0.0375 0.37509,1.48162 -0.0187,-0.0375 0.43136,1.42536 -0.0188,-0.0375 0.48762,1.35033 -0.0188,-0.0188 0.54389,1.27532 -0.0188,-0.0375 0.60015,1.23781 -0.0187,-0.0375 0.67516,1.16279 -0.0187,-0.0375 0.75019,1.08777 -0.0375,-0.0375 0.8252,1.01275 -0.0375,-0.0375 0.88147,0.95649 -0.0375,-0.0375 0.95649,0.88147 -0.0375,-0.0188 1.01275,0.82521 -0.0375,-0.0375 1.06902,0.76894 -0.0375,-0.0187 1.14404,0.69392 -0.0375,-0.0188 1.2003,0.63765 -0.0375,-0.0188 1.25656,0.5814 -0.0375,-0.0188 2.64441,1.01275 -0.0375,-0.0188 2.83196,0.80645 -0.0375,0 3.0195,0.63766 -0.0375,-0.0188 3.18829,0.48762 -0.0188,0 3.31958,0.31883 -0.0188,0 3.46962,0.22506 -0.0375,0 3.56338,0.093
 8 -0.0188,0 3.65716,0.0188 -0.0187,0 3.71342,-0.0563 0,0 3.78844,-0.11252 -0.0188,0 3.88222,-0.15004 0.0563,1.23781 -3.88222,0.16879 -3.78844,0.0938 -3.73218,0.0563 -3.67592,0 -3.56338,-0.11253 -3.48837,-0.2063 -3.35709,-0.33759 -3.20705,-0.48762 -3.07576,-0.63766 -2.88822,-0.8252 -2.70068,-1.03151 -1.29407,-0.60015 -1.21905,-0.65641 -1.16279,-0.71268 -1.10653,-0.7877 -1.05026,-0.84396 -0.97524,-0.91898 -0.91898,-0.97524 -0.84396,-1.06902 -0.76894,-1.12528 -0.69393,-1.20029 -0.6189,-1.25657 -0.56264,-1.33158 -0.48762,-1.36909 -0.43136,-1.44411 -0.37509,-1.50037 -0.30008,-1.55664 -0.46886,-3.24456 -0.26257,-3.45086 -0.075,-3.61965 0.075,-3.75094 0.22506,-3.90097 0.35634,-4.03225 0.45011,-4.12603 0.54388,-4.2198 0.60015,-4.27607 0.67517,-4.33233 1.42536,-8.75843 1.21905,0.2063 z m 44.03597,65.11623 5.08252,2.28807 -4.89497,2.70067 -0.18755,-4.98874 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3194"
+         d="m 247.59918,420.55477 -0.37509,4.87622 0,-0.0188 -0.26257,4.85746 0,-0.0188 -0.0938,4.8387 0,-0.0187 0.0938,4.80119 0,-0.0187 0.13129,2.38184 0,-0.0187 0.18754,2.36309 0,-0.0188 0.24381,2.34434 0,-0.0188 0.30008,2.30683 0,-0.0188 0.37509,2.26932 0,-0.0188 0.45011,2.25056 0,-0.0188 0.52514,2.21305 0,-0.0188 0.6189,2.1943 -0.0187,-0.0375 0.71267,2.13803 -0.0188,-0.0187 0.78769,2.08177 -0.0187,-0.0188 0.88147,2.04426 -0.0188,-0.0188 0.97525,1.98799 -0.0188,-0.0188 1.08777,1.95048 -0.0188,-0.0375 1.16279,1.89423 -0.0188,-0.0188 1.29408,1.81921 -0.0188,-0.0188 1.38785,1.76294 -0.0188,-0.0188 1.50038,1.70667 -0.0188,-0.0188 1.63166,1.65041 -0.0375,-0.0375 1.76294,1.57539 -0.0375,-0.0188 1.87547,1.50037 -0.0188,-0.0188 1.988,1.42536 -0.0188,-0.0188 2.13803,1.33159 -0.0188,-0.0188 2.26932,1.27532 -0.0375,-0.0188 2.41935,1.18155 -0.0188,-0.0188 2.53188,1.10653 -0.0187,-0.0188 2.64441,1.01275 -0.0188,-0.0188 2.7757,0.93774 -0.0375,0 2.86946,0.84396 -0.0188,0 2.98199,0.76894 -0.018
 8,-0.0188 3.09453,0.71268 -0.0188,0 3.1883,0.61891 0,0 3.30082,0.56264 -0.0188,0 3.3946,0.48762 0,0 3.50712,0.45011 -0.0188,0 3.61965,0.37509 -0.0188,0 3.73218,0.31883 -0.0187,0 3.82595,0.28132 0,0 3.91973,0.22506 0,0 4.03225,0.16879 0,0 4.12603,0.15004 0,0 4.23856,0.0938 -0.0188,0 4.35109,0.0563 0,0 4.4261,0.0188 0,0 4.53863,-0.0188 0,0 4.6324,-0.0375 0,0 4.72618,-0.0563 0,0 4.83871,-0.0938 0,0 4.95123,-0.0938 0,0 5.02625,-0.13129 5.13878,-0.13128 5.2138,-0.15004 5.32633,-0.15003 5.4201,-0.15004 5.51387,-0.16879 5.60765,-0.1688 1.96924,-0.0563 0.0188,1.25656 -1.95049,0.0563 -5.60765,0.15003 -5.51387,0.1688 -5.4201,0.16879 -5.32633,0.15004 -5.23255,0.15003 -5.12003,0.13129 -5.045,0.11252 -4.93248,0.11253 -4.85746,0.075 -4.72618,0.075 -4.65116,0.0375 -4.53863,0 -4.44486,-0.0188 -4.35108,-0.0563 -4.23856,-0.0938 -4.14478,-0.13129 -4.03225,-0.18754 -3.93849,-0.22506 -3.8447,-0.26256 -3.73218,-0.31883 -3.61965,-0.39385 -3.52588,-0.45011 -3.41335,-0.48763 -3.31958,-0.56264 -3.20705,-0.63
 765 -3.11328,-0.71268 -3.00074,-0.76894 -2.88822,-0.86272 -2.79445,-0.93773 -2.68192,-1.01275 -2.55063,-1.10653 -2.45686,-1.18154 -2.28807,-1.29408 -2.1943,-1.35033 -2.02551,-1.44411 -1.89422,-1.53789 -1.78169,-1.59414 -1.65041,-1.66917 -1.53789,-1.74418 -1.4066,-1.80045 -1.31282,-1.85671 -1.18155,-1.91298 -1.10652,-1.988 -0.97525,-2.0255 -0.88147,-2.08177 -0.78769,-2.11928 -0.71268,-2.15679 -0.6189,-2.21305 -0.54389,-2.2318 -0.45011,-2.26932 -0.39385,-2.30682 -0.31883,-2.32558 -0.24381,-2.34434 -0.18755,-2.38184 -0.11252,-2.4006 -0.0938,-4.8387 0.0938,-4.85746 0.24381,-4.87622 0.39384,-4.87621 1.23781,0.0938 z m 153.61952,70.33002 5.06377,2.36309 -4.93248,2.64441 -0.13129,-5.0075 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3196"
+         d="m 104.81986,419.2607 0.0188,3.0195 0,-0.0375 0.15004,2.98199 0,-0.0375 0.30007,2.94449 -0.0187,-0.0375 0.46886,2.90697 0,-0.0188 0.60015,2.85071 -0.0187,-0.0375 0.75018,2.83195 0,-0.0375 0.90023,2.77569 0,-0.0188 1.05026,2.73818 -0.0187,-0.0375 1.20029,2.68191 -0.0187,-0.0188 1.35034,2.64441 -0.0188,-0.0375 1.50038,2.6069 -0.0188,-0.0375 1.65041,2.55063 -0.0188,-0.0187 1.8192,2.49437 -0.0187,-0.0187 1.95048,2.4381 -0.0187,-0.0187 2.10052,2.38184 -0.0187,-0.0187 2.25056,2.34433 -0.0188,-0.0188 2.4006,2.28807 -0.0188,-0.0188 2.55064,2.21305 -0.0187,0 2.70067,2.17555 -0.0188,-0.0188 2.86947,2.10052 -0.0188,-0.0188 3.00075,2.04426 -0.0188,0 3.15078,1.98799 -0.0188,-0.0188 3.31957,1.93173 -0.0188,-0.0188 3.45086,1.85671 -0.0188,0 3.61966,1.78169 -0.0188,0 3.75094,1.72543 -0.0188,-0.0188 3.91973,1.66916 -0.0188,-0.0188 4.06977,1.59415 -0.0188,0 4.2198,1.51912 -0.0187,-0.0188 4.36984,1.44411 -0.0188,0 4.51988,1.36909 -0.0188,0 4.66992,1.29407 -0.0188,0 9.62115,2.28807 -0.0375,0
  9.92122,1.89422 -0.0375,-0.0187 10.27756,1.51913 -0.0187,0 10.70892,1.2003 -0.0188,0 11.19654,0.88147 -0.0188,0 5.8327,0.33758 0,0 5.96398,0.28132 0,0 6.13278,0.22506 -0.0187,0 6.32032,0.18754 0,0 6.48912,0.11253 -0.0188,0 6.67667,0.0938 0,0 6.88296,0.0375 7.08927,0.0188 -0.0188,0 7.31433,-0.0188 7.52062,-0.0375 0,0 7.76443,-0.0563 8.00825,-0.075 8.27081,-0.0938 8.51462,-0.0938 8.79594,-0.0938 9.07726,-0.0938 9.37734,-0.075 9.65866,-0.075 9.97748,-0.0563 10.27756,-0.0375 10.61515,0 7.20179,0 0,1.25656 -7.20179,-0.0188 -10.59639,0.0188 -10.29632,0.0375 -9.97748,0.0563 -9.65866,0.0563 -9.35858,0.0938 -9.07726,0.0938 -8.79594,0.0938 -8.53338,0.0938 -8.25206,0.075 -8.00824,0.075 -7.76444,0.075 -7.53937,0.0375 -7.31433,0.0188 -7.08926,-0.0188 -6.86421,-0.0375 -6.69542,-0.0938 -6.48912,-0.13128 -6.32032,-0.16879 -6.13278,-0.22506 -5.98274,-0.28132 -5.8327,-0.35633 -11.2153,-0.88147 -10.72767,-1.18155 -10.29631,-1.53788 -9.93998,-1.89422 -9.6399,-2.30683 -4.68867,-1.29407 -4.53863,-1.3690
 9 -4.36984,-1.46287 -4.23856,-1.51912 -4.08851,-1.59415 -3.91973,-1.66917 -3.76969,-1.72543 -3.63841,-1.80045 -3.46961,-1.85671 -3.33833,-1.95048 -3.16954,-1.988 -3.0195,-2.06301 -2.88822,-2.11928 -2.71943,-2.1943 -2.56939,-2.2318 -2.43811,-2.30683 -2.26931,-2.36309 -2.11928,-2.41935 -1.96924,-2.45686 -1.81921,-2.53188 -1.66916,-2.56939 -1.51913,-2.62566 -1.36909,-2.68192 -1.2003,-2.71942 -1.06902,-2.75694 -0.91898,-2.8132 -0.75018,-2.85071 -0.61891,-2.90698 -0.46886,-2.92572 -0.30008,-2.982 -0.15004,-3.00074 0,-3.03826 1.23781,0 z m 298.19929,77.96317 4.98874,2.51313 -5.00749,2.49437 0.0188,-5.0075 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3198"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="468.82831"
+         x="182.75459"
+         xml:space="preserve">store state</text>
+      <text
+         id="text3200"
+         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="482.33167"
+         x="184.85512"
+         xml:space="preserve">snapshots</text>
+    </g>
+  </g>
+</svg>


[69/89] [abbrv] flink git commit: [FLINK-4273] Modify JobClient to attach to running jobs

Posted by se...@apache.org.
[FLINK-4273] Modify JobClient to attach to running jobs

These changes are required for FLINK-4272 (introduce a JobClient class
for job control). Essentially, we want to be able to re-attach to a
running job and monitor it. It shouldn't make any difference whether we
have just submitted the job or are recovering it from an existing JobID.

This PR modifies the JobClientActor to support two different operation
modes: a) submit a job and monitor it, and b) re-attach to a running job and monitor it

The JobClient class has been updated with methods to access this
functionality. Before the class just had `submitJobAndWait` and
`submitJobDetached`. Now, it has the additional methods `submitJob`,
`attachToRunningJob`, and `awaitJobResult`.

The job submission has been split up into two phases:

1a. submitJob(..)
Submits the job and returns a future whose result can be retrieved
with `awaitJobResult`

1b. attachToRunningJob(..)
Re-attaches to a running job, reconstructs its class loader, and returns a
future whose result can be retrieved with `awaitJobResult`

2. awaitJobResult(..)
Blocks until the future returned by either `submitJob` or
`attachToRunningJob` has completed
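
In code, the two phases look roughly like this (a sketch for illustration
only, not taken verbatim from the patch; `actorSystem`,
`leaderRetrievalService`, `jobGraph`, `jobID`, `jobManagerGateway`,
`config`, `timeout`, and `classLoader` are assumed to be set up by the
caller):

    // phase 1a: non-blocking submission of a new job
    JobListeningContext submitContext = JobClient.submitJob(
        actorSystem, leaderRetrievalService, jobGraph,
        timeout, true, classLoader);

    // phase 1b (alternative): re-attach to an already running job
    JobListeningContext attachContext = JobClient.attachToRunningJob(
        jobID, jobManagerGateway, config, actorSystem,
        leaderRetrievalService, timeout, true);

    // phase 2: block until the monitored job has completed
    JobExecutionResult result = JobClient.awaitJobResult(submitContext);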

- split up JobClientActor into a base class and two implementations
- JobClient: check the JobClientActor's liveness while waiting
- lazily reconstruct user class loader
- add additional tests for JobClientActor
- add test case to test resuming of jobs

This closes #2313


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/259a3a55
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/259a3a55
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/259a3a55

Branch: refs/heads/flip-6
Commit: 259a3a5569952458140afc8e9ad96eac0c330162
Parents: 444315a
Author: Maximilian Michels <mx...@apache.org>
Authored: Thu Aug 18 16:04:35 2016 +0200
Committer: Maximilian Michels <mx...@apache.org>
Committed: Thu Aug 25 15:46:15 2016 +0200

----------------------------------------------------------------------
 .../flink/client/program/ClusterClient.java     |  41 ++-
 .../src/test/resources/log4j-test.properties    |   2 +-
 .../flink/api/common/JobExecutionResult.java    |   4 +-
 .../client/JobAttachmentClientActor.java        | 171 +++++++++++
 .../apache/flink/runtime/client/JobClient.java  | 292 +++++++++++++++----
 .../flink/runtime/client/JobClientActor.java    | 281 ++++++------------
 ...ClientActorRegistrationTimeoutException.java |  35 +++
 .../runtime/client/JobListeningContext.java     | 145 +++++++++
 .../runtime/client/JobRetrievalException.java   |  42 +++
 .../client/JobSubmissionClientActor.java        | 192 ++++++++++++
 .../runtime/executiongraph/ExecutionGraph.java  |   1 +
 .../flink/runtime/jobmanager/JobInfo.scala      |  62 +++-
 .../flink/runtime/jobmanager/JobManager.scala   | 161 +++++-----
 .../runtime/messages/JobClientMessages.scala    |  23 +-
 .../runtime/messages/JobManagerMessages.scala   |  48 ++-
 .../testingUtils/TestingJobManagerLike.scala    |  12 +-
 .../TestingJobManagerMessages.scala             |   6 +
 .../runtime/client/JobClientActorTest.java      | 190 +++++++++++-
 .../ZooKeeperSubmittedJobGraphsStoreITCase.java |   3 +-
 .../clients/examples/JobRetrievalITCase.java    | 138 +++++++++
 20 files changed, 1499 insertions(+), 350 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
----------------------------------------------------------------------
diff --git a/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java b/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
index c3c666b..292da70 100644
--- a/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
+++ b/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
@@ -47,6 +47,8 @@ import org.apache.flink.core.fs.Path;
 import org.apache.flink.runtime.akka.AkkaUtils;
 import org.apache.flink.runtime.client.JobClient;
 import org.apache.flink.runtime.client.JobExecutionException;
+import org.apache.flink.runtime.client.JobListeningContext;
+import org.apache.flink.runtime.client.JobRetrievalException;
 import org.apache.flink.runtime.clusterframework.messages.GetClusterStatusResponse;
 import org.apache.flink.runtime.instance.ActorGateway;
 import org.apache.flink.runtime.jobgraph.JobGraph;
@@ -429,6 +431,39 @@ public abstract class ClusterClient {
 	}
 
 	/**
+	 * Reattaches to a running job from the supplied job id
+	 * @param jobID The job id of the job to attach to
+	 * @return The JobExecutionResult for the jobID
+	 * @throws JobExecutionException if an error occurs during monitoring the job execution
+	 */
+	public JobExecutionResult retrieveJob(JobID jobID) throws JobExecutionException {
+		final LeaderRetrievalService leaderRetrievalService;
+		try {
+			leaderRetrievalService = LeaderRetrievalUtils.createLeaderRetrievalService(flinkConfig);
+		} catch (Exception e) {
+			throw new JobRetrievalException(jobID, "Could not create the leader retrieval service", e);
+		}
+
+		ActorGateway jobManagerGateway;
+		try {
+			jobManagerGateway = getJobManagerGateway();
+		} catch (Exception e) {
+			throw new JobRetrievalException(jobID, "Could not retrieve the JobManager Gateway", e);
+		}
+
+		final JobListeningContext listeningContext = JobClient.attachToRunningJob(
+				jobID,
+				jobManagerGateway,
+				flinkConfig,
+				actorSystemLoader.get(),
+				leaderRetrievalService,
+				timeout,
+				printStatusDuringExecution);
+
+		return JobClient.awaitJobResult(listeningContext);
+	}
+
+	/**
 	 * Cancels a job identified by the job id.
 	 * @param jobId the job id
 	 * @throws Exception In case an error occurred.
@@ -446,11 +481,11 @@ public abstract class ClusterClient {
 		final Object result = Await.result(response, timeout);
 
 		if (result instanceof JobManagerMessages.CancellationSuccess) {
-			LOG.info("Job cancellation with ID " + jobId + " succeeded.");
+			logAndSysout("Job cancellation with ID " + jobId + " succeeded.");
 		} else if (result instanceof JobManagerMessages.CancellationFailure) {
 			final Throwable t = ((JobManagerMessages.CancellationFailure) result).cause();
-			LOG.info("Job cancellation with ID " + jobId + " failed.", t);
-			throw new Exception("Failed to cancel the job because of \n" + t.getMessage());
+			logAndSysout("Job cancellation with ID " + jobId + " failed because of " + t.getMessage());
+			throw new Exception("Failed to cancel the job with id " + jobId, t);
 		} else {
 			throw new Exception("Unknown message received while cancelling: " + result.getClass().getName());
 		}
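
As a hedged usage sketch of the new re-attach entry point (here `client`
stands for any concrete ClusterClient instance and `jobID` for a previously
submitted job; both are illustrative, not part of the patch):

    // blocks until the re-attached job finishes, then returns its result
    JobExecutionResult result = client.retrieveJob(jobID);
    System.out.println("Net runtime: " + result.getNetRuntime() + " ms");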

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-clients/src/test/resources/log4j-test.properties
----------------------------------------------------------------------
diff --git a/flink-clients/src/test/resources/log4j-test.properties b/flink-clients/src/test/resources/log4j-test.properties
index 85897b3..5100c1f 100644
--- a/flink-clients/src/test/resources/log4j-test.properties
+++ b/flink-clients/src/test/resources/log4j-test.properties
@@ -27,4 +27,4 @@ log4j.appender.testlogger.layout=org.apache.log4j.PatternLayout
 log4j.appender.testlogger.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
 
 # suppress the irrelevant (wrong) warnings from the netty channel handler
-log4j.logger.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, testlogger
\ No newline at end of file
+log4j.logger.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, testlogger

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-core/src/main/java/org/apache/flink/api/common/JobExecutionResult.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/api/common/JobExecutionResult.java b/flink-core/src/main/java/org/apache/flink/api/common/JobExecutionResult.java
index bc5ae09..cb4ecc5 100644
--- a/flink-core/src/main/java/org/apache/flink/api/common/JobExecutionResult.java
+++ b/flink-core/src/main/java/org/apache/flink/api/common/JobExecutionResult.java
@@ -34,7 +34,7 @@ public class JobExecutionResult extends JobSubmissionResult {
 
 	private long netRuntime;
 
-	private Map<String, Object> accumulatorResults = Collections.emptyMap();
+	private final Map<String, Object> accumulatorResults;
 
 	/**
 	 * Creates a new JobExecutionResult.
@@ -49,6 +49,8 @@ public class JobExecutionResult extends JobSubmissionResult {
 
 		if (accumulators != null) {
 			this.accumulatorResults = accumulators;
+		} else {
+			this.accumulatorResults = Collections.emptyMap();
 		}
 	}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobAttachmentClientActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobAttachmentClientActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobAttachmentClientActor.java
new file mode 100644
index 0000000..5446002
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobAttachmentClientActor.java
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.client;
+
+import akka.actor.ActorRef;
+import akka.actor.Props;
+import akka.actor.Status;
+import akka.dispatch.Futures;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.runtime.akka.ListeningBehaviour;
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
+import org.apache.flink.runtime.messages.JobClientMessages;
+import org.apache.flink.runtime.messages.JobClientMessages.AttachToJobAndWait;
+import org.apache.flink.runtime.messages.JobManagerMessages;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.concurrent.Callable;
+
+
+/**
+ * Actor which handles the job attachment process and provides job updates until completion.
+ */
+public class JobAttachmentClientActor extends JobClientActor {
+
+	/** JobID to attach to when the JobClientActor retrieves a job */
+	private JobID jobID;
+	/** true if a RegisterJobClientSuccess message has been received */
+	private boolean successfullyRegisteredForJob = false;
+
+	public JobAttachmentClientActor(
+			LeaderRetrievalService leaderRetrievalService,
+			FiniteDuration timeout,
+			boolean sysoutUpdates) {
+		super(leaderRetrievalService, timeout, sysoutUpdates);
+	}
+
+	@Override
+	public void connectedToJobManager() {
+		if (jobID != null && !successfullyRegisteredForJob) {
+			tryToAttachToJob();
+		}
+	}
+
+	@Override
+	protected Class getClientMessageClass() {
+		return AttachToJobAndWait.class;
+	}
+
+	@Override
+	public void handleCustomMessage(Object message) {
+		if (message instanceof AttachToJobAndWait) {
+			// sanity check that no job registration was performed through this actor before -
+			// it is a one-shot actor after all
+			if (this.client == null) {
+				jobID = ((AttachToJobAndWait) message).jobID();
+				if (jobID == null) {
+					LOG.error("Received null JobID");
+					sender().tell(
+						decorateMessage(new Status.Failure(new Exception("JobID is null"))),
+						getSelf());
+				} else {
+					LOG.info("Received JobID {}.", jobID);
+
+					this.client = getSender();
+
+					// is only successful if we already know the job manager leader
+					if (jobManager != null) {
+						tryToAttachToJob();
+					}
+				}
+			} else {
+				// repeated submission - tell failure to sender and kill self
+				String msg = "Received repeated 'AttachToJobAndWait'";
+				LOG.error(msg);
+				getSender().tell(
+					decorateMessage(new Status.Failure(new Exception(msg))), ActorRef.noSender());
+
+				terminate();
+			}
+		}
+		else if (message instanceof JobManagerMessages.RegisterJobClientSuccess) {
+			// job registration was successful :o)
+			JobManagerMessages.RegisterJobClientSuccess msg = ((JobManagerMessages.RegisterJobClientSuccess) message);
+			logAndPrintMessage("Successfully registered at the JobManager for Job " + msg.jobId());
+			successfullyRegisteredForJob = true;
+		}
+		else if (message instanceof JobManagerMessages.JobNotFound) {
+			LOG.info("Couldn't register JobClient for JobID {}",
+				((JobManagerMessages.JobNotFound) message).jobID());
+			client.tell(decorateMessage(message), getSelf());
+			terminate();
+		}
+		else if (JobClientMessages.getRegistrationTimeout().equals(message)) {
+			// check if our registration for a job was successful in the meantime
+			if (!successfullyRegisteredForJob) {
+				if (isClientConnected()) {
+					client.tell(
+						decorateMessage(new Status.Failure(
+							new JobClientActorRegistrationTimeoutException("Registration for Job at the JobManager " +
+								"timed out. " +	"You may increase '" + ConfigConstants.AKKA_CLIENT_TIMEOUT +
+								"' in case the JobManager needs more time to confirm the job client registration."))),
+						getSelf());
+				}
+
+				// We haven't heard back from the job manager after attempting registration for a job
+				// therefore terminate
+				terminate();
+			}
+		} else {
+			LOG.error("{} received unknown message: {}", getClass(), message);
+		}
+
+	}
+
+	private void tryToAttachToJob() {
+		LOG.info("Sending message to JobManager to attach to job {} and wait for progress", jobID);
+
+		Futures.future(new Callable<Object>() {
+			@Override
+			public Object call() throws Exception {
+				LOG.info("Attaching to job {} at the job manager {}.", jobID, jobManager.path());
+
+				jobManager.tell(
+					decorateMessage(
+						new JobManagerMessages.RegisterJobClient(
+							jobID,
+							ListeningBehaviour.EXECUTION_RESULT_AND_STATE_CHANGES)),
+					getSelf());
+
+				// issue a RegistrationTimeout message to check that we register for the job within
+				// the given timeout
+				getContext().system().scheduler().scheduleOnce(
+					timeout,
+					getSelf(),
+					decorateMessage(JobClientMessages.getRegistrationTimeout()),
+					getContext().dispatcher(),
+					ActorRef.noSender());
+
+				return null;
+			}
+		}, getContext().dispatcher());
+	}
+
+	public static Props createActorProps(
+			LeaderRetrievalService leaderRetrievalService,
+			FiniteDuration timeout,
+			boolean sysoutUpdates) {
+		return Props.create(
+			JobAttachmentClientActor.class,
+			leaderRetrievalService,
+			timeout,
+			sysoutUpdates);
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClient.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClient.java b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClient.java
index c0e0d08..4e916eb 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClient.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClient.java
@@ -21,6 +21,7 @@ package org.apache.flink.runtime.client;
 import akka.actor.ActorRef;
 import akka.actor.ActorSystem;
 import akka.actor.Address;
+import akka.actor.Identify;
 import akka.actor.PoisonPill;
 import akka.actor.Props;
 import akka.pattern.Patterns;
@@ -30,6 +31,8 @@ import org.apache.flink.api.common.JobID;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.runtime.akka.AkkaUtils;
 import org.apache.flink.runtime.akka.ListeningBehaviour;
+import org.apache.flink.runtime.blob.BlobCache;
+import org.apache.flink.runtime.blob.BlobKey;
 import org.apache.flink.runtime.instance.ActorGateway;
 import org.apache.flink.runtime.jobgraph.JobGraph;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
@@ -44,10 +47,15 @@ import scala.Some;
 import scala.Tuple2;
 import scala.concurrent.Await;
 import scala.concurrent.Future;
+import scala.concurrent.duration.Duration;
 import scala.concurrent.duration.FiniteDuration;
 
 import java.io.IOException;
 import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.URL;
+import java.net.URLClassLoader;
+import java.util.List;
 import java.util.concurrent.TimeoutException;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
@@ -80,28 +88,18 @@ public class JobClient {
 	}
 
 	/**
-	 * Sends a [[JobGraph]] to the JobClient actor specified by jobClient which submits it then to
-	 * the JobManager. The method blocks until the job has finished or the JobManager is no longer
-	 * alive. In the former case, the [[SerializedJobExecutionResult]] is returned and in the latter
-	 * case a [[JobExecutionException]] is thrown.
-	 *
-	 * @param actorSystem The actor system that performs the communication.
-	 * @param leaderRetrievalService Leader retrieval service which used to find the current leading
-	 *                               JobManager
-	 * @param jobGraph    JobGraph describing the Flink job
-	 * @param timeout     Timeout for futures
-	 * @param sysoutLogUpdates prints log updates to system out if true
-	 * @return The job execution result
-	 * @throws org.apache.flink.runtime.client.JobExecutionException Thrown if the job
-	 *                                                               execution fails.
+	 * Submits a job to a Flink cluster (non-blocking) and returns a JobListeningContext which can be
+	 * passed to {@code awaitJobResult} to get the result of the submission.
+	 * @return JobListeningContext which may be used to retrieve the JobExecutionResult via
+	 * 			{@code awaitJobResult(JobListeningContext context)}.
 	 */
-	public static JobExecutionResult submitJobAndWait(
+	public static JobListeningContext submitJob(
 			ActorSystem actorSystem,
 			LeaderRetrievalService leaderRetrievalService,
 			JobGraph jobGraph,
 			FiniteDuration timeout,
 			boolean sysoutLogUpdates,
-			ClassLoader classLoader) throws JobExecutionException {
+			ClassLoader classLoader) {
 
 		checkNotNull(actorSystem, "The actorSystem must not be null.");
 		checkNotNull(leaderRetrievalService, "The jobManagerGateway must not be null.");
@@ -112,29 +110,187 @@ public class JobClient {
 		// the JobManager. It forwards the job submission, checks the success/failure responses, logs
 		// update messages, watches for disconnect between client and JobManager, ...
 
-		Props jobClientActorProps = JobClientActor.createJobClientActorProps(
+		Props jobClientActorProps = JobSubmissionClientActor.createActorProps(
 			leaderRetrievalService,
 			timeout,
 			sysoutLogUpdates);
 
 		ActorRef jobClientActor = actorSystem.actorOf(jobClientActorProps);
-		
-		// first block handles errors while waiting for the result
-		Object answer;
+
+		Future<Object> submissionFuture = Patterns.ask(
+				jobClientActor,
+				new JobClientMessages.SubmitJobAndWait(jobGraph),
+				new Timeout(AkkaUtils.INF_TIMEOUT()));
+
+		return new JobListeningContext(
+				jobGraph.getJobID(),
+				submissionFuture,
+				jobClientActor,
+				timeout,
+				classLoader);
+	}
+
+
+	/**
+	 * Attaches to a running Job using the JobID.
+	 * Reconstructs the user class loader by downloading the jars from the JobManager.
+	 */
+	public static JobListeningContext attachToRunningJob(
+			JobID jobID,
+			ActorGateway jobManagerGateway,
+			Configuration configuration,
+			ActorSystem actorSystem,
+			LeaderRetrievalService leaderRetrievalService,
+			FiniteDuration timeout,
+			boolean sysoutLogUpdates) {
+
+		checkNotNull(jobID, "The jobID must not be null.");
+		checkNotNull(jobManagerGateway, "The jobManagerGateway must not be null.");
+		checkNotNull(configuration, "The configuration must not be null.");
+		checkNotNull(actorSystem, "The actorSystem must not be null.");
+		checkNotNull(leaderRetrievalService, "The leaderRetrievalService must not be null.");
+		checkNotNull(timeout, "The timeout must not be null.");
+
+		// we create a proxy JobClientActor that deals with all communication with
+		// the JobManager. It forwards the job attachments, checks the success/failure responses, logs
+		// update messages, watches for disconnect between client and JobManager, ...
+		Props jobClientActorProps = JobAttachmentClientActor.createActorProps(
+			leaderRetrievalService,
+			timeout,
+			sysoutLogUpdates);
+
+		ActorRef jobClientActor = actorSystem.actorOf(jobClientActorProps);
+
+		Future<Object> attachmentFuture = Patterns.ask(
+				jobClientActor,
+				new JobClientMessages.AttachToJobAndWait(jobID),
+				new Timeout(AkkaUtils.INF_TIMEOUT()));
+
+		return new JobListeningContext(
+				jobID,
+				attachmentFuture,
+				jobClientActor,
+				timeout,
+				actorSystem,
+				configuration);
+	}
+
+	/**
+	 * Reconstructs the class loader by first requesting information about it at the JobManager
+	 * and then downloading missing jar files.
+	 * @param jobID id of job
+	 * @param jobManager gateway to the JobManager
+	 * @param config the flink configuration
+	 * @return A classloader that should behave like the original classloader
+	 * @throws JobRetrievalException if anything goes wrong
+	 */
+	public static ClassLoader retrieveClassLoader(
+		JobID jobID,
+		ActorGateway jobManager,
+		Configuration config)
+		throws JobRetrievalException {
+
+		final Object jmAnswer;
 		try {
-			Future<Object> future = Patterns.ask(jobClientActor,
-					new JobClientMessages.SubmitJobAndWait(jobGraph),
-					new Timeout(AkkaUtils.INF_TIMEOUT()));
-			
-			answer = Await.result(future, AkkaUtils.INF_TIMEOUT());
+			jmAnswer = Await.result(
+				jobManager.ask(
+					new JobManagerMessages.RequestClassloadingProps(jobID),
+					AkkaUtils.getDefaultTimeout()),
+				AkkaUtils.getDefaultTimeout());
+		} catch (Exception e) {
+			throw new JobRetrievalException(jobID, "Couldn't retrieve class loading properties from JobManager.", e);
 		}
-		catch (TimeoutException e) {
-			throw new JobTimeoutException(jobGraph.getJobID(), "Timeout while waiting for JobManager answer. " +
-					"Job time exceeded " + AkkaUtils.INF_TIMEOUT(), e);
+
+		if (jmAnswer instanceof JobManagerMessages.ClassloadingProps) {
+			JobManagerMessages.ClassloadingProps props = ((JobManagerMessages.ClassloadingProps) jmAnswer);
+
+			Option<String> jmHost = jobManager.actor().path().address().host();
+			String jmHostname = jmHost.isDefined() ? jmHost.get() : "localhost";
+			InetSocketAddress serverAddress = new InetSocketAddress(jmHostname, props.blobManagerPort());
+			final BlobCache blobClient = new BlobCache(serverAddress, config);
+
+			final List<BlobKey> requiredJarFiles = props.requiredJarFiles();
+			final List<URL> requiredClasspaths = props.requiredClasspaths();
+
+			final URL[] allURLs = new URL[requiredJarFiles.size() + requiredClasspaths.size()];
+
+			int pos = 0;
+			for (BlobKey blobKey : props.requiredJarFiles()) {
+				try {
+					allURLs[pos++] = blobClient.getURL(blobKey);
+				} catch (Exception e) {
+					blobClient.shutdown();
+					throw new JobRetrievalException(jobID, "Failed to download BlobKey " + blobKey, e);
+				}
+			}
+
+			for (URL url : requiredClasspaths) {
+				allURLs[pos++] = url;
+			}
+
+			return new URLClassLoader(allURLs, JobClient.class.getClassLoader());
+		} else if (jmAnswer instanceof JobManagerMessages.JobNotFound) {
+			throw new JobRetrievalException(jobID, "Couldn't retrieve class loader. Job " + jobID + " not found");
+		} else {
+			throw new JobRetrievalException(jobID, "Unknown response from JobManager: " + jmAnswer);
+		}
+	}
+
+	/**
+	 * Given a JobListeningContext, awaits the result of the job execution that this context is bound to
+	 * @param listeningContext The listening context of the job execution
+	 * @return The result of the execution
+	 * @throws JobExecutionException if anything goes wrong while monitoring the job
+	 */
+	public static JobExecutionResult awaitJobResult(JobListeningContext listeningContext) throws JobExecutionException {
+
+		final JobID jobID = listeningContext.getJobID();
+		final ActorRef jobClientActor = listeningContext.getJobClientActor();
+		final Future<Object> jobSubmissionFuture = listeningContext.getJobResultFuture();
+		final FiniteDuration askTimeout = listeningContext.getTimeout();
+		// retrieves class loader if necessary
+		final ClassLoader classLoader = listeningContext.getClassLoader();
+
+		// wait for the future which holds the result to be ready
+		// ping the JobClientActor from time to time to check if it is still running
+		while (!jobSubmissionFuture.isCompleted()) {
+			try {
+				Await.ready(jobSubmissionFuture, askTimeout);
+			} catch (InterruptedException e) {
+				throw new JobExecutionException(
+					jobID,
+					"Interrupted while waiting for job completion.");
+			} catch (TimeoutException e) {
+				try {
+					Await.result(
+						Patterns.ask(
+							jobClientActor,
+							// Ping the Actor to see if it is alive
+							new Identify(true),
+							Timeout.durationToTimeout(askTimeout)),
+						askTimeout);
+					// we got a reply, continue waiting for the job result
+				} catch (Exception eInner) {
+					// we could have a result but the JobClientActor might have been killed and
+					// thus the health check failed
+					if (!jobSubmissionFuture.isCompleted()) {
+						throw new JobExecutionException(
+							jobID,
+							"JobClientActor seems to have died before the JobExecutionResult could be retrieved.",
+							eInner);
+					}
+				}
+			}
+		}
+
+		final Object answer;
+		try {
+			// we have already awaited the result, zero time to wait here
+			answer = Await.result(jobSubmissionFuture, Duration.Zero());
 		}
 		catch (Throwable throwable) {
-			throw new JobExecutionException(jobGraph.getJobID(),
-					"Communication with JobManager failed: " + throwable.getMessage(), throwable);
+			throw new JobExecutionException(jobID,
+				"Couldn't retrieve the JobExecutionResult from the JobManager.", throwable);
 		}
 		finally {
 			// failsafe shutdown of the client actor
@@ -149,18 +305,16 @@ public class JobClient {
 			if (result != null) {
 				try {
 					return result.toJobExecutionResult(classLoader);
+				} catch (Throwable t) {
+					throw new JobExecutionException(jobID,
+						"Job was successfully executed but JobExecutionResult could not be deserialized.", t);
 				}
-				catch (Throwable t) {
-					throw new JobExecutionException(jobGraph.getJobID(),
-							"Job was successfully executed but JobExecutionResult could not be deserialized.");
-				}
-			}
-			else {
-				throw new JobExecutionException(jobGraph.getJobID(),
-						"Job was successfully executed but result contained a null JobExecutionResult.");
+			} else {
+				throw new JobExecutionException(jobID,
+					"Job was successfully executed but result contained a null JobExecutionResult.");
 			}
 		}
-		if (answer instanceof JobManagerMessages.JobResultFailure) {
+		else if (answer instanceof JobManagerMessages.JobResultFailure) {
 			LOG.info("Job execution failed");
 
 			SerializedThrowable serThrowable = ((JobManagerMessages.JobResultFailure) answer).cause();
@@ -168,23 +322,62 @@ public class JobClient {
 				Throwable cause = serThrowable.deserializeError(classLoader);
 				if (cause instanceof JobExecutionException) {
 					throw (JobExecutionException) cause;
+				} else {
+					throw new JobExecutionException(jobID, "Job execution failed", cause);
 				}
-				else {
-					throw new JobExecutionException(jobGraph.getJobID(), "Job execution failed", cause);
-				}
-			}
-			else {
-				throw new JobExecutionException(jobGraph.getJobID(),
-						"Job execution failed with null as failure cause.");
+			} else {
+				throw new JobExecutionException(jobID,
+					"Job execution failed with null as failure cause.");
 			}
 		}
+		else if (answer instanceof JobManagerMessages.JobNotFound) {
+			throw new JobRetrievalException(
+				((JobManagerMessages.JobNotFound) answer).jobID(),
+				"Couldn't retrieve Job " + jobID + " because it was not running.");
+		}
 		else {
-			throw new JobExecutionException(jobGraph.getJobID(),
-					"Unknown answer from JobManager after submitting the job: " + answer);
+			throw new JobExecutionException(jobID,
+				"Unknown answer from JobManager after submitting the job: " + answer);
 		}
 	}
 
 	/**
+	 * Sends a [[JobGraph]] to the JobClient actor specified by jobClient which submits it then to
+	 * the JobManager. The method blocks until the job has finished or the JobManager is no longer
+	 * alive. In the former case, the [[SerializedJobExecutionResult]] is returned and in the latter
+	 * case a [[JobExecutionException]] is thrown.
+	 *
+	 * @param actorSystem The actor system that performs the communication.
+	 * @param leaderRetrievalService Leader retrieval service which is used to find the current leading
+	 *                               JobManager
+	 * @param jobGraph    JobGraph describing the Flink job
+	 * @param timeout     Timeout for futures
+	 * @param sysoutLogUpdates prints log updates to system out if true
+	 * @param classLoader The class loader for deserializing the results
+	 * @return The job execution result
+	 * @throws org.apache.flink.runtime.client.JobExecutionException Thrown if the job
+	 *                                                               execution fails.
+	 */
+	public static JobExecutionResult submitJobAndWait(
+			ActorSystem actorSystem,
+			LeaderRetrievalService leaderRetrievalService,
+			JobGraph jobGraph,
+			FiniteDuration timeout,
+			boolean sysoutLogUpdates,
+			ClassLoader classLoader) throws JobExecutionException {
+
+		JobListeningContext jobListeningContext = submitJob(
+				actorSystem,
+				leaderRetrievalService,
+				jobGraph,
+				timeout,
+				sysoutLogUpdates,
+				classLoader);
+
+		return awaitJobResult(jobListeningContext);
+	}
+
+	/**
 	 * Submits a job in detached mode. The method sends the JobGraph to the
 	 * JobManager and waits for the answer whether the job could be started or not.
 	 *
@@ -227,7 +420,7 @@ public class JobClient {
 					"JobManager did not respond within " + timeout.toString(), e);
 		}
 		catch (Throwable t) {
-			throw new JobExecutionException(jobGraph.getJobID(),
+			throw new JobSubmissionException(jobGraph.getJobID(),
 					"Failed to send job to JobManager: " + t.getMessage(), t.getCause());
 		}
 
@@ -258,4 +451,5 @@ public class JobClient {
 			throw new JobExecutionException(jobGraph.getJobID(), "Unexpected response from JobManager: " + result);
 		}
 	}
+
 }
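
As a hedged aside on the class loader reconstruction above (variable names
are illustrative, not part of the patch):

    // rebuild the user-code class loader of a running job from the
    // JobManager's classloading props and its blob server
    ClassLoader userLoader =
        JobClient.retrieveClassLoader(jobID, jobManagerGateway, flinkConfig);

    // the reconstructed loader can then deserialize the job's outcome,
    // e.g. via SerializedJobExecutionResult#toJobExecutionResult(userLoader)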

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActor.java
index 9379c30..1380e76 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActor.java
@@ -20,18 +20,11 @@ package org.apache.flink.runtime.client;
 
 import akka.actor.ActorRef;
 import akka.actor.PoisonPill;
-import akka.actor.Props;
 import akka.actor.Status;
 import akka.actor.Terminated;
-import akka.dispatch.Futures;
 import akka.dispatch.OnSuccess;
-import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.runtime.akka.AkkaUtils;
 import org.apache.flink.runtime.akka.FlinkUntypedActor;
-import org.apache.flink.runtime.akka.ListeningBehaviour;
-import org.apache.flink.runtime.instance.ActorGateway;
-import org.apache.flink.runtime.instance.AkkaActorGateway;
-import org.apache.flink.runtime.jobgraph.JobGraph;
 import org.apache.flink.runtime.jobgraph.JobStatus;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalListener;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
@@ -39,47 +32,39 @@ import org.apache.flink.runtime.messages.ExecutionGraphMessages;
 import org.apache.flink.runtime.messages.JobClientMessages;
 import org.apache.flink.runtime.messages.JobClientMessages.JobManagerActorRef;
 import org.apache.flink.runtime.messages.JobClientMessages.JobManagerLeaderAddress;
-import org.apache.flink.runtime.messages.JobClientMessages.SubmitJobAndWait;
 import org.apache.flink.runtime.messages.JobManagerMessages;
-import org.apache.flink.runtime.util.SerializedThrowable;
 import org.apache.flink.util.Preconditions;
 import scala.concurrent.duration.FiniteDuration;
 
-import java.io.IOException;
 import java.util.UUID;
-import java.util.concurrent.Callable;
+
 
 /**
- * Actor which constitutes the bridge between the non-actor code and the JobManager. The JobClient
- * is used to submit jobs to the JobManager and to request the port of the BlobManager.
+ * Actor which constitutes the bridge between the non-actor code and the JobManager.
+ * This base class handles the connection to the JobManager and notifies the client in case of timeouts. It also
+ * receives and prints job updates until job completion.
  */
-public class JobClientActor extends FlinkUntypedActor implements LeaderRetrievalListener {
+public abstract class JobClientActor extends FlinkUntypedActor implements LeaderRetrievalListener {
 
 	private final LeaderRetrievalService leaderRetrievalService;
 
 	/** timeout for futures */
-	private final FiniteDuration timeout;
+	protected final FiniteDuration timeout;
 
 	/** true if status messages shall be printed to sysout */
 	private final boolean sysoutUpdates;
 
-	/** true if a SubmitJobSuccess message has been received */
-	private boolean jobSuccessfullySubmitted = false;
-
-	/** true if a PoisonPill was taken */
-	private boolean terminated = false;
+	/** true if a PoisonPill is about to be taken */
+	private boolean toBeTerminated = false;
 
 	/** ActorRef to the current leader */
-	private ActorRef jobManager;
+	protected ActorRef jobManager;
 
 	/** leader session ID of the JobManager when this actor was created */
-	private UUID leaderSessionID;
-
-	/** Actor which submits a job to the JobManager via this actor */
-	private ActorRef submitter;
+	protected UUID leaderSessionID;
 
-	/** JobGraph which shall be submitted to the JobManager */
-	private JobGraph jobGraph;
+	/** The client which the actor is responsible for */
+	protected ActorRef client;
 
 	public JobClientActor(
 			LeaderRetrievalService leaderRetrievalService,
@@ -109,9 +94,27 @@ public class JobClientActor extends FlinkUntypedActor implements LeaderRetrieval
 		}
 	}
 
+	/**
+	 * Hook to be called once a connection has been established with the JobManager.
+	 */
+	protected abstract void connectedToJobManager();
+
+	/**
+	 * Hook to handle custom client messages which are not handled by the base class.
+	 * @param message The message to be handled
+	 */
+	protected abstract void handleCustomMessage(Object message);
+
+	/**
+	 * Hook to let the base class know which client message should start the timeout timer
+	 * @return The message class after which a timeout should be started
+	 */
+	protected abstract Class getClientMessageClass();
+
+
 	@Override
 	protected void handleMessage(Object message) {
-		
+
 		// =========== State Change Messages ===============
 
 		if (message instanceof ExecutionGraphMessages.ExecutionStateChanged) {
@@ -149,79 +152,31 @@ public class JobClientActor extends FlinkUntypedActor implements LeaderRetrieval
 			JobManagerActorRef msg = (JobManagerActorRef) message;
 			connectToJobManager(msg.jobManager());
 
-			logAndPrintMessage("Connected to JobManager at " +  msg.jobManager());
+			logAndPrintMessage("Connected to JobManager at " + msg.jobManager());
 
-			if (jobGraph != null && !jobSuccessfullySubmitted) {
-				// if we haven't yet submitted the job successfully
-				tryToSubmitJob(jobGraph);
-			}
+			connectedToJobManager();
 		}
 
 		// =========== Job Life Cycle Messages ===============
-		
-		// submit a job to the JobManager
-		else if (message instanceof SubmitJobAndWait) {
-			// only accept SubmitJobWait messages if we're not about to terminate
-			if (!terminated) {
-				// sanity check that this no job was submitted through this actor before -
-				// it is a one-shot actor after all
-				if (this.submitter == null) {
-					jobGraph = ((SubmitJobAndWait) message).jobGraph();
-					if (jobGraph == null) {
-						LOG.error("Received null JobGraph");
-						sender().tell(
-							decorateMessage(new Status.Failure(new Exception("JobGraph is null"))),
-							getSelf());
-					} else {
-						LOG.info("Received job {} ({}).", jobGraph.getName(), jobGraph.getJobID());
-
-						this.submitter = getSender();
-
-						// is only successful if we already know the job manager leader
-						tryToSubmitJob(jobGraph);
-					}
-				} else {
-					// repeated submission - tell failure to sender and kill self
-					String msg = "Received repeated 'SubmitJobAndWait'";
-					LOG.error(msg);
-					getSender().tell(
-						decorateMessage(new Status.Failure(new Exception(msg))), ActorRef.noSender());
-
-					terminate();
-				}
-			} else {
-				// we're about to receive a PoisonPill because terminated == true
-				String msg = getClass().getName() + " is about to be terminated. Therefore, the " +
-					"job submission cannot be executed.";
-				LOG.error(msg);
-				getSender().tell(
-					decorateMessage(new Status.Failure(new Exception(msg))), ActorRef.noSender());
-			}
-		}
+
 		// acknowledgement to submit job is only logged, our original
-		// submitter is only interested in the final job result
-		else if (message instanceof JobManagerMessages.JobResultSuccess ||
-				message instanceof JobManagerMessages.JobResultFailure) {
-			
+		// client is only interested in the final job result
+		else if (message instanceof JobManagerMessages.JobResultMessage) {
+
 			if (LOG.isDebugEnabled()) {
 				LOG.debug("Received {} message from JobManager", message.getClass().getSimpleName());
 			}
 
-			// forward the success to the original job submitter
-			if (hasJobBeenSubmitted()) {
-				this.submitter.tell(decorateMessage(message), getSelf());
+			// forward the success to the original client
+			if (isClientConnected()) {
+				this.client.tell(decorateMessage(message), getSelf());
 			}
 
 			terminate();
 		}
-		else if (message instanceof JobManagerMessages.JobSubmitSuccess) {
-			// job was successfully submitted :-)
-			LOG.info("Job was successfully submitted to the JobManager {}.", getSender().path());
-			jobSuccessfullySubmitted = true;
-		}
 
 		// =========== Actor / Communication Failure / Timeouts ===============
-		
+
 		else if (message instanceof Terminated) {
 			ActorRef target = ((Terminated) message).getActor();
 			if (jobManager.equals(target)) {
@@ -234,7 +189,7 @@ public class JobClientActor extends FlinkUntypedActor implements LeaderRetrieval
 				// Important: The ConnectionTimeout message is filtered out in case that we are
 				// notified about a new leader by setting the new leader session ID, because
 				// ConnectionTimeout extends RequiresLeaderSessionID
-				if (hasJobBeenSubmitted()) {
+				if (isClientConnected()) {
 					getContext().system().scheduler().scheduleOnce(
 						timeout,
 						getSelf(),
@@ -245,49 +200,61 @@ public class JobClientActor extends FlinkUntypedActor implements LeaderRetrieval
 			} else {
 				LOG.warn("Received 'Terminated' for unknown actor " + target);
 			}
-		} else if (JobClientMessages.getConnectionTimeout().equals(message)) {
+		}
+		else if (JobClientMessages.getConnectionTimeout().equals(message)) {
 			// check if we haven't found a job manager yet
-			if (!isConnected()) {
-				if (hasJobBeenSubmitted()) {
-					submitter.tell(
-						decorateMessage(new Status.Failure(
-							new JobClientActorConnectionTimeoutException("Lost connection to the JobManager."))),
+			if (!isJobManagerConnected()) {
+				final JobClientActorConnectionTimeoutException errorMessage =
+					new JobClientActorConnectionTimeoutException("Lost connection to the JobManager.");
+				final Object replyMessage = decorateMessage(new Status.Failure(errorMessage));
+				if (isClientConnected()) {
+					client.tell(
+						replyMessage,
 						getSelf());
 				}
 				// Connection timeout reached, let's terminate
 				terminate();
 			}
-		} else if (JobClientMessages.getSubmissionTimeout().equals(message)) {
-			// check if our job submission was successful in the meantime
-			if (!jobSuccessfullySubmitted) {
-				if (hasJobBeenSubmitted()) {
-					submitter.tell(
-						decorateMessage(new Status.Failure(
-							new JobClientActorSubmissionTimeoutException("Job submission to the JobManager timed out. " +
-								"You may increase '" + ConfigConstants.AKKA_CLIENT_TIMEOUT + "' in case the JobManager " +
-								"needs more time to configure and confirm the job submission."))),
-						getSelf());
-				}
-
-				// We haven't heard back from the job manager after sending the job graph to him,
-				// therefore terminate
-				terminate();
-			}
 		}
 
-		// =========== Unknown Messages ===============
-		
+		// =========== Message Delegation ===============
+
+		else if (!isJobManagerConnected() && getClientMessageClass().equals(message.getClass())) {
+			LOG.info(
+				"Received {} but there is no connection to a JobManager yet.",
+				message);
+			// We want to submit/attach to a job, but we haven't found a job manager yet.
+			// Let's give it another chance to find a job manager within the given timeout.
+			getContext().system().scheduler().scheduleOnce(
+				timeout,
+				getSelf(),
+				decorateMessage(JobClientMessages.getConnectionTimeout()),
+				getContext().dispatcher(),
+				ActorRef.noSender()
+			);
+			handleCustomMessage(message);
+		}
 		else {
-			LOG.error("JobClient received unknown message: " + message);
+			if (!toBeTerminated) {
+				handleCustomMessage(message);
+			} else {
+				// we're about to receive a PoisonPill because toBeTerminated == true
+				String msg = getClass().getName() + " is about to be terminated. Therefore, the " +
+					"job submission cannot be executed.";
+				LOG.error(msg);
+				getSender().tell(
+					decorateMessage(new Status.Failure(new Exception(msg))), ActorRef.noSender());
+			}
 		}
 	}
 
+
 	@Override
 	protected UUID getLeaderSessionID() {
 		return leaderSessionID;
 	}
 
-	private void logAndPrintMessage(String message) {
+	protected void logAndPrintMessage(String message) {
 		LOG.info(message);
 		if (sysoutUpdates) {
 			System.out.println(message);
@@ -351,97 +318,19 @@ public class JobClientActor extends FlinkUntypedActor implements LeaderRetrieval
 		getContext().watch(jobManager);
 	}
 
-	private void tryToSubmitJob(final JobGraph jobGraph) {
-		this.jobGraph = jobGraph;
-
-		if (isConnected()) {
-			LOG.info("Sending message to JobManager {} to submit job {} ({}) and wait for progress",
-				jobManager.path().toString(), jobGraph.getName(), jobGraph.getJobID());
-
-			Futures.future(new Callable<Object>() {
-				@Override
-				public Object call() throws Exception {
-					ActorGateway jobManagerGateway = new AkkaActorGateway(jobManager, leaderSessionID);
-
-					LOG.info("Upload jar files to job manager {}.", jobManager.path());
-
-					try {
-						jobGraph.uploadUserJars(jobManagerGateway, timeout);
-					} catch (IOException exception) {
-						getSelf().tell(
-							decorateMessage(new JobManagerMessages.JobResultFailure(
-								new SerializedThrowable(
-									new JobSubmissionException(
-										jobGraph.getJobID(),
-										"Could not upload the jar files to the job manager.",
-										exception)
-								)
-							)),
-							ActorRef.noSender()
-						);
-					}
-
-					LOG.info("Submit job to the job manager {}.", jobManager.path());
-
-					jobManager.tell(
-						decorateMessage(
-							new JobManagerMessages.SubmitJob(
-								jobGraph,
-								ListeningBehaviour.EXECUTION_RESULT_AND_STATE_CHANGES)),
-						getSelf());
-
-					// issue a SubmissionTimeout message to check that we submit the job within
-					// the given timeout
-					getContext().system().scheduler().scheduleOnce(
-						timeout,
-						getSelf(),
-						decorateMessage(JobClientMessages.getSubmissionTimeout()),
-						getContext().dispatcher(),
-						ActorRef.noSender());
-
-					return null;
-				}
-			}, getContext().dispatcher());
-		} else {
-			LOG.info("Could not submit job {} ({}), because there is no connection to a " +
-					"JobManager.",
-				jobGraph.getName(), jobGraph.getJobID());
-
-			// We want to submit a job, but we haven't found a job manager yet.
-			// Let's give him another chance to find a job manager within the given timeout.
-			getContext().system().scheduler().scheduleOnce(
-				timeout,
-				getSelf(),
-				decorateMessage(JobClientMessages.getConnectionTimeout()),
-				getContext().dispatcher(),
-				ActorRef.noSender()
-			);
-		}
-	}
-
-	private void terminate() {
+	protected void terminate() {
 		LOG.info("Terminate JobClientActor.");
-		terminated = true;
+		toBeTerminated = true;
 		disconnectFromJobManager();
 		getSelf().tell(decorateMessage(PoisonPill.getInstance()), ActorRef.noSender());
 	}
 
-	private boolean isConnected() {
+	private boolean isJobManagerConnected() {
 		return jobManager != ActorRef.noSender();
 	}
 
-	private boolean hasJobBeenSubmitted() {
-		return submitter != ActorRef.noSender();
+	protected boolean isClientConnected() {
+		return client != ActorRef.noSender();
 	}
 
-	public static Props createJobClientActorProps(
-			LeaderRetrievalService leaderRetrievalService,
-			FiniteDuration timeout,
-			boolean sysoutUpdates) {
-		return Props.create(
-			JobClientActor.class,
-			leaderRetrievalService,
-			timeout,
-			sysoutUpdates);
-	}
 }
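
To make the new template concrete, a minimal hypothetical subclass only has
to provide the three hooks (this sketch is not part of the patch):

    import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
    import scala.concurrent.duration.FiniteDuration;

    public class NoOpClientActor extends JobClientActor {

        public NoOpClientActor(
                LeaderRetrievalService leaderRetrievalService,
                FiniteDuration timeout,
                boolean sysoutUpdates) {
            super(leaderRetrievalService, timeout, sysoutUpdates);
        }

        @Override
        protected void connectedToJobManager() {
            // send the initial request to the jobManager ActorRef here
        }

        @Override
        protected void handleCustomMessage(Object message) {
            // react to mode-specific messages; log unknown ones
        }

        @Override
        protected Class getClientMessageClass() {
            // client message after which the connection timeout is armed
            return Object.class;
        }
    }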

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActorRegistrationTimeoutException.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActorRegistrationTimeoutException.java b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActorRegistrationTimeoutException.java
new file mode 100644
index 0000000..e57d1b4
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobClientActorRegistrationTimeoutException.java
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.client;
+
+/**
+ * Exception which is thrown by the {@link JobClientActor} if it has not heard back from the job
+ * manager after it has attempted to register for a job within a given timeout interval.
+ */
+public class JobClientActorRegistrationTimeoutException extends Exception {
+	private static final long serialVersionUID = 8762463142030454853L;
+
+	public JobClientActorRegistrationTimeoutException(String msg) {
+		super(msg);
+	}
+
+	public JobClientActorRegistrationTimeoutException(String msg, Throwable cause) {
+		super(msg, cause);
+	}
+}
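
A hedged sketch of how calling code might react to this timeout (the
handling shown is hypothetical, not prescribed by the patch):

    try {
        JobClient.awaitJobResult(listeningContext);
    } catch (JobExecutionException e) {
        if (e.getCause() instanceof JobClientActorRegistrationTimeoutException) {
            // consider raising akka.client.timeout and re-attaching
        }
    }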

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobListeningContext.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobListeningContext.java b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobListeningContext.java
new file mode 100644
index 0000000..b5d7cb7
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobListeningContext.java
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.runtime.client;
+
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.instance.ActorGateway;
+import org.apache.flink.runtime.util.LeaderRetrievalUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * The JobListeningContext holds the state necessary to monitor a running job and receive its results.
+ */
+public final class JobListeningContext {
+
+	private final Logger LOG = LoggerFactory.getLogger(getClass());
+
+	/** The JobID of the job */
+	private final JobID jobID;
+	/** The Future which is completed upon job completion */
+	private final Future<Object> jobResultFuture;
+	/** The JobClientActor which handles communication and monitoring of the job */
+	private final ActorRef jobClientActor;
+	/** Timeout used for Akka asks */
+	private final FiniteDuration timeout;
+
+	/** ActorSystem for leader retrieval */
+	private ActorSystem actorSystem;
+	/** Flink configuration for initializing the BlobService */
+	private Configuration configuration;
+
+	/** The class loader (either provided at job submission or reconstructed when it is needed) */
+	private ClassLoader classLoader;
+
+	/**
+	 * Constructor to use when the class loader is available.
+	 */
+	public JobListeningContext(
+		JobID jobID,
+		Future<Object> jobResultFuture,
+		ActorRef jobClientActor,
+		FiniteDuration timeout,
+		ClassLoader classLoader) {
+		this.jobID = checkNotNull(jobID);
+		this.jobResultFuture = checkNotNull(jobResultFuture);
+		this.jobClientActor = checkNotNull(jobClientActor);
+		this.timeout = checkNotNull(timeout);
+		this.classLoader = checkNotNull(classLoader);
+	}
+
+	/**
+	 * Constructor to use when the class loader is not available.
+	 */
+	public JobListeningContext(
+		JobID jobID,
+		Future<Object> jobResultFuture,
+		ActorRef jobClientActor,
+		FiniteDuration timeout,
+		ActorSystem actorSystem,
+		Configuration configuration) {
+		this.jobID = checkNotNull(jobID);
+		this.jobResultFuture = checkNotNull(jobResultFuture);
+		this.jobClientActor = checkNotNull(jobClientActor);
+		this.timeout = checkNotNull(timeout);
+		this.actorSystem = checkNotNull(actorSystem);
+		this.configuration = checkNotNull(configuration);
+	}
+
+	/**
+	 * @return The Job ID that this context is bound to.
+	 */
+	public JobID getJobID() {
+		return jobID;
+	}
+
+	/**
+	 * @return The Future that eventually holds the result of the execution.
+	 */
+	public Future<Object> getJobResultFuture() {
+		return jobResultFuture;
+	}
+
+	/**
+	 * @return The Job Client actor which communicates with the JobManager.
+	 */
+	public ActorRef getJobClientActor() {
+		return jobClientActor;
+	}
+
+	/**
+	 * @return The default timeout of Akka asks
+	 */
+	public FiniteDuration getTimeout() {
+		return timeout;
+	}
+
+	/**
+	 * The class loader necessary to deserialize the result of a job execution,
+	 * i.e. the JobExecutionResult or any Exceptions.
+	 * @return The class loader for the job ID
+	 * @throws JobRetrievalException if anything goes wrong
+	 */
+	public ClassLoader getClassLoader() throws JobRetrievalException {
+		if (classLoader == null) {
+			// lazily initializes the class loader when it is needed
+			classLoader = JobClient.retrieveClassLoader(jobID, getJobManager(), configuration);
+			LOG.info("Reconstructed class loader for Job {}", jobID);
+		}
+		return classLoader;
+	}
+
+	private ActorGateway getJobManager() throws JobRetrievalException {
+		try {
+			return LeaderRetrievalUtils.retrieveLeaderGateway(
+				LeaderRetrievalUtils.createLeaderRetrievalService(configuration),
+				actorSystem,
+				AkkaUtils.getLookupTimeout(configuration));
+		} catch (Exception e) {
+			throw new JobRetrievalException(jobID, "Couldn't retrieve leading JobManager.", e);
+		}
+	}
+}
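
For readers following the client refactoring, a consumer of this class would typically block on the result future and deserialize the answer with the lazily retrieved class loader. The following is a minimal, hypothetical sketch of that flow; the awaitResult helper and its error handling are assumptions for illustration, not part of this patch.

    // Hypothetical usage sketch of JobListeningContext -- not part of this patch.
    import org.apache.flink.runtime.client.JobListeningContext;
    import scala.concurrent.Await;

    public class JobResultAwaiter {

        // Blocks until the job's result future completes and returns the raw answer.
        // getClassLoader() is lazy: it only contacts the JobManager when the class
        // loader was not provided at submission time.
        public static Object awaitResult(JobListeningContext context) throws Exception {
            ClassLoader userCodeLoader = context.getClassLoader();
            Object answer = Await.result(context.getJobResultFuture(), context.getTimeout());
            // ... deserialize 'answer' with 'userCodeLoader' into a JobExecutionResult ...
            return answer;
        }
    }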

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobRetrievalException.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobRetrievalException.java b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobRetrievalException.java
new file mode 100644
index 0000000..a92bddc
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobRetrievalException.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.runtime.client;
+
+import org.apache.flink.api.common.JobID;
+
+/**
+ * Exception used to indicate that a job couldn't be retrieved from the JobManager.
+ */
+public class JobRetrievalException extends JobExecutionException {
+
+	private static final long serialVersionUID = -42L;
+
+	public JobRetrievalException(JobID jobID, String msg, Throwable cause) {
+		super(jobID, msg, cause);
+	}
+
+	public JobRetrievalException(JobID jobID, String msg) {
+		super(jobID, msg);
+	}
+
+	public JobRetrievalException(JobID jobID, Throwable cause) {
+		super(jobID, cause);
+	}
+}
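
A hedged note on intended use: callers wrap lookup failures in this exception so that attach/retrieval errors stay distinguishable from execution failures. A tiny illustrative sketch (the ensureKnown helper is hypothetical):

    // Illustrative only -- shows where JobRetrievalException is meant to surface.
    import org.apache.flink.api.common.JobID;
    import org.apache.flink.runtime.client.JobRetrievalException;

    final class JobLookup {
        static void ensureKnown(JobID jobID, boolean knownToJobManager) throws JobRetrievalException {
            if (!knownToJobManager) {
                throw new JobRetrievalException(jobID, "No job with this ID is known to the JobManager.");
            }
        }
    }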

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobSubmissionClientActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobSubmissionClientActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobSubmissionClientActor.java
new file mode 100644
index 0000000..2cc4a50
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/client/JobSubmissionClientActor.java
@@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.client;
+
+import akka.actor.ActorRef;
+import akka.actor.Props;
+import akka.actor.Status;
+import akka.dispatch.Futures;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.runtime.akka.ListeningBehaviour;
+import org.apache.flink.runtime.instance.ActorGateway;
+import org.apache.flink.runtime.instance.AkkaActorGateway;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
+import org.apache.flink.runtime.messages.JobClientMessages;
+import org.apache.flink.runtime.messages.JobClientMessages.SubmitJobAndWait;
+import org.apache.flink.runtime.messages.JobManagerMessages;
+import org.apache.flink.runtime.util.SerializedThrowable;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.io.IOException;
+import java.util.concurrent.Callable;
+
+
+/**
+ * Actor which handles the job submission process and provides job updates until completion.
+ */
+public class JobSubmissionClientActor extends JobClientActor {
+
+	/** JobGraph which shall be submitted to the JobManager */
+	private JobGraph jobGraph;
+	/** true if a JobSubmitSuccess message has been received */
+	private boolean jobSuccessfullySubmitted = false;
+
+	public JobSubmissionClientActor(
+			LeaderRetrievalService leaderRetrievalService,
+			FiniteDuration timeout,
+			boolean sysoutUpdates) {
+		super(leaderRetrievalService, timeout, sysoutUpdates);
+	}
+
+
+	@Override
+	public void connectedToJobManager() {
+		if (jobGraph != null && !jobSuccessfullySubmitted) {
+			// if we haven't yet submitted the job successfully
+			tryToSubmitJob();
+		}
+	}
+
+	@Override
+	protected Class getClientMessageClass() {
+		return SubmitJobAndWait.class;
+	}
+
+	@Override
+	public void handleCustomMessage(Object message) {
+		// submit a job to the JobManager
+		if (message instanceof SubmitJobAndWait) {
+			// sanity check that no job was submitted through this actor before -
+			// it is a one-shot actor after all
+			if (this.client == null) {
+				jobGraph = ((SubmitJobAndWait) message).jobGraph();
+				if (jobGraph == null) {
+					LOG.error("Received null JobGraph");
+					sender().tell(
+						decorateMessage(new Status.Failure(new Exception("JobGraph is null"))),
+						getSelf());
+				} else {
+					LOG.info("Received job {} ({}).", jobGraph.getName(), jobGraph.getJobID());
+
+					this.client = getSender();
+
+					// is only successful if we already know the job manager leader
+					if (jobManager != null) {
+						tryToSubmitJob();
+					}
+				}
+			} else {
+				// repeated submission - tell failure to sender and kill self
+				String msg = "Received repeated 'SubmitJobAndWait'";
+				LOG.error(msg);
+				getSender().tell(
+					decorateMessage(new Status.Failure(new Exception(msg))), ActorRef.noSender());
+
+				terminate();
+			}
+		} else if (message instanceof JobManagerMessages.JobSubmitSuccess) {
+			// job was successfully submitted :-)
+			LOG.info("Job {} was successfully submitted to the JobManager {}.",
+				((JobManagerMessages.JobSubmitSuccess) message).jobId(),
+				getSender().path());
+			jobSuccessfullySubmitted = true;
+		} else if (JobClientMessages.getSubmissionTimeout().equals(message)) {
+			// check if our job submission was successful in the meantime
+			if (!jobSuccessfullySubmitted) {
+				if (isClientConnected()) {
+					client.tell(
+						decorateMessage(new Status.Failure(
+							new JobClientActorSubmissionTimeoutException("Job submission to the JobManager timed out. " +
+								"You may increase '" + ConfigConstants.AKKA_CLIENT_TIMEOUT + "' in case the JobManager " +
+								"needs more time to configure and confirm the job submission."))),
+						getSelf());
+				}
+
+				// We haven't heard back from the job manager after sending the job graph to it,
+				// therefore terminate
+				terminate();
+			}
+		} else {
+			LOG.error("{} received unknown message: {}", getClass(), message);
+		}
+	}
+
+	private void tryToSubmitJob() {
+		LOG.info("Sending message to JobManager {} to submit job {} ({}) and wait for progress",
+			jobManager.path().toString(), jobGraph.getName(), jobGraph.getJobID());
+
+		Futures.future(new Callable<Object>() {
+			@Override
+			public Object call() throws Exception {
+				ActorGateway jobManagerGateway = new AkkaActorGateway(jobManager, leaderSessionID);
+
+				LOG.info("Uploading jar files to job manager {}.", jobManager.path());
+
+				try {
+					jobGraph.uploadUserJars(jobManagerGateway, timeout);
+				} catch (IOException exception) {
+					getSelf().tell(
+						decorateMessage(new JobManagerMessages.JobResultFailure(
+							new SerializedThrowable(
+								new JobSubmissionException(
+									jobGraph.getJobID(),
+									"Could not upload the jar files to the job manager.",
+									exception)
+							)
+						)),
+						ActorRef.noSender()
+					);
+				}
+
+				LOG.info("Submitting job to the job manager {}.", jobManager.path());
+
+				jobManager.tell(
+					decorateMessage(
+						new JobManagerMessages.SubmitJob(
+							jobGraph,
+							ListeningBehaviour.EXECUTION_RESULT_AND_STATE_CHANGES)),
+					getSelf());
+
+				// issue a SubmissionTimeout message to check that we submit the job within
+				// the given timeout
+				getContext().system().scheduler().scheduleOnce(
+					timeout,
+					getSelf(),
+					decorateMessage(JobClientMessages.getSubmissionTimeout()),
+					getContext().dispatcher(),
+					ActorRef.noSender());
+
+				return null;
+			}
+		}, getContext().dispatcher());
+	}
+
+
+	public static Props createActorProps(
+			LeaderRetrievalService leaderRetrievalService,
+			FiniteDuration timeout,
+			boolean sysoutUpdates) {
+		return Props.create(
+			JobSubmissionClientActor.class,
+			leaderRetrievalService,
+			timeout,
+			sysoutUpdates);
+	}
+}
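
Since the actor above is one-shot (a single SubmitJobAndWait, after which it lives only until the result arrives), the client side reduces to a plain ask. A hedged sketch, with actor creation elided and the helper names being assumptions:

    // Hypothetical client-side sketch for JobSubmissionClientActor -- not part of this patch.
    import akka.actor.ActorRef;
    import akka.pattern.Patterns;
    import org.apache.flink.runtime.jobgraph.JobGraph;
    import org.apache.flink.runtime.messages.JobClientMessages;
    import scala.concurrent.Await;
    import scala.concurrent.Future;
    import scala.concurrent.duration.FiniteDuration;

    final class SubmissionSketch {
        // Sends the JobGraph to the submission actor and blocks for the final answer,
        // which is a JobResultSuccess/JobResultFailure or an akka Status.Failure.
        static Object submitAndWait(ActorRef submissionActor, JobGraph jobGraph, FiniteDuration timeout)
                throws Exception {
            Future<Object> response = Patterns.ask(
                submissionActor,
                new JobClientMessages.SubmitJobAndWait(jobGraph),
                timeout.toMillis());
            return Await.result(response, timeout);
        }
    }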

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
index 7a94c0f..d7e40a3 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
@@ -931,6 +931,7 @@ public class ExecutionGraph {
 		intermediateResults.clear();
 		currentExecutions.clear();
 		requiredJarFiles.clear();
+		requiredClasspaths.clear();
 		jobStatusListeners.clear();
 		executionListeners.clear();
 

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobInfo.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobInfo.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobInfo.scala
index 67d7a06..a84650c 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobInfo.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobInfo.scala
@@ -21,6 +21,7 @@ package org.apache.flink.runtime.jobmanager
 import akka.actor.ActorRef
 import org.apache.flink.runtime.akka.ListeningBehaviour
 
+
 /**
  * Utility class to store job information on the [[JobManager]]. The JobInfo stores which actor
  * submitted the job, when the start time and, if already terminated, the end time was.
@@ -37,11 +38,14 @@ import org.apache.flink.runtime.akka.ListeningBehaviour
  * @param start Starting time
  */
 class JobInfo(
-  val client: ActorRef,
-  val listeningBehaviour: ListeningBehaviour,
+  client: ActorRef,
+  listeningBehaviour: ListeningBehaviour,
   val start: Long,
   val sessionTimeout: Long) extends Serializable {
 
+  val clients = scala.collection.mutable.HashSet[(ActorRef, ListeningBehaviour)]()
+  clients += ((client, listeningBehaviour))
+
   var sessionAlive = sessionTimeout > 0
 
   var lastActive = 0L
@@ -58,10 +62,62 @@ class JobInfo(
     }
   }
 
-  override def toString = s"JobInfo(client: $client ($listeningBehaviour), start: $start)"
+
+  /**
+    * Notifies all clients by sending a message
+    * @param message the message to send
+    */
+  def notifyClients(message: Any) = {
+    clients foreach {
+      case (clientActor, _) =>
+        clientActor ! message
+    }
+  }
+
+  /**
+    * Notifies all clients that are not in detached mode
+    * @param message the message to send to non-detached clients
+    */
+  def notifyNonDetachedClients(message: Any) = {
+    clients foreach {
+      case (_, ListeningBehaviour.DETACHED) =>
+        // do nothing
+      case (clientActor, _) =>
+        clientActor ! message
+    }
+  }
+
+  /**
+    * Sends a message to job clients that match the listening behaviour
+    * @param message the message to send to all clients
+    * @param listeningBehaviour the desired listening behaviour
+    */
+  def notifyClients(message: Any, listeningBehaviour: ListeningBehaviour) = {
+    clients foreach {
+      case (clientActor, `listeningBehaviour`) =>
+        clientActor ! message
+      case _ =>
+    }
+  }
 
   def setLastActive() =
     lastActive = System.currentTimeMillis()
+
+
+  override def toString = s"JobInfo(clients: $clients, start: $start)"
+
+  override def equals(other: Any): Boolean = other match {
+    case that: JobInfo =>
+      clients == that.clients &&
+        start == that.start &&
+        sessionTimeout == that.sessionTimeout
+    case _ => false
+  }
+
+  override def hashCode(): Int = {
+    val state = Seq(clients, start, sessionTimeout)
+    state.map(_.hashCode()).foldLeft(0)((a, b) => 31 * a + b)
+  }
 }
 
 object JobInfo{
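
The substantive change here: JobInfo now fans results out to a set of (client, listening behaviour) pairs rather than a single submitting actor. As a self-contained illustration of the same filtering pattern (simplified stand-in types, not the Flink classes):

    // Simplified Java stand-in for JobInfo's client fan-out -- illustrative only.
    import java.util.AbstractMap.SimpleEntry;
    import java.util.LinkedHashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Consumer;

    final class ClientRegistry<C> {
        enum Behaviour { DETACHED, EXECUTION_RESULT, EXECUTION_RESULT_AND_STATE_CHANGES }

        private final Set<Map.Entry<C, Behaviour>> clients = new LinkedHashSet<>();

        void register(C client, Behaviour behaviour) {
            clients.add(new SimpleEntry<>(client, behaviour));
        }

        // Mirrors JobInfo.notifyNonDetachedClients: detached clients receive no result messages.
        void notifyNonDetached(Consumer<C> send) {
            for (Map.Entry<C, Behaviour> entry : clients) {
                if (entry.getValue() != Behaviour.DETACHED) {
                    send.accept(entry.getKey());
                }
            }
        }
    }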

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
index 0587987..d35fb0a 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
@@ -19,18 +19,16 @@
 package org.apache.flink.runtime.jobmanager
 
 import java.io.{File, IOException}
-import java.net.{BindException, ServerSocket, UnknownHostException, InetAddress, InetSocketAddress}
+import java.net.{BindException, InetAddress, InetSocketAddress, ServerSocket, UnknownHostException}
 import java.lang.management.ManagementFactory
 import java.util.UUID
 import java.util.concurrent.{ExecutorService, TimeUnit, TimeoutException}
 import javax.management.ObjectName
 
-import akka.actor.Status.{Success, Failure}
+import akka.actor.Status.{Failure, Success}
 import akka.actor._
 import akka.pattern.ask
-
 import grizzled.slf4j.Logger
-
 import org.apache.flink.api.common.{ExecutionConfig, JobID}
 import org.apache.flink.configuration.{ConfigConstants, Configuration, GlobalConfiguration}
 import org.apache.flink.core.fs.FileSystem
@@ -41,8 +39,8 @@ import org.apache.flink.runtime.accumulators.AccumulatorSnapshot
 import org.apache.flink.runtime.akka.{AkkaUtils, ListeningBehaviour}
 import org.apache.flink.runtime.blob.BlobServer
 import org.apache.flink.runtime.checkpoint._
-import org.apache.flink.runtime.checkpoint.savepoint.{SavepointLoader, SavepointStoreFactory, SavepointStore}
-import org.apache.flink.runtime.checkpoint.stats.{CheckpointStatsTracker, SimpleCheckpointStatsTracker, DisabledCheckpointStatsTracker}
+import org.apache.flink.runtime.checkpoint.savepoint.{SavepointLoader, SavepointStore, SavepointStoreFactory}
+import org.apache.flink.runtime.checkpoint.stats.{CheckpointStatsTracker, DisabledCheckpointStatsTracker, SimpleCheckpointStatsTracker}
 import org.apache.flink.runtime.client._
 import org.apache.flink.runtime.execution.SuppressRestartsException
 import org.apache.flink.runtime.clusterframework.FlinkResourceManager
@@ -58,24 +56,22 @@ import org.apache.flink.runtime.jobgraph.{JobGraph, JobStatus, JobVertexID}
 import org.apache.flink.runtime.jobmanager.SubmittedJobGraphStore.SubmittedJobGraphListener
 import org.apache.flink.runtime.jobmanager.scheduler.{Scheduler => FlinkScheduler}
 import org.apache.flink.runtime.leaderelection.{LeaderContender, LeaderElectionService, StandaloneLeaderElectionService}
-
 import org.apache.flink.runtime.messages.ArchiveMessages.ArchiveExecutionGraph
 import org.apache.flink.runtime.messages.ExecutionGraphMessages.JobStatusChanged
 import org.apache.flink.runtime.messages.JobManagerMessages._
-import org.apache.flink.runtime.messages.Messages.{Disconnect, Acknowledge}
+import org.apache.flink.runtime.messages.Messages.{Acknowledge, Disconnect}
 import org.apache.flink.runtime.messages.RegistrationMessages._
 import org.apache.flink.runtime.messages.TaskManagerMessages.{Heartbeat, SendStackTrace}
 import org.apache.flink.runtime.messages.TaskMessages.{PartitionState, UpdateTaskExecutionState}
 import org.apache.flink.runtime.messages.accumulators.{AccumulatorMessage, AccumulatorResultStringsFound, AccumulatorResultsErroneous, AccumulatorResultsFound, RequestAccumulatorResults, RequestAccumulatorResultsStringified}
-import org.apache.flink.runtime.messages.checkpoint.{DeclineCheckpoint, AbstractCheckpointMessage, AcknowledgeCheckpoint}
-
+import org.apache.flink.runtime.messages.checkpoint.{AbstractCheckpointMessage, AcknowledgeCheckpoint, DeclineCheckpoint}
 import org.apache.flink.runtime.messages.webmonitor.InfoMessage
 import org.apache.flink.runtime.messages.webmonitor._
 import org.apache.flink.runtime.metrics.{MetricRegistry => FlinkMetricRegistry}
 import org.apache.flink.runtime.metrics.groups.JobManagerMetricGroup
 import org.apache.flink.runtime.process.ProcessReaper
-import org.apache.flink.runtime.query.{UnknownKvStateLocation, KvStateMessage}
-import org.apache.flink.runtime.query.KvStateMessage.{NotifyKvStateUnregistered, LookupKvStateLocation, NotifyKvStateRegistered}
+import org.apache.flink.runtime.query.{KvStateMessage, UnknownKvStateLocation}
+import org.apache.flink.runtime.query.KvStateMessage.{LookupKvStateLocation, NotifyKvStateRegistered, NotifyKvStateUnregistered}
 import org.apache.flink.runtime.security.SecurityUtils
 import org.apache.flink.runtime.security.SecurityUtils.FlinkSecuredRunner
 import org.apache.flink.runtime.taskmanager.TaskManager
@@ -83,7 +79,6 @@ import org.apache.flink.runtime.util._
 import org.apache.flink.runtime.webmonitor.{WebMonitor, WebMonitorUtils}
 import org.apache.flink.runtime.{FlinkActor, LeaderSessionMessageFilter, LogMessages}
 import org.apache.flink.util.{ConfigurationUtil, InstantiationUtil, NetUtils}
-
 import org.jboss.netty.channel.ChannelException
 
 import scala.annotation.tailrec
@@ -479,6 +474,22 @@ class JobManager(
 
       submitJob(jobGraph, jobInfo)
 
+    case RegisterJobClient(jobID, listeningBehaviour) =>
+      val client = sender()
+      currentJobs.get(jobID) match {
+        case Some((executionGraph, jobInfo)) =>
+          log.info(s"Registering client for job $jobID")
+          jobInfo.clients += ((client, listeningBehaviour))
+          val listener = new StatusListenerMessenger(client, leaderSessionID.orNull)
+          executionGraph.registerJobStatusListener(listener)
+          if (listeningBehaviour == ListeningBehaviour.EXECUTION_RESULT_AND_STATE_CHANGES) {
+            executionGraph.registerExecutionListener(listener)
+          }
+          client ! decorateMessage(RegisterJobClientSuccess(jobID))
+        case None =>
+          client ! decorateMessage(JobNotFound(jobID))
+      }
+
     case RecoverSubmittedJob(submittedJobGraph) =>
       if (!currentJobs.contains(submittedJobGraph.getJobId)) {
         submitJob(
@@ -788,50 +799,53 @@ class JobManager(
               }
 
               // is the client waiting for the job result?
-              if (jobInfo.listeningBehaviour != ListeningBehaviour.DETACHED) {
-                newJobStatus match {
-                  case JobStatus.FINISHED =>
-                  try {
-                    val accumulatorResults = executionGraph.getAccumulatorsSerialized()
-                    val result = new SerializedJobExecutionResult(
-                      jobID,
-                      jobInfo.duration,
-                      accumulatorResults)
-
-                    jobInfo.client ! decorateMessage(JobResultSuccess(result))
-                  } catch {
-                    case e: Exception =>
-                      log.error(s"Cannot fetch final accumulators for job $jobID", e)
-                      val exception = new JobExecutionException(jobID,
-                        "Failed to retrieve accumulator results.", e)
+              newJobStatus match {
+                case JobStatus.FINISHED =>
+                try {
+                  val accumulatorResults = executionGraph.getAccumulatorsSerialized()
+                  val result = new SerializedJobExecutionResult(
+                    jobID,
+                    jobInfo.duration,
+                    accumulatorResults)
+
+                  jobInfo.notifyNonDetachedClients(
+                    decorateMessage(JobResultSuccess(result)))
+                } catch {
+                  case e: Exception =>
+                    log.error(s"Cannot fetch final accumulators for job $jobID", e)
+                    val exception = new JobExecutionException(jobID,
+                      "Failed to retrieve accumulator results.", e)
 
-                      jobInfo.client ! decorateMessage(JobResultFailure(
-                        new SerializedThrowable(exception)))
-                  }
+                    jobInfo.notifyNonDetachedClients(
+                      decorateMessage(JobResultFailure(
+                        new SerializedThrowable(exception))))
+                }
 
-                  case JobStatus.CANCELED =>
-                    // the error may be packed as a serialized throwable
-                    val unpackedError = SerializedThrowable.get(
-                      error, executionGraph.getUserClassLoader())
+                case JobStatus.CANCELED =>
+                  // the error may be packed as a serialized throwable
+                  val unpackedError = SerializedThrowable.get(
+                    error, executionGraph.getUserClassLoader())
 
-                    jobInfo.client ! decorateMessage(JobResultFailure(
+                  jobInfo.notifyNonDetachedClients(
+                    decorateMessage(JobResultFailure(
                       new SerializedThrowable(
-                        new JobCancellationException(jobID, "Job was cancelled.", unpackedError))))
+                        new JobCancellationException(jobID, "Job was cancelled.", unpackedError)))))
 
-                  case JobStatus.FAILED =>
-                    val unpackedError = SerializedThrowable.get(
-                      error, executionGraph.getUserClassLoader())
+                case JobStatus.FAILED =>
+                  val unpackedError = SerializedThrowable.get(
+                    error, executionGraph.getUserClassLoader())
 
-                    jobInfo.client ! decorateMessage(JobResultFailure(
+                  jobInfo.notifyNonDetachedClients(
+                    decorateMessage(JobResultFailure(
                       new SerializedThrowable(
-                        new JobExecutionException(jobID, "Job execution failed.", unpackedError))))
-
-                  case x =>
-                    val exception = new JobExecutionException(jobID, s"$x is not a terminal state.")
-                    jobInfo.client ! decorateMessage(JobResultFailure(
-                      new SerializedThrowable(exception)))
-                    throw exception
-                }
+                        new JobExecutionException(jobID, "Job execution failed.", unpackedError)))))
+
+                case x =>
+                  val exception = new JobExecutionException(jobID, s"$x is not a terminal state.")
+                  jobInfo.notifyNonDetachedClients(
+                    decorateMessage(JobResultFailure(
+                      new SerializedThrowable(exception))))
+                  throw exception
               }
             }(context.dispatcher)
           }
@@ -919,6 +933,18 @@ class JobManager(
           archive forward decorateMessage(RequestJob(jobID))
       }
 
+    case RequestClassloadingProps(jobID) =>
+      currentJobs.get(jobID) match {
+        case Some((graph, jobInfo)) =>
+          sender() ! decorateMessage(
+            ClassloadingProps(
+              libraryCacheManager.getBlobServerPort,
+              graph.getRequiredJarFiles,
+              graph.getRequiredClasspaths))
+        case None =>
+          sender() ! decorateMessage(JobNotFound(jobID))
+      }
+
     case RequestBlobManagerPort =>
       sender ! decorateMessage(libraryCacheManager.getBlobServerPort)
 
@@ -1052,11 +1078,10 @@ class JobManager(
    */
   private def submitJob(jobGraph: JobGraph, jobInfo: JobInfo, isRecovery: Boolean = false): Unit = {
     if (jobGraph == null) {
-      jobInfo.client ! decorateMessage(JobResultFailure(
-        new SerializedThrowable(
-          new JobSubmissionException(null, "JobGraph must not be null.")
-        )
-      ))
+      jobInfo.notifyClients(
+        decorateMessage(JobResultFailure(
+          new SerializedThrowable(
+            new JobSubmissionException(null, "JobGraph must not be null.")))))
     }
     else {
       val jobId = jobGraph.getJobID
@@ -1259,13 +1284,15 @@ class JobManager(
         executionGraph.registerJobStatusListener(
           new StatusListenerMessenger(self, leaderSessionID.orNull))
 
-        if (jobInfo.listeningBehaviour == ListeningBehaviour.EXECUTION_RESULT_AND_STATE_CHANGES) {
+        jobInfo.clients foreach {
           // the sender wants to be notified about state changes
-          val listener  = new StatusListenerMessenger(jobInfo.client, leaderSessionID.orNull)
-
-          executionGraph.registerExecutionListener(listener)
-          executionGraph.registerJobStatusListener(listener)
+          case (client, ListeningBehaviour.EXECUTION_RESULT_AND_STATE_CHANGES) =>
+            val listener  = new StatusListenerMessenger(client, leaderSessionID.orNull)
+            executionGraph.registerExecutionListener(listener)
+            executionGraph.registerJobStatusListener(listener)
+          case _ => // do nothing
         }
+
       } catch {
         case t: Throwable =>
           log.error(s"Failed to submit job $jobId ($jobName)", t)
@@ -1283,7 +1310,8 @@ class JobManager(
             new JobExecutionException(jobId, s"Failed to submit job $jobId ($jobName)", t)
           }
 
-          jobInfo.client ! decorateMessage(JobResultFailure(new SerializedThrowable(rt)))
+          jobInfo.notifyClients(
+            decorateMessage(JobResultFailure(new SerializedThrowable(rt))))
           return
       }
 
@@ -1338,7 +1366,8 @@ class JobManager(
             }
           }
 
-          jobInfo.client ! decorateMessage(JobSubmitSuccess(jobGraph.getJobID))
+          jobInfo.notifyClients(
+            decorateMessage(JobSubmitSuccess(jobGraph.getJobID)))
 
           if (leaderElectionService.hasLeadership) {
             // There is a small chance that multiple job managers schedule the same job after if
@@ -1740,10 +1769,10 @@ class JobManager(
       future {
         eg.suspend(cause)
 
-        if (jobInfo.listeningBehaviour != ListeningBehaviour.DETACHED) {
-          jobInfo.client ! decorateMessage(
-            Failure(new JobExecutionException(jobID, "All jobs are cancelled and cleared.", cause)))
-        }
+        jobInfo.notifyNonDetachedClients(
+          decorateMessage(
+            Failure(
+              new JobExecutionException(jobID, "All jobs are cancelled and cleared.", cause))))
       }(context.dispatcher)
     }
 

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobClientMessages.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobClientMessages.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobClientMessages.scala
index a60fa7a..1f29e32 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobClientMessages.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobClientMessages.scala
@@ -21,6 +21,7 @@ package org.apache.flink.runtime.messages
 import java.util.UUID
 
 import akka.actor.ActorRef
+import org.apache.flink.api.common.JobID
 import org.apache.flink.runtime.jobgraph.JobGraph
 
 /**
@@ -29,7 +30,7 @@ import org.apache.flink.runtime.jobgraph.JobGraph
 object JobClientMessages {
 
   /**
-   * This message is sent to the JobClient (via ask) to submit a job and
+   * This message is sent to the JobClientActor (via ask) to submit a job and
    * get a response when the job execution has finished.
    * 
    * The response to this message is a
@@ -40,15 +41,11 @@ object JobClientMessages {
   case class SubmitJobAndWait(jobGraph: JobGraph)
 
   /**
-   * This message is sent to the JobClient (via ask) to submit a job and 
-   * return as soon as the result of the submit operation is known. 
-   *
-   * The response to this message is a
-   * [[org.apache.flink.api.common.JobSubmissionResult]]
-   *
-   * @param jobGraph The job to be executed.
-   */
-  case class SubmitJobDetached(jobGraph: JobGraph)
+    * This message is sent to the JobClientActor to ask it to register with the JobManager
+    * and then return once the job execution is complete.
+    * @param jobID The job id
+    */
+  case class AttachToJobAndWait(jobID: JobID)
 
   /** Notifies the JobClientActor about a new leader address and a leader session ID.
     *
@@ -66,9 +63,13 @@ object JobClientMessages {
   /** Message which is triggered when the submission timeout has been reached. */
   case object SubmissionTimeout extends RequiresLeaderSessionID
 
-  /** Messaeg which is triggered when the connection timeout has been reached. */
+  /** Message which is triggered when the JobClient registration at the JobManager times out */
+  case object RegistrationTimeout extends RequiresLeaderSessionID
+
+  /** Message which is triggered when the connection timeout has been reached. */
   case object ConnectionTimeout extends RequiresLeaderSessionID
 
   def getSubmissionTimeout(): AnyRef = SubmissionTimeout
+  def getRegistrationTimeout(): AnyRef = RegistrationTimeout
   def getConnectionTimeout(): AnyRef = ConnectionTimeout
 }
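
SubmitJobDetached is gone from this file; the new AttachToJobAndWait message lets a second client attach to a job that is already running. A hedged sketch of that client call (the attachment-capable JobClientActor is created elsewhere; the names here are assumptions):

    // Hypothetical sketch of attaching to a running job -- not part of this patch.
    import akka.actor.ActorRef;
    import akka.pattern.Patterns;
    import org.apache.flink.api.common.JobID;
    import org.apache.flink.runtime.messages.JobClientMessages;
    import scala.concurrent.Await;
    import scala.concurrent.duration.FiniteDuration;

    final class AttachSketch {
        // Asks an attachment-capable JobClientActor to register for jobID at the
        // JobManager and blocks until the job result (or a failure) comes back.
        static Object attachAndWait(ActorRef attachmentActor, JobID jobID, FiniteDuration timeout)
                throws Exception {
            return Await.result(
                Patterns.ask(attachmentActor, new JobClientMessages.AttachToJobAndWait(jobID), timeout.toMillis()),
                timeout);
        }
    }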

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobManagerMessages.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobManagerMessages.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobManagerMessages.scala
index 14f72b0..40c4dcf 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobManagerMessages.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobManagerMessages.scala
@@ -18,6 +18,7 @@
 
 package org.apache.flink.runtime.messages
 
+import java.net.URL
 import java.util.UUID
 
 import akka.actor.ActorRef
@@ -69,6 +70,19 @@ object JobManagerMessages {
     extends RequiresLeaderSessionID
 
   /**
+    * Registers the sender of the message as the client for the provided job identifier.
+    * This message is acknowledged by the JobManager with [[RegisterJobClientSuccess]]
+    * or [[JobNotFound]] if the job was not running.
+    * @param jobID The ID of the job
+    * @param listeningBehaviour The types of updates which will be sent to the sender
+    * after registration
+    */
+  case class RegisterJobClient(
+      jobID: JobID,
+      listeningBehaviour: ListeningBehaviour)
+    extends RequiresLeaderSessionID
+
+  /**
    * Triggers the recovery of the job with the given ID.
    *
    * @param jobId ID of the job to recover
@@ -195,6 +209,23 @@ object JobManagerMessages {
   case object RequestTotalNumberOfSlots
 
   /**
+    * Requests all entities necessary for reconstructing a job class loader.
+    * May respond with [[ClassloadingProps]] or [[JobNotFound]].
+    * @param jobId The ID of the job whose classloading properties are requested
+    */
+  case class RequestClassloadingProps(jobId: JobID)
+
+  /**
+    * Response to [[RequestClassloadingProps]]
+    * @param blobManagerPort The port of the blob manager
+    * @param requiredJarFiles The blob keys of the required jar files
+    * @param requiredClasspaths The URLs of the required classpaths
+    */
+  case class ClassloadingProps(blobManagerPort: Integer,
+                               requiredJarFiles: java.util.List[BlobKey],
+                               requiredClasspaths: java.util.List[URL])
+
+  /**
    * Requests the port of the blob manager from the job manager. The result is sent back to the
    * sender as an [[Int]].
    */
@@ -218,16 +249,27 @@ object JobManagerMessages {
   case class JobSubmitSuccess(jobId: JobID)
 
   /**
+    * Denotes a successful registration of a JobClientActor for a running job
+    * @param jobId The job id of the registered job
+    */
+  case class RegisterJobClientSuccess(jobId: JobID)
+
+  /**
+    * Denotes messages which contain the result of a completed job execution
+    */
+  sealed trait JobResultMessage
+
+  /**
    * Denotes a successful job execution.
    * @param result The result of the job execution, in serialized form.
    */
-  case class JobResultSuccess(result: SerializedJobExecutionResult)
+  case class JobResultSuccess(result: SerializedJobExecutionResult) extends JobResultMessage
 
   /**
    * Denotes an unsuccessful job execution.
    * @param cause The exception that caused the job to fail, in serialized form.
    */
-  case class JobResultFailure(cause: SerializedThrowable)
+  case class JobResultFailure(cause: SerializedThrowable) extends JobResultMessage
 
 
   sealed trait CancellationResponse{
@@ -316,7 +358,7 @@ object JobManagerMessages {
 
   /**
    * Denotes that there is no job with [[jobID]] retrievable. This message can be the response of
-   * [[RequestJob]] or [[RequestJobStatus]].
+   * [[RequestJob]], [[RequestJobStatus]] or [[RegisterJobClient]].
    *
    * @param jobID
    */
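
Tying the new messages together: a client that wants to rebuild the user-code class loader for a running job asks the JobManager with RequestClassloadingProps and pattern-matches the reply. A hedged round-trip sketch (gateway handling simplified; the fetch helper is an assumption):

    // Hypothetical round trip for RequestClassloadingProps -- illustrative only.
    import akka.actor.ActorRef;
    import akka.pattern.Patterns;
    import org.apache.flink.api.common.JobID;
    import org.apache.flink.runtime.messages.JobManagerMessages;
    import scala.concurrent.Await;
    import scala.concurrent.duration.FiniteDuration;

    final class ClassloadingPropsSketch {
        static void fetch(ActorRef jobManager, JobID jobID, FiniteDuration timeout) throws Exception {
            Object reply = Await.result(
                Patterns.ask(jobManager, new JobManagerMessages.RequestClassloadingProps(jobID), timeout.toMillis()),
                timeout);
            if (reply instanceof JobManagerMessages.ClassloadingProps) {
                JobManagerMessages.ClassloadingProps props = (JobManagerMessages.ClassloadingProps) reply;
                // props.blobManagerPort(), props.requiredJarFiles() and props.requiredClasspaths()
                // carry everything needed to rebuild the user-code class loader.
            } else {
                // JobNotFound: the job is not (or no longer) known to this JobManager.
            }
        }
    }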


[46/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/gelly/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/gelly/index.md b/docs/apis/batch/libs/gelly/index.md
deleted file mode 100644
index 643a318..0000000
--- a/docs/apis/batch/libs/gelly/index.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: "Gelly: Flink Graph API"
-# Top navigation
-top-nav-group: libs
-top-nav-pos: 1
-top-nav-title: "Graphs: Gelly"
-# Sub navigation
-sub-nav-group: batch
-sub-nav-id: gelly
-sub-nav-pos: 1
-sub-nav-parent: libs
-sub-nav-title: Gelly
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Gelly is a Graph API for Flink. It contains a set of methods and utilities which aim to simplify the development of graph analysis applications in Flink. In Gelly, graphs can be transformed and modified using high-level functions similar to the ones provided by the batch processing API. Gelly provides methods to create, transform and modify graphs, as well as a library of graph algorithms.
-
-{:#markdown-toc}
-* [Graph API](graph_api.html)
-* [Iterative Graph Processing](iterative_graph_processing.html)
-* [Library Methods](library_methods.html)
-* [Graph Algorithms](graph_algorithms.html)
-* [Graph Generators](graph_generators.html)
-
-Using Gelly
------------
-
-Gelly is currently part of the *libraries* Maven project. All relevant classes are located in the *org.apache.flink.graph* package.
-
-Add the following dependency to your `pom.xml` to use Gelly.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight xml %}
-<dependency>
-    <groupId>org.apache.flink</groupId>
-    <artifactId>flink-gelly{{ site.scala_version_suffix }}</artifactId>
-    <version>{{site.version}}</version>
-</dependency>
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight xml %}
-<dependency>
-    <groupId>org.apache.flink</groupId>
-    <artifactId>flink-gelly-scala{{ site.scala_version_suffix }}</artifactId>
-    <version>{{site.version}}</version>
-</dependency>
-{% endhighlight %}
-</div>
-</div>
-
-Note that Gelly is currently not part of the binary distribution. See linking with it for cluster execution [here]({{ site.baseurl }}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-The remaining sections provide a description of available methods and present several examples of how to use Gelly and how to mix it with the Flink DataSet API. After reading this guide, you might also want to check the {% gh_link /flink-libraries/flink-gelly-examples/ "Gelly examples" %}.
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/gelly/iterative_graph_processing.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/gelly/iterative_graph_processing.md b/docs/apis/batch/libs/gelly/iterative_graph_processing.md
deleted file mode 100644
index d7a096e..0000000
--- a/docs/apis/batch/libs/gelly/iterative_graph_processing.md
+++ /dev/null
@@ -1,971 +0,0 @@
----
-title: Iterative Graph Processing
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: gelly
-sub-nav-title: Iterative Graph Processing
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Gelly exploits Flink's efficient iteration operators to support large-scale iterative graph processing. Currently, we provide implementations of the vertex-centric, scatter-gather, and gather-sum-apply models. In the following sections, we describe these abstractions and show how you can use them in Gelly.
-
-* This will be replaced by the TOC
-{:toc}
-
-## Vertex-Centric Iterations
-The vertex-centric model, also known as "think like a vertex" or "Pregel", expresses computation from the perspective of a vertex in the graph.
-The computation proceeds in synchronized iteration steps, called supersteps. In each superstep, each vertex executes one user-defined function.
-Vertices communicate with other vertices through messages. A vertex can send a message to any other vertex in the graph, as long as it knows its unique ID.
-
-The computational model is shown in the figure below. The dotted boxes correspond to parallelization units.
-In each superstep, all active vertices execute the
-same user-defined computation in parallel. Supersteps are executed synchronously, so that messages sent during one superstep are guaranteed to be delivered in the beginning of the next superstep.
-
-<p class="text-center">
-    <img alt="Vertex-Centric Computational Model" width="70%" src="fig/vertex-centric supersteps.png"/>
-</p>
-
-To use vertex-centric iterations in Gelly, the user only needs to define the vertex compute function, `ComputeFunction`.
-This function and the maximum number of iterations to run are given as parameters to Gelly's `runVertexCentricIteration`. This method will execute the vertex-centric iteration on the input Graph and return a new Graph, with updated vertex values. An optional message combiner, `MessageCombiner`, can be defined to reduce communication costs.
-
-Let us consider computing Single-Source-Shortest-Paths with vertex-centric iterations. Initially, each vertex has a value of infinite distance, except from the source vertex, which has a value of zero. During the first superstep, the source propagates distances to its neighbors. During the following supersteps, each vertex checks its received messages and chooses the minimum distance among them. If this distance is smaller than its current value, it updates its state and produces messages for its neighbors. If a vertex does not change its value during a superstep, then it does not produce any messages for its neighbors for the next superstep. The algorithm converges when there are no value updates or the maximum number of supersteps has been reached. In this algorithm, a message combiner can be used to reduce the number of messages sent to a target vertex.
-
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// read the input graph
-Graph<Long, Double, Double> graph = ...
-
-// define the maximum number of iterations
-int maxIterations = 10;
-
-// Execute the vertex-centric iteration
-Graph<Long, Double, Double> result = graph.runVertexCentricIteration(
-            new SSSPComputeFunction(), new SSSPCombiner(), maxIterations);
-
-// Extract the vertices as the result
-DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();
-
-
-// - - -  UDFs - - - //
-
-public static final class SSSPComputeFunction extends ComputeFunction<Long, Double, Double, Double> {
-
-public void compute(Vertex<Long, Double> vertex, MessageIterator<Double> messages) {
-
-    double minDistance = (vertex.getId().equals(srcId)) ? 0d : Double.POSITIVE_INFINITY;
-
-    for (Double msg : messages) {
-        minDistance = Math.min(minDistance, msg);
-    }
-
-    if (minDistance < vertex.getValue()) {
-        setNewVertexValue(minDistance);
-        for (Edge<Long, Double> e: getEdges()) {
-            sendMessageTo(e.getTarget(), minDistance + e.getValue());
-        }
-    }
-}
-}
-
-// message combiner
-public static final class SSSPCombiner extends MessageCombiner<Long, Double> {
-
-    public void combineMessages(MessageIterator<Double> messages) {
-
-        double minMessage = Double.POSITIVE_INFINITY;
-        for (Double msg: messages) {
-           minMessage = Math.min(minMessage, msg);
-        }
-        sendCombinedMessage(minMessage);
-    }
-}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// read the input graph
-val graph: Graph[Long, Double, Double] = ...
-
-// define the maximum number of iterations
-val maxIterations = 10
-
-// Execute the vertex-centric iteration
-val result = graph.runVertexCentricIteration(new SSSPComputeFunction, new SSSPCombiner, maxIterations)
-
-// Extract the vertices as the result
-val singleSourceShortestPaths = result.getVertices
-
-
-// - - -  UDFs - - - //
-
-final class SSSPComputeFunction extends ComputeFunction[Long, Double, Double, Double] {
-
-    override def compute(vertex: Vertex[Long, Double], messages: MessageIterator[Double]) = {
-
-    var minDistance = if (vertex.getId.equals(srcId)) 0 else Double.MaxValue
-
-    while (messages.hasNext) {
-        val msg = messages.next
-        if (msg < minDistance) {
-            minDistance = msg
-        }
-    }
-
-    if (vertex.getValue > minDistance) {
-        setNewVertexValue(minDistance)
-        for (edge: Edge[Long, Double] <- getEdges) {
-            sendMessageTo(edge.getTarget, minDistance + edge.getValue)
-        }
-    }
-}
-}
-
-// message combiner
-final class SSSPCombiner extends MessageCombiner[Long, Double] {
-
-    override def combineMessages(messages: MessageIterator[Double]) {
-
-        var minDistance = Double.MaxValue
-
-        while (messages.hasNext) {
-          val msg = messages.next
-          if (msg < minDistance) {
-            minDistance = msg
-          }
-        }
-        sendCombinedMessage(minDistance)
-    }
-}
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-## Configuring a Vertex-Centric Iteration
-A vertex-centric iteration can be configured using a `VertexCentricConfiguration` object.
-Currently, the following parameters can be specified:
-
-* <strong>Name</strong>: The name for the vertex-centric iteration. The name is displayed in logs and messages
-and can be specified using the `setName()` method.
-
-* <strong>Parallelism</strong>: The parallelism for the iteration. It can be set using the `setParallelism()` method.
-
-* <strong>Solution set in unmanaged memory</strong>: Defines whether the solution set is kept in managed memory (Flink's internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the `setSolutionSetUnmanagedMemory()` method.
-
-* <strong>Aggregators</strong>: Iteration aggregators can be registered using the `registerAggregator()` method. An iteration aggregator combines
-all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined `ComputeFunction`.
-
-* <strong>Broadcast Variables</strong>: DataSets can be added as [Broadcast Variables]({{site.baseurl}}/apis/batch/index.html#broadcast-variables) to the `ComputeFunction`, using the `addBroadcastSet()` method.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-Graph<Long, Double, Double> graph = ...
-
-// configure the iteration
-VertexCentricConfiguration parameters = new VertexCentricConfiguration();
-
-// set the iteration name
-parameters.setName("Gelly Iteration");
-
-// set the parallelism
-parameters.setParallelism(16);
-
-// register an aggregator
-parameters.registerAggregator("sumAggregator", new LongSumAggregator());
-
-// run the vertex-centric iteration, also passing the configuration parameters
-Graph<Long, Long, Double> result =
-            graph.runVertexCentricIteration(
-            new Compute(), null, maxIterations, parameters);
-
-// user-defined function
-public static final class Compute extends ComputeFunction {
-
-    LongSumAggregator aggregator = new LongSumAggregator();
-
-    public void preSuperstep() {
-
-        // retrieve the Aggregator
-        aggregator = getIterationAggregator("sumAggregator");
-    }
-
-
-    public void compute(Vertex<Long, Long> vertex, MessageIterator inMessages) {
-
-        //do some computation
-        Long partialValue = ...
-
-        // aggregate the partial value
-        aggregator.aggregate(partialValue);
-
-        // update the vertex value
-        setNewVertexValue(...);
-    }
-}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-
-val graph: Graph[Long, Long, Double] = ...
-
-val parameters = new VertexCentricConfiguration
-
-// set the iteration name
-parameters.setName("Gelly Iteration")
-
-// set the parallelism
-parameters.setParallelism(16)
-
-// register an aggregator
-parameters.registerAggregator("sumAggregator", new LongSumAggregator)
-
-// run the vertex-centric iteration, also passing the configuration parameters
-val result = graph.runVertexCentricIteration(new Compute, new Combiner, maxIterations, parameters)
-
-// user-defined function
-final class Compute extends ComputeFunction {
-
-    var aggregator = new LongSumAggregator
-
-    override def preSuperstep {
-
-        // retrieve the Aggregator
-        aggregator = getIterationAggregator("sumAggregator")
-    }
-
-
-    override def compute(vertex: Vertex[Long, Long], inMessages: MessageIterator[Long]) {
-
-        //do some computation
-        val partialValue = ...
-
-        // aggregate the partial value
-        aggregator.aggregate(partialValue)
-
-        // update the vertex value
-        setNewVertexValue(...)
-    }
-}
-
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-## Scatter-Gather Iterations
-The scatter-gather model, also known as "signal/collect" model, expresses computation from the perspective of a vertex in the graph. The computation proceeds in synchronized iteration steps, called supersteps. In each superstep, a vertex produces messages for other vertices and updates its value based on the messages it receives. To use scatter-gather iterations in Gelly, the user only needs to define how a vertex behaves in each superstep:
-
-* <strong>Scatter</strong>:  produces the messages that a vertex will send to other vertices.
-* <strong>Gather</strong>: updates the vertex value using received messages.
-
-Gelly provides methods for scatter-gather iterations. The user only needs to implement two functions, corresponding to the scatter and gather phases. The first function is a `ScatterFunction`, which allows a vertex to send out messages to other vertices. Messages are received during the same superstep as they are sent. The second function is `GatherFunction`, which defines how a vertex will update its value based on the received messages.
-These functions and the maximum number of iterations to run are given as parameters to Gelly's `runScatterGatherIteration`. This method will execute the scatter-gather iteration on the input Graph and return a new Graph, with updated vertex values.
-
-A scatter-gather iteration can be extended with information such as the total number of vertices, the in degree and out degree.
-Additionally, the  neighborhood type (in/out/all) over which to run the scatter-gather iteration can be specified. By default, the updates from the in-neighbors are used to modify the current vertex's state and messages are sent to out-neighbors.
-
-Let us consider computing Single-Source-Shortest-Paths with scatter-gather iterations on the following graph and let vertex 1 be the source. In each superstep, each vertex sends a candidate distance message to all its neighbors. The message value is the sum of the current value of the vertex and the edge weight connecting this vertex with its neighbor. Upon receiving candidate distance messages, each vertex calculates the minimum distance and, if a shorter path has been discovered, it updates its value. If a vertex does not change its value during a superstep, then it does not produce messages for its neighbors for the next superstep. The algorithm converges when there are no value updates.
-
-<p class="text-center">
-    <img alt="Scatter-gather SSSP superstep 1" width="70%" src="fig/gelly-vc-sssp1.png"/>
-</p>
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// read the input graph
-Graph<Long, Double, Double> graph = ...
-
-// define the maximum number of iterations
-int maxIterations = 10;
-
-// Execute the scatter-gather iteration
-Graph<Long, Double, Double> result = graph.runScatterGatherIteration(
-			new MinDistanceMessenger(), new VertexDistanceUpdater(), maxIterations);
-
-// Extract the vertices as the result
-DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();
-
-
-// - - -  UDFs - - - //
-
-// scatter: messaging
-public static final class MinDistanceMessenger extends ScatterFunction<Long, Double, Double, Double> {
-
-	public void sendMessages(Vertex<Long, Double> vertex) {
-		for (Edge<Long, Double> edge : getEdges()) {
-			sendMessageTo(edge.getTarget(), vertex.getValue() + edge.getValue());
-		}
-	}
-}
-
-// gather: vertex update
-public static final class VertexDistanceUpdater extends GatherFunction<Long, Double, Double> {
-
-	public void updateVertex(Vertex<Long, Double> vertex, MessageIterator<Double> inMessages) {
-		Double minDistance = Double.MAX_VALUE;
-
-		for (double msg : inMessages) {
-			if (msg < minDistance) {
-				minDistance = msg;
-			}
-		}
-
-		if (vertex.getValue() > minDistance) {
-			setNewVertexValue(minDistance);
-		}
-	}
-}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// read the input graph
-val graph: Graph[Long, Double, Double] = ...
-
-// define the maximum number of iterations
-val maxIterations = 10
-
-// Execute the scatter-gather iteration
-val result = graph.runScatterGatherIteration(new MinDistanceMessenger, new VertexDistanceUpdater, maxIterations)
-
-// Extract the vertices as the result
-val singleSourceShortestPaths = result.getVertices
-
-
-// - - -  UDFs - - - //
-
-// messaging
-final class MinDistanceMessenger extends ScatterFunction[Long, Double, Double, Double] {
-
-	override def sendMessages(vertex: Vertex[Long, Double]) = {
-		for (edge: Edge[Long, Double] <- getEdges) {
-			sendMessageTo(edge.getTarget, vertex.getValue + edge.getValue)
-		}
-	}
-}
-
-// vertex update
-final class VertexDistanceUpdater extends GatherFunction[Long, Double, Double] {
-
-	override def updateVertex(vertex: Vertex[Long, Double], inMessages: MessageIterator[Double]) = {
-		var minDistance = Double.MaxValue
-
-		while (inMessages.hasNext) {
-		  val msg = inMessages.next
-		  if (msg < minDistance) {
-			minDistance = msg
-		  }
-		}
-
-		if (vertex.getValue > minDistance) {
-		  setNewVertexValue(minDistance)
-		}
-	}
-}
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-## Configuring a Scatter-Gather Iteration
-A scatter-gather iteration can be configured using a `ScatterGatherConfiguration` object.
-Currently, the following parameters can be specified:
-
-* <strong>Name</strong>: The name for the scatter-gather iteration. The name is displayed in logs and messages
-and can be specified using the `setName()` method.
-
-* <strong>Parallelism</strong>: The parallelism for the iteration. It can be set using the `setParallelism()` method.
-
-* <strong>Solution set in unmanaged memory</strong>: Defines whether the solution set is kept in managed memory (Flink's internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the `setSolutionSetUnmanagedMemory()` method.
-
-* <strong>Aggregators</strong>: Iteration aggregators can be registered using the `registerAggregator()` method. An iteration aggregator combines
-all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined `ScatterFunction` and `GatherFunction`.
-
-* <strong>Broadcast Variables</strong>: DataSets can be added as [Broadcast Variables]({{site.baseurl}}/apis/batch/index.html#broadcast-variables) to the `ScatterFunction` and `GatherFunction`, using the `addBroadcastSetForMessagingFunction()` and `addBroadcastSetForUpdateFunction()` methods, respectively.
-
-* <strong>Number of Vertices</strong>: Enables access to the total number of vertices within the iteration. This property can be set using the `setOptNumVertices()` method.
-The number of vertices can then be accessed inside the `ScatterFunction` and `GatherFunction` using the `getNumberOfVertices()` method. If the option is not set in the configuration, this method will return -1.
-
-* <strong>Degrees</strong>: Enables access to a vertex's in/out degree within an iteration. This property can be set using the `setOptDegrees()` method.
-The in/out degrees can then be accessed per vertex inside the `ScatterFunction` and `GatherFunction` using the `getInDegree()` and `getOutDegree()` methods.
-If the degrees option is not set in the configuration, these methods will return -1.
-
-* <strong>Messaging Direction</strong>: By default, a vertex sends messages to its out-neighbors and updates its value based on messages received from its in-neighbors. This configuration option allows users to change the messaging direction to `EdgeDirection.IN`, `EdgeDirection.OUT`, or `EdgeDirection.ALL`. The messaging direction also dictates the update direction, which would be `EdgeDirection.OUT`, `EdgeDirection.IN`, and `EdgeDirection.ALL`, respectively. This property can be set using the `setDirection()` method.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-Graph<Long, Double, Double> graph = ...
-
-// configure the iteration
-ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
-
-// set the iteration name
-parameters.setName("Gelly Iteration");
-
-// set the parallelism
-parameters.setParallelism(16);
-
-// register an aggregator
-parameters.registerAggregator("sumAggregator", new LongSumAggregator());
-
-// run the scatter-gather iteration, also passing the configuration parameters
-Graph<Long, Double, Double> result =
-			graph.runScatterGatherIteration(
-			new Messenger(), new VertexUpdater(), maxIterations, parameters);
-
-// user-defined functions
-public static final class Messenger extends ScatterFunction {...}
-
-public static final class VertexUpdater extends GatherFunction {
-
-	LongSumAggregator aggregator = new LongSumAggregator();
-
-	public void preSuperstep() {
-
-		// retrieve the Aggregator
-		aggregator = getIterationAggregator("sumAggregator");
-	}
-
-
-	public void updateVertex(Vertex<Long, Long> vertex, MessageIterator inMessages) {
-
-		//do some computation
-		Long partialValue = ...
-
-		// aggregate the partial value
-		aggregator.aggregate(partialValue);
-
-		// update the vertex value
-		setNewVertexValue(...);
-	}
-}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-
-val graph: Graph[Long, Double, Double] = ...
-
-val parameters = new ScatterGatherConfiguration
-
-// set the iteration name
-parameters.setName("Gelly Iteration")
-
-// set the parallelism
-parameters.setParallelism(16)
-
-// register an aggregator
-parameters.registerAggregator("sumAggregator", new LongSumAggregator)
-
-// run the scatter-gather iteration, also passing the configuration parameters
-val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters)
-
-// user-defined functions
-final class Messenger extends ScatterFunction {...}
-
-final class VertexUpdater extends GatherFunction {
-
-	var aggregator = new LongSumAggregator
-
-	override def preSuperstep {
-
-		// retrieve the Aggregator
-		aggregator = getIterationAggregator("sumAggregator")
-	}
-
-
-	override def updateVertex(vertex: Vertex[Long, Long], inMessages: MessageIterator[Long]) {
-
-		//do some computation
-		val partialValue = ...
-
-		// aggregate the partial value
-		aggregator.aggregate(partialValue)
-
-		// update the vertex value
-		setNewVertexValue(...)
-	}
-}
-
-{% endhighlight %}
-</div>
-</div>
-
-The following example illustrates the usage of the degree and the number of vertices options.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-Graph<Long, Double, Double> graph = ...
-
-// configure the iteration
-ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
-
-// set the number of vertices option to true
-parameters.setOptNumVertices(true);
-
-// set the degree option to true
-parameters.setOptDegrees(true);
-
-// run the scatter-gather iteration, also passing the configuration parameters
-Graph<Long, Double, Double> result =
-			graph.runScatterGatherIteration(
-			new Messenger(), new VertexUpdater(), maxIterations, parameters);
-
-// user-defined functions
-public static final class Messenger extends ScatterFunction {
-	...
-	// retrieve the vertex out-degree
-	long outDegree = getOutDegree();
-	...
-}
-
-public static final class VertexUpdater extends GatherFunction {
-	...
-	// get the number of vertices
-	long numVertices = getNumberOfVertices();
-	...
-}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-
-val graph: Graph[Long, Double, Double] = ...
-
-// configure the iteration
-val parameters = new ScatterGatherConfiguration
-
-// set the number of vertices option to true
-parameters.setOptNumVertices(true)
-
-// set the degree option to true
-parameters.setOptDegrees(true)
-
-// run the scatter-gather iteration, also passing the configuration parameters
-val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters)
-
-// user-defined functions
-final class Messenger extends ScatterFunction {
-	...
-	// retrieve the vertex out-degree
-	val outDegree = getOutDegree
-	...
-}
-
-final class VertexUpdater extends GatherFunction {
-	...
-	// get the number of vertices
-	val numVertices = getNumberOfVertices
-	...
-}
-
-{% endhighlight %}
-</div>
-</div>
-
-The following example illustrates the usage of the edge direction option. Vertices update their values to contain the set of all their in-neighbors.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Graph<Long, HashSet<Long>, Double> graph = ...
-
-// configure the iteration
-ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
-
-// set the messaging direction
-parameters.setDirection(EdgeDirection.IN);
-
-// run the scatter-gather iteration, also passing the configuration parameters
-DataSet<Vertex<Long, HashSet<Long>>> result =
-			graph.runScatterGatherIteration(
-			new Messenger(), new VertexUpdater(), maxIterations, parameters)
-			.getVertices();
-
-// user-defined functions
-public static final class Messenger extends ScatterFunction {...}
-
-public static final class VertexUpdater extends GatherFunction {...}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val graph: Graph[Long, HashSet[Long], Double] = ...
-
-// configure the iteration
-val parameters = new ScatterGatherConfiguration
-
-// set the messaging direction
-parameters.setDirection(EdgeDirection.IN)
-
-// run the scatter-gather iteration, also passing the configuration parameters
-val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters)
-			.getVertices
-
-// user-defined functions
-final class Messenger extends ScatterFunction {...}
-
-final class VertexUpdater extends GatherFunction {...}
-
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-## Gather-Sum-Apply Iterations
-As in the scatter-gather model, Gather-Sum-Apply proceeds in synchronized iterative steps, called supersteps. Each superstep consists of the following three phases:
-
-* <strong>Gather</strong>: a user-defined function is invoked in parallel on the edges and neighbors of each vertex, producing a partial value.
-* <strong>Sum</strong>: the partial values produced in the Gather phase are aggregated to a single value, using a user-defined reducer.
-* <strong>Apply</strong>:  each vertex value is updated by applying a function on the current value and the aggregated value produced by the Sum phase.
-
-Let us consider computing Single-Source-Shortest-Paths with GSA on the following graph and let vertex 1 be the source. During the `Gather` phase, we calculate the new candidate distances by adding each vertex value to the weight of the corresponding edge. In `Sum`, the candidate distances are grouped by vertex ID and the minimum distance is chosen. In `Apply`, the newly calculated distance is compared to the current vertex value and the minimum of the two is assigned as the new value of the vertex.
-
-<p class="text-center">
-    <img alt="GSA SSSP superstep 1" width="70%" src="fig/gelly-gsa-sssp1.png"/>
-</p>
-
-Notice that, if a vertex does not change its value during a superstep, it will not calculate candidate distances during the next superstep. The algorithm converges when no vertex changes its value.
-
-To implement this example in Gelly GSA, the user only needs to call the `runGatherSumApplyIteration` method on the input graph and provide the `GatherFunction`, `SumFunction` and `ApplyFunction` UDFs. Iteration synchronization, grouping, value updates and convergence are handled by the system:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// read the input graph
-Graph<Long, Double, Double> graph = ...
-
-// define the maximum number of iterations
-int maxIterations = 10;
-
-// Execute the GSA iteration
-Graph<Long, Double, Double> result = graph.runGatherSumApplyIteration(
-				new CalculateDistances(), new ChooseMinDistance(), new UpdateDistance(), maxIterations);
-
-// Extract the vertices as the result
-DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();
-
-
-// - - -  UDFs - - - //
-
-// Gather
-private static final class CalculateDistances extends GatherFunction<Double, Double, Double> {
-
-	public Double gather(Neighbor<Double, Double> neighbor) {
-		return neighbor.getNeighborValue() + neighbor.getEdgeValue();
-	}
-}
-
-// Sum
-private static final class ChooseMinDistance extends SumFunction<Double, Double, Double> {
-
-	public Double sum(Double newValue, Double currentValue) {
-		return Math.min(newValue, currentValue);
-	}
-}
-
-// Apply
-private static final class UpdateDistance extends ApplyFunction<Long, Double, Double> {
-
-	public void apply(Double newDistance, Double oldDistance) {
-		if (newDistance < oldDistance) {
-			setResult(newDistance);
-		}
-	}
-}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// read the input graph
-val graph: Graph[Long, Double, Double] = ...
-
-// define the maximum number of iterations
-val maxIterations = 10
-
-// Execute the GSA iteration
-val result = graph.runGatherSumApplyIteration(new CalculateDistances, new ChooseMinDistance, new UpdateDistance, maxIterations)
-
-// Extract the vertices as the result
-val singleSourceShortestPaths = result.getVertices
-
-
-// - - -  UDFs - - - //
-
-// Gather
-final class CalculateDistances extends GatherFunction[Double, Double, Double] {
-
-	override def gather(neighbor: Neighbor[Double, Double]): Double = {
-		neighbor.getNeighborValue + neighbor.getEdgeValue
-	}
-}
-
-// Sum
-final class ChooseMinDistance extends SumFunction[Double, Double, Double] {
-
-	override def sum(newValue: Double, currentValue: Double): Double = {
-		Math.min(newValue, currentValue)
-	}
-}
-
-// Apply
-final class UpdateDistance extends ApplyFunction[Long, Double, Double] {
-
-	override def apply(newDistance: Double, oldDistance: Double) = {
-		if (newDistance < oldDistance) {
-			setResult(newDistance)
-		}
-	}
-}
-
-{% endhighlight %}
-</div>
-</div>
-
-Note that `gather` takes a `Neighbor` type as an argument. This is a convenience type which simply wraps a vertex with its neighboring edge.
-
-For more examples of how to implement algorithms with the Gather-Sum-Apply model, check the {% gh_link /flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/GSAPageRank.java "GSAPageRank" %} and {% gh_link /flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/GSAConnectedComponents.java "GSAConnectedComponents" %} library methods of Gelly.
-
-{% top %}
-
-## Configuring a Gather-Sum-Apply Iteration
-A GSA iteration can be configured using a `GSAConfiguration` object.
-Currently, the following parameters can be specified:
-
-* <strong>Name</strong>: The name for the GSA iteration. The name is displayed in logs and messages and can be specified using the `setName()` method.
-
-* <strong>Parallelism</strong>: The parallelism for the iteration. It can be set using the `setParallelism()` method.
-
-* <strong>Solution set in unmanaged memory</strong>: Defines whether the solution set is kept in managed memory (Flink's internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the `setSolutionSetUnmanagedMemory()` method.
-
-* <strong>Aggregators</strong>: Iteration aggregators can be registered using the `registerAggregator()` method. An iteration aggregator combines all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined `GatherFunction`, `SumFunction` and `ApplyFunction`.
-
-* <strong>Broadcast Variables</strong>: DataSets can be added as [Broadcast Variables]({{site.baseurl}}/apis/index.html#broadcast-variables) to the `GatherFunction`, `SumFunction` and `ApplyFunction`, using the `addBroadcastSetForGatherFunction()`, `addBroadcastSetForSumFunction()` and `addBroadcastSetForApplyFunction()` methods, respectively.
-
-* <strong>Number of Vertices</strong>: Enables access to the total number of vertices within the iteration. This property can be set using the `setOptNumVertices()` method.
-The number of vertices can then be accessed in the gather, sum and/or apply functions using the `getNumberOfVertices()` method. If the option is not set in the configuration, this method will return -1.
-
-* <strong>Neighbor Direction</strong>: By default, values are gathered from the out-neighbors of the vertex. This can be modified
-using the `setDirection()` method.
-
-The following example illustrates the usage of the number of vertices option.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-Graph<Long, Double, Double> graph = ...
-
-// configure the iteration
-GSAConfiguration parameters = new GSAConfiguration();
-
-// set the number of vertices option to true
-parameters.setOptNumVertices(true);
-
-// run the gather-sum-apply iteration, also passing the configuration parameters
-Graph<Long, Double, Double> result = graph.runGatherSumApplyIteration(
-				new Gather(), new Sum(), new Apply(),
-				maxIterations, parameters);
-
-// user-defined functions
-public static final class Gather extends GatherFunction {
-	...
-	// get the number of vertices
-	long numVertices = getNumberOfVertices();
-	...
-}
-
-public static final class Sum extends SumFunction {
-	...
-	// get the number of vertices
-	long numVertices = getNumberOfVertices();
-	...
-}
-
-public static final class Apply extends ApplyFunction {
-	...
-	// get the number of vertices
-	long numVertices = getNumberOfVertices();
-	...
-}
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-
-val graph: Graph[Long, Double, Double] = ...
-
-// configure the iteration
-val parameters = new GSAConfiguration
-
-// set the number of vertices option to true
-parameters.setOptNumVertices(true)
-
-// run the gather-sum-apply iteration, also passing the configuration parameters
-val result = graph.runGatherSumApplyIteration(new Gather, new Sum, new Apply, maxIterations, parameters)
-
-// user-defined functions
-final class Gather extends GatherFunction {
-	...
-	// get the number of vertices
-	val numVertices = getNumberOfVertices
-	...
-}
-
-final class Sum extends SumFunction {
-	...
-	// get the number of vertices
-	val numVertices = getNumberOfVertices
-	...
-}
-
-final class Apply extends ApplyFunction {
-	...
-	// get the number of vertices
-	val numVertices = getNumberOfVertices
-	...
-}
-
-{% endhighlight %}
-</div>
-</div>
-
-The following example illustrates the usage of the edge direction option.
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-Graph<Long, HashSet<Long>, Double> graph = ...
-
-// configure the iteration
-GSAConfiguration parameters = new GSAConfiguration();
-
-// set the messaging direction
-parameters.setDirection(EdgeDirection.IN);
-
-// run the gather-sum-apply iteration, also passing the configuration parameters
-DataSet<Vertex<Long, HashSet<Long>>> result =
-			graph.runGatherSumApplyIteration(
-			new Gather(), new Sum(), new Apply(), maxIterations, parameters)
-			.getVertices();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-
-val graph: Graph[Long, HashSet[Long], Double] = ...
-
-// configure the iteration
-val parameters = new GSAConfiguration
-
-// set the messaging direction
-parameters.setDirection(EdgeDirection.IN)
-
-// run the gather-sum-apply iteration, also passing the configuration parameters
-val result = graph.runGatherSumApplyIteration(new Gather, new Sum, new Apply, maxIterations, parameters)
-			.getVertices
-{% endhighlight %}
-</div>
-</div>
-{% top %}
-
-## Iteration Abstractions Comparison
-Although the three iteration abstractions in Gelly seem quite similar, understanding their differences can lead to more performant and maintainable programs.
-Among the three, the vertex-centric model is the most general model and supports arbitrary computation and messaging for each vertex. In the scatter-gather model, the logic of producing messages is decoupled from the logic of updating vertex values. Thus, programs written using scatter-gather are sometimes easier to follow and maintain.
-Separating the messaging phase from the vertex value update logic not only makes some programs easier to follow but might also have a positive impact on performance. Scatter-gather implementations typically have lower memory requirements, because concurrent access to the inbox (messages received) and outbox (messages to send) data structures is not required. However, this characteristic also limits expressiveness and makes some computation patterns non-intuitive. Naturally, if an algorithm requires a vertex to concurrently access its inbox and outbox, then expressing this algorithm in scatter-gather might be problematic. Strongly Connected Components and Approximate Maximum
-Weight Matching are examples of such graph algorithms. A direct consequence of this restriction is that vertices cannot generate messages and update their states in the same phase. Thus, deciding whether to propagate a message based on its content would require storing it in the vertex value, so that the gather phase of the following iteration step has access to it. Similarly, if the vertex update logic includes computation over the values of the neighboring edges, these have to be included inside a special message passed from the scatter to the gather phase. Such workarounds often lead to higher memory requirements and inelegant, hard-to-understand algorithm implementations.
-
-Gather-sum-apply iterations are also quite similar to scatter-gather iterations. In fact, any algorithm which can be expressed as a GSA iteration can also be written in the scatter-gather model. The messaging phase of the scatter-gather model is equivalent to the Gather and Sum steps of GSA: Gather can be seen as the phase where the messages are produced and Sum as the phase where they are routed to the target vertex. Similarly, the value update phase corresponds to the Apply step.
-
-The main difference between the two implementations is that the Gather phase of GSA parallelizes the computation over the edges, while the messaging phase distributes the computation over the vertices. Using the SSSP examples above, we see that in the first superstep of the scatter-gather case, vertices 1, 2 and 3 produce messages in parallel. Vertex 1 produces 3 messages, while vertices 2 and 3 produce one message each. In the GSA case on the other hand, the computation is parallelized over the edges: the three candidate distance values of vertex 1 are produced in parallel. Thus, if the Gather step contains "heavy" computation, it might be a better idea to use GSA and spread out the computation, instead of burdening a single vertex. Another case when parallelizing over the edges might prove to be more efficient is when the input graph is skewed (some vertices have a lot more neighbors than others).
-
-Another difference between the two implementations is that the scatter-gather implementation uses a `coGroup` operator internally, while GSA uses a `reduce`. Therefore, if the function that combines neighbor values (messages) requires the whole group of values for the computation, scatter-gather should be used. If the update function is associative and commutative, then GSA's reducer is expected to give a more efficient implementation, as it can make use of a combiner.
-
-Another thing to note is that GSA works strictly on neighborhoods, while in the vertex-centric and scatter-gather models, a vertex can send a message to any vertex, given that it knows its vertex ID, regardless of whether it is a neighbor. Finally, in Gelly's scatter-gather implementation, one can choose the messaging direction, i.e. the direction in which updates propagate. GSA does not support this yet, so each vertex will be updated based on the values of its in-neighbors only.
-
-The main differences among the Gelly iteration models are shown in the table below.
-
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 25%">Iteration Model</th>
-      <th class="text-center">Update Function</th>
-      <th class="text-center">Update Logic</th>
-      <th class="text-center">Communication Scope</th>
-      <th class="text-center">Communication Logic</th>
-    </tr>
-  </thead>
-  <tbody>
- <tr>
-  <td>Vertex-Centric</td>
-  <td>arbitrary</td>
-  <td>arbitrary</td>
-  <td>any vertex</td>
-  <td>arbitrary</td>
-</tr>
-<tr>
-  <td>Scatter-Gather</td>
-  <td>arbitrary</td>
-  <td>based on received messages</td>
-  <td>any vertex</td>
-  <td>based on vertex state</td>
-</tr>
-<tr>
-  <td>Gather-Sum-Apply</td>
-  <td>associative and commutative</td>
-  <td>based on neighbors' values</td>
-  <td>neighborhood</td>
-  <td>based on vertex state</td>
-</tr>
-</tbody>
-</table>
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/gelly/library_methods.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/gelly/library_methods.md b/docs/apis/batch/libs/gelly/library_methods.md
deleted file mode 100644
index e49a793..0000000
--- a/docs/apis/batch/libs/gelly/library_methods.md
+++ /dev/null
@@ -1,350 +0,0 @@
----
-title: Library Methods
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: gelly
-sub-nav-title: Library Methods
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Gelly has a growing collection of graph algorithms for easily analyzing large-scale graphs.
-
-* This will be replaced by the TOC
-{:toc}
-
-Gelly's library methods can be used by simply calling the `run()` method on the input graph:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-Graph<Long, Long, NullValue> graph = ...
-
-// run Label Propagation for 30 iterations to detect communities on the input graph
-DataSet<Vertex<Long, Long>> verticesWithCommunity = graph.run(new LabelPropagation<Long>(30));
-
-// print the result
-verticesWithCommunity.print();
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val graph: Graph[java.lang.Long, java.lang.Long, NullValue] = ...
-
-// run Label Propagation for 30 iterations to detect communities on the input graph
-val verticesWithCommunity = graph.run(new LabelPropagation[java.lang.Long, java.lang.Long, NullValue](30))
-
-// print the result
-verticesWithCommunity.print
-
-{% endhighlight %}
-</div>
-</div>
-
-## Community Detection
-
-#### Overview
-In graph theory, communities refer to groups of nodes that are well connected internally, but sparsely connected to other groups.
-This library method is an implementation of the community detection algorithm described in the paper [Towards real-time community detection in large networks](http://arxiv.org/pdf/0808.2633.pdf).
-
-#### Details
-The algorithm is implemented using [scatter-gather iterations](#scatter-gather-iterations).
-Initially, each vertex is assigned a `Tuple2` containing its initial value along with a score equal to 1.0.
-In each iteration, vertices send their labels and scores to their neighbors. Upon receiving messages from its neighbors,
-a vertex chooses the label with the highest score and subsequently re-scores it using the edge values,
-a user-defined hop attenuation parameter, `delta`, and the superstep number.
-The algorithm converges when vertices no longer update their value or when the maximum number of iterations
-is reached.
-
-#### Usage
-The algorithm takes as input a `Graph` with any vertex type, `Long` vertex values, and `Double` edge values. It returns a `Graph` of the same type as the input,
-where the vertex values correspond to the community labels, i.e. two vertices belong to the same community if they have the same vertex value.
-The constructor takes two parameters:
-
-* `maxIterations`: the maximum number of iterations to run.
-* `delta`: the hop attenuation parameter, with default value 0.5.
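-
-For instance, a minimal usage sketch (the type parameters and constructor arguments below are illustrative assumptions, not fixed by this page):
-
-{% highlight java %}
-Graph<Long, Long, Double> graph = ...
-
-// detect communities: at most 30 iterations, hop attenuation 0.5
-Graph<Long, Long, Double> communities = graph.run(new CommunityDetection<Long>(30, 0.5));
-{% endhighlight %}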
-
-## Label Propagation
-
-#### Overview
-This is an implementation of the well-known Label Propagation algorithm described in [this paper](http://journals.aps.org/pre/abstract/10.1103/PhysRevE.76.036106). The algorithm discovers communities in a graph, by iteratively propagating labels between neighbors. Unlike the [Community Detection library method](#community-detection), this implementation does not use scores associated with the labels.
-
-#### Details
-The algorithm is implemented using [scatter-gather iterations](#scatter-gather-iterations).
-Labels are expected to be of type `Comparable` and are initialized using the vertex values of the input `Graph`.
-The algorithm iteratively refines discovered communities by propagating labels. In each iteration, a vertex adopts
-the label that is most frequent among its neighbors' labels. In case of a tie (i.e. two or more labels appear with the
-same frequency), the algorithm picks the greater label. The algorithm converges when no vertex changes its value or
-the maximum number of iterations has been reached. Note that different initializations might lead to different results.
-
-#### Usage
-The algorithm takes as input a `Graph` with a `Comparable` vertex type, a `Comparable` vertex value type and an arbitrary edge value type.
-It returns a `DataSet` of vertices, where the vertex value corresponds to the community in which this vertex belongs after convergence.
-The constructor takes one parameter:
-
-* `maxIterations`: the maximum number of iterations to run.
-
-## Connected Components
-
-#### Overview
-This is an implementation of the Weakly Connected Components algorithm. Upon convergence, two vertices belong to the
-same component, if there is a path from one to the other, without taking edge direction into account.
-
-#### Details
-The algorithm is implemented using [scatter-gather iterations](#scatter-gather-iterations).
-This implementation uses a comparable vertex value as initial component identifier (ID). Vertices propagate their
-current value in each iteration. Upon receiving component IDs from its neighbors, a vertex adopts a new component ID if
-its value is lower than its current component ID. The algorithm converges when vertices no longer update their component
-ID value or when the maximum number of iterations has been reached.
-
-#### Usage
-The result is a `DataSet` of vertices, where the vertex value corresponds to the assigned component.
-The constructor takes one parameter:
-
-* `maxIterations`: the maximum number of iterations to run.
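-
-A minimal usage sketch (assuming `Long` vertex IDs, `Long` initial component IDs as vertex values, and no edge values; the three type parameters are an assumption based on the description above):
-
-{% highlight java %}
-Graph<Long, Long, NullValue> graph = ...
-
-// assign a component ID to every vertex, running at most 10 iterations
-DataSet<Vertex<Long, Long>> components =
-		graph.run(new ConnectedComponents<Long, Long, NullValue>(10));
-{% endhighlight %}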
-
-## GSA Connected Components
-
-#### Overview
-This is an implementation of the Weakly Connected Components algorithm. Upon convergence, two vertices belong to the
-same component, if there is a path from one to the other, without taking edge direction into account.
-
-#### Details
-The algorithm is implemented using [gather-sum-apply iterations](#gather-sum-apply-iterations).
-This implementation uses a comparable vertex value as initial component identifier (ID). In the gather phase, each
-vertex collects the vertex values of its adjacent vertices. In the sum phase, the minimum among those values is
-selected. In the apply phase, the algorithm sets the minimum value as the new vertex value if it is smaller than
-the current value. The algorithm converges when vertices no longer update their component ID value or when the
-maximum number of iterations has been reached.
-
-#### Usage
-The result is a `DataSet` of vertices, where the vertex value corresponds to the assigned component.
-The constructor takes one parameter:
-
-* `maxIterations`: the maximum number of iterations to run.
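-
-Usage mirrors the scatter-gather variant; a minimal sketch under the same assumptions as above:
-
-{% highlight java %}
-Graph<Long, Long, NullValue> graph = ...
-
-DataSet<Vertex<Long, Long>> components =
-		graph.run(new GSAConnectedComponents<Long, Long, NullValue>(10));
-{% endhighlight %}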
-
-## PageRank
-
-#### Overview
-An implementation of a simple [PageRank algorithm](https://en.wikipedia.org/wiki/PageRank), using [scatter-gather iterations](#scatter-gather-iterations).
-PageRank is an algorithm that was first used to rank web search engine results. Today, the algorithm and many of its variations are used in various graph application domains. The idea of PageRank is that important or relevant pages tend to link to other important pages.
-
-#### Details
-The algorithm operates in iterations, where pages distribute their scores to their neighbors (pages they have links to) and subsequently update their scores based on the partial values they receive. The implementation assumes that each page has at least one incoming and one outgoing link.
-In order to consider the importance of a link from one page to another, scores are divided by the total number of out-links of the source page. Thus, a page with 10 links will distribute 1/10 of its score to each neighbor, while a page with 100 links will distribute 1/100 of its score to each neighboring page. This process computes what is often called the transition probabilities, i.e. the probability that one page will lead to another page while surfing the web. To correctly compute the transition probabilities, this implementation expects the edge values to be initialized to 1.0.
-
-#### Usage
-The algorithm takes as input a `Graph` with any vertex type, `Double` vertex values, and `Double` edge values. Edge values should be initialized to 1.0, in order to correctly compute the transition probabilities. Otherwise, the transition probability for an edge `(u, v)` will be set to the edge value divided by `u`'s out-degree. The algorithm returns a `DataSet` of vertices, where the vertex value corresponds to the assigned rank after convergence (or maximum iterations).
-The constructors take the following parameters:
-
-* `beta`: the damping factor.
-* `maxIterations`: the maximum number of iterations to run.
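-
-A minimal usage sketch (the damping factor and iteration count below are illustrative assumptions):
-
-{% highlight java %}
-Graph<Long, Double, Double> graph = ... // edge values initialized to 1.0
-
-// rank the vertices with damping factor 0.85, running at most 20 iterations
-DataSet<Vertex<Long, Double>> ranks = graph.run(new PageRank<Long>(0.85, 20));
-{% endhighlight %}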
-
-## GSA PageRank
-
-The algorithm is implemented using [gather-sum-apply iterations](#gather-sum-apply-iterations).
-
-See the [PageRank](#pagerank) library method for implementation details and usage information.
-
-## Single Source Shortest Paths
-
-#### Overview
-An implementation of the Single-Source-Shortest-Paths algorithm for weighted graphs. Given a source vertex, the algorithm computes the shortest paths from this source to all other nodes in the graph.
-
-#### Details
-The algorithm is implemented using [scatter-gather iterations](#scatter-gather-iterations).
-In each iteration, a vertex sends to its neighbors a message containing the sum of its current distance and the weight of the edge connecting this vertex with the neighbor. Upon receiving candidate distance messages, a vertex calculates the minimum distance and, if a shorter path has been discovered, it updates its value. If a vertex does not change its value during a superstep, then it does not produce messages for its neighbors for the next superstep. The computation terminates after the specified maximum number of supersteps or when there are no value updates.
-
-#### Usage
-The algorithm takes as input a `Graph` with any vertex type, `Double` vertex values, and `Double` edge values. The output is a `DataSet` of vertices where the vertex values
-correspond to the minimum distances from the given source vertex.
-The constructor takes two parameters:
-
-* `srcVertexId`: the vertex ID of the source vertex.
-* `maxIterations`: the maximum number of iterations to run.
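-
-A minimal usage sketch (the source vertex ID and iteration count are illustrative assumptions):
-
-{% highlight java %}
-Graph<Long, Double, Double> graph = ...
-
-// compute shortest paths from vertex 1 within at most 10 supersteps
-DataSet<Vertex<Long, Double>> shortestPaths =
-		graph.run(new SingleSourceShortestPaths<Long>(1L, 10));
-{% endhighlight %}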
-
-## GSA Single Source Shortest Paths
-
-The algorithm is implemented using [gather-sum-apply iterations](#gather-sum-apply-iterations).
-
-See the [Single Source Shortest Paths](#single-source-shortest-paths) library method for implementation details and usage information.
-
-## Triangle Count
-
-#### Overview
-An analytic for counting the number of unique triangles in a graph.
-
-#### Details
-Counts the triangles generated by [Triangle Listing](#triangle-listing).
-
-#### Usage
-The analytic takes an undirected graph as input and returns as a result a `Long` corresponding to the number of triangles
-in the graph. The graph ID type must be `Comparable` and `Copyable`.
-
-## Triangle Listing
-
-This algorithm supports object reuse. The graph ID type must be `Comparable` and `Copyable`.
-
-See the [Triangle Enumerator](#triangle-enumerator) library method for implementation details.
-
-## Triangle Enumerator
-
-#### Overview
-This library method enumerates unique triangles present in the input graph. A triangle consists of three edges that connect three vertices with each other.
-This implementation ignores edge directions.
-
-#### Details
-The basic triangle enumeration algorithm groups all edges that share a common vertex and builds triads, i.e., triples of vertices
-that are connected by two edges. Then, all triads are filtered for which no third edge exists that closes the triangle.
-For a group of <i>n</i> edges that share a common vertex, the number of built triads is quadratic <i>((n*(n-1))/2)</i>.
-Therefore, an optimization of the algorithm is to group edges on the vertex with the smaller output degree to reduce the number of triads.
-This implementation extends the basic algorithm by computing output degrees of edge vertices and grouping edges on the vertex with the smaller degree.
-
-#### Usage
-The algorithm takes a directed graph as input and outputs a `DataSet` of `Tuple3`. The Vertex ID type has to be `Comparable`.
-Each `Tuple3` corresponds to a triangle, with the fields containing the IDs of the vertices forming the triangle.
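-
-A minimal usage sketch (the vertex and edge value types are illustrative assumptions):
-
-{% highlight java %}
-Graph<Long, NullValue, NullValue> graph = ...
-
-// emit one Tuple3 of vertex IDs per unique triangle
-DataSet<Tuple3<Long, Long, Long>> triangles =
-		graph.run(new TriangleEnumerator<Long, NullValue, NullValue>());
-{% endhighlight %}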
-
-## Hyperlink-Induced Topic Search
-
-#### Overview
-[Hyperlink-Induced Topic Search](http://www.cs.cornell.edu/home/kleinber/auth.pdf) (HITS, or "Hubs and Authorities")
-computes two interdependent scores for every vertex in a directed graph. Good hubs are those which point to many
-good authorities and good authorities are those pointed to by many good hubs.
-
-#### Details
-Every vertex is assigned the same initial hub and authority scores. The algorithm then iteratively updates the scores
-until termination. During each iteration new hub scores are computed from the authority scores, then new authority
-scores are computed from the new hub scores. The scores are then normalized and optionally tested for convergence.
-
-#### Usage
-The algorithm takes a directed graph as input and outputs a `DataSet` of `Tuple3` containing the vertex ID, hub score,
-and authority score.
-
-## Summarization
-
-#### Overview
-The summarization algorithm computes a condensed version of the input graph by grouping vertices and edges based on
-their values. In doing so, the algorithm helps to uncover insights about patterns and distributions in the graph.
-One possible use case is the visualization of communities where the whole graph is too large and needs to be summarized
-based on the community identifier stored at a vertex.
-
-#### Details
-In the resulting graph, each vertex represents a group of vertices that share the same value. An edge that connects a
-vertex with itself represents all edges with the same edge value that connect vertices from the same vertex group. An
-edge between different vertices in the output graph represents all edges with the same edge value between members of
-different vertex groups in the input graph.
-
-The algorithm is implemented using Flink data operators. First, vertices are grouped by their value and a representative
-is chosen from each group. For any edge, the source and target vertex identifiers are replaced with the corresponding
-representative and grouped by source, target and edge value. Output vertices and edges are created from their
-corresponding groupings.
-
-#### Usage
-The algorithm takes a directed, vertex (and possibly edge) attributed graph as input and outputs a new graph where each
-vertex represents a group of vertices and each edge represents a group of edges from the input graph. Furthermore, each
-vertex and edge in the output graph stores the common group value and the number of represented elements.
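-
-A minimal usage sketch (it is assumed here that the result wraps the group values in the nested `Summarization.VertexValue` and `Summarization.EdgeValue` types):
-
-{% highlight java %}
-Graph<Long, String, String> graph = ...
-
-// group vertices and edges that share the same value
-Graph<Long, Summarization.VertexValue<String>, Summarization.EdgeValue<String>> summary =
-		graph.run(new Summarization<Long, String, String>());
-{% endhighlight %}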
-
-## Adamic-Adar
-
-#### Overview
-Adamic-Adar measures the similarity between pairs of vertices as the sum of the inverse logarithm of degree over shared
-neighbors. Scores are non-negative and unbounded. A vertex with higher degree has greater overall influence but is less
-influential to each pair of neighbors.
-
-#### Details
-The algorithm first annotates each vertex with the inverse of the logarithm of the vertex degree then joins this score
-onto edges by source vertex. Grouping on the source vertex, each pair of neighbors is emitted with the vertex score.
-Grouping on two-paths, the Adamic-Adar score is summed.
-
-See the [Jaccard Index](#jaccard-index) library method for a similar algorithm.
-
-#### Usage
-The algorithm takes a simple, undirected graph as input and outputs a `DataSet` of tuples containing two vertex IDs and
-the Adamic-Adar similarity score. The graph ID type must be `Comparable` and `Copyable`.
-
-* `setLittleParallelism`: override the parallelism of operators processing small amounts of data
-* `setMinimumRatio`: filter out Adamic-Adar scores less than the given ratio times the average score
-* `setMinimumScore`: filter out Adamic-Adar scores less than the given minimum
-
-## Jaccard Index
-
-#### Overview
-The Jaccard Index measures the similarity between vertex neighborhoods and is computed as the number of shared neighbors
-divided by the number of distinct neighbors. Scores range from 0.0 (no shared neighbors) to 1.0 (all neighbors are
-shared).
-
-#### Details
-Counting shared neighbors for pairs of vertices is equivalent to counting connecting paths of length two. The number of
-distinct neighbors is computed by storing the sum of degrees of the vertex pair and subtracting the count of shared
-neighbors, which are double-counted in the sum of degrees.
-
-The algorithm first annotates each edge with the target vertex's degree. Grouping on the source vertex, each pair of
-neighbors is emitted with the degree sum. Grouping on two-paths, the shared neighbors are counted.
-
-#### Usage
-The algorithm takes a simple, undirected graph as input and outputs a `DataSet` of tuples containing two vertex IDs,
-the number of shared neighbors, and the number of distinct neighbors. The result class provides a method to compute the
-Jaccard Index score. The graph ID type must be `Comparable` and `Copyable`.
-
-* `setLittleParallelism`: override the parallelism of operators processing small amounts of data
-* `setMaximumScore`: filter out Jaccard Index scores greater than or equal to the given maximum fraction
-* `setMinimumScore`: filter out Jaccard Index scores less than the given minimum fraction
-
-## Local Clustering Coefficient
-
-#### Overview
-The local clustering coefficient measures the connectedness of each vertex's neighborhood. Scores range from 0.0 (no
-edges between neighbors) to 1.0 (neighborhood is a clique).
-
-#### Details
-An edge between a vertex's neighbors is a triangle. Counting edges between neighbors is equivalent to counting the
-number of triangles which include the vertex. The clustering coefficient score is the number of edges between neighbors
-divided by the number of potential edges between neighbors.
-
-See the [Triangle Enumerator](#triangle-enumerator) library method for a detailed explanation of triangle enumeration.
-
-#### Usage
-Directed and undirected variants are provided. The algorithms take a simple graph as input and output a `DataSet` of
-tuples containing the vertex ID, vertex degree, and number of triangles containing the vertex. The graph ID type must be
-`Comparable` and `Copyable`.
-
-## Global Clustering Coefficient
-
-#### Overview
-The global clustering coefficient measures the connectedness of a graph. Scores range from 0.0 (no edges between
-neighbors) to 1.0 (complete graph).
-
-#### Details
-See the [Local Clustering Coefficient](#local-clustering-coefficient) library method for a detailed explanation of
-clustering coefficient.
-
-#### Usage
-Directed and undirected variants are provided. The algorithm takes a simple graph as input and outputs a result
-containing the total number of triplets and triangles in the graph. The graph ID type must be `Comparable` and
-`Copyable`.
-
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/index.md b/docs/apis/batch/libs/index.md
deleted file mode 100644
index cb32108..0000000
--- a/docs/apis/batch/libs/index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: "Libraries"
-sub-nav-group: batch
-sub-nav-id: libs
-sub-nav-pos: 6
-sub-nav-title: Libraries
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-- Graph processing: [Gelly](gelly/index.html)
-- Machine Learning: [FlinkML](ml/index.html)
-- Relational Queries: [Table and SQL](table.html)

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/als.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/als.md b/docs/apis/batch/libs/ml/als.md
deleted file mode 100644
index 5cfa159..0000000
--- a/docs/apis/batch/libs/ml/als.md
+++ /dev/null
@@ -1,178 +0,0 @@
----
-mathjax: include
-title: Alternating Least Squares
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: ALS
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-
-The alternating least squares (ALS) algorithm factorizes a given matrix $R$ into two factors $U$ and $V$ such that $R \approx U^TV$.
-The unknown row dimension is given as a parameter to the algorithm and is called the number of latent factors.
-Since matrix factorization can be used in the context of recommendation, the matrices $U$ and $V$ can be called user and item matrix, respectively.
-The $i$th column of the user matrix is denoted by $u_i$ and the $i$th column of the item matrix is $v_i$.
-The matrix $R$ can be called the ratings matrix with $$(R)_{i,j} = r_{i,j}$$.
-
-In order to find the user and item matrix, the following problem is solved:
-
-$$\arg\min_{U,V} \sum_{\{i,j\mid r_{i,j} \not= 0\}} \left(r_{i,j} - u_{i}^Tv_{j}\right)^2 + 
-\lambda \left(\sum_{i} n_{u_i} \left\lVert u_i \right\rVert^2 + \sum_{j} n_{v_j} \left\lVert v_j \right\rVert^2 \right)$$
-
-with $\lambda$ being the regularization factor, $$n_{u_i}$$ being the number of items the user $i$ has rated and $$n_{v_j}$$ being the number of times the item $j$ has been rated.
-This regularization scheme to avoid overfitting is called weighted-$\lambda$-regularization.
-Details can be found in the work of [Zhou et al.](http://dx.doi.org/10.1007/978-3-540-68880-8_32).
-
-By fixing one of the matrices $U$ or $V$, we obtain a quadratic form which can be solved directly.
-The solution of the modified problem is guaranteed to monotonically decrease the overall cost function.
-By applying this step alternately to the matrices $U$ and $V$, we can iteratively improve the matrix factorization.
-
-The matrix $R$ is given in its sparse representation as a tuple of $(i, j, r)$ where $i$ denotes the row index, $j$ the column index and $r$ is the matrix value at position $(i,j)$.
-
-## Operations
-
-`ALS` is a `Predictor`.
-As such, it supports the `fit` and `predict` operations.
-
-### Fit
-
-ALS is trained on the sparse representation of the rating matrix: 
-
-* `fit: DataSet[(Int, Int, Double)] => Unit` 
-
-### Predict
-
-ALS predicts for each tuple of row and column index the rating: 
-
-* `predict: DataSet[(Int, Int)] => DataSet[(Int, Int, Double)]`
-
-## Parameters
-
-The alternating least squares implementation can be controlled by the following parameters:
-
-   <table class="table table-bordered">
-    <thead>
-      <tr>
-        <th class="text-left" style="width: 20%">Parameters</th>
-        <th class="text-center">Description</th>
-      </tr>
-    </thead>
-
-    <tbody>
-      <tr>
-        <td><strong>NumFactors</strong></td>
-        <td>
-          <p>
-            The number of latent factors to use for the underlying model.
-            It is equivalent to the dimension of the calculated user and item vectors.
-            (Default value: <strong>10</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Lambda</strong></td>
-        <td>
-          <p>
-            Regularization factor. Tune this value in order to avoid overfitting or poor performance due to strong generalization.
-            (Default value: <strong>1</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Iterations</strong></td>
-        <td>
-          <p>
-            The maximum number of iterations.
-            (Default value: <strong>10</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Blocks</strong></td>
-        <td>
-          <p>
-            The number of blocks into which the user and item matrix are grouped.
-            The fewer blocks one uses, the less data is sent redundantly. 
-            However, bigger blocks entail bigger update messages which have to be stored on the heap. 
-            If the algorithm fails because of an OutOfMemoryException, then try to increase the number of blocks. 
-            (Default value: <strong>None</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Seed</strong></td>
-        <td>
-          <p>
-            Random seed used to generate the initial item matrix for the algorithm.
-            (Default value: <strong>0</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>TemporaryPath</strong></td>
-        <td>
-          <p>
-            Path to a temporary directory into which intermediate results are stored.
-            If this value is set, then the algorithm is split into two preprocessing steps, the ALS iteration and a post-processing step which calculates a last ALS half-step.
-            The preprocessing steps calculate the <code>OutBlockInformation</code> and <code>InBlockInformation</code> for the given rating matrix.
-            The results of the individual steps are stored in the specified directory.
-            By splitting the algorithm into multiple smaller steps, Flink does not have to split the available memory amongst too many operators. 
-            This allows the system to process bigger individual messages and improves the overall performance.
-            (Default value: <strong>None</strong>)
-          </p>
-        </td>
-      </tr>
-    </tbody>
-  </table>
-
-## Examples
-
-{% highlight scala %}
-// Read input data set from a csv file
-val inputDS: DataSet[(Int, Int, Double)] = env.readCsvFile[(Int, Int, Double)](
-  pathToTrainingFile)
-
-// Setup the ALS learner
-val als = ALS()
-.setIterations(10)
-.setNumFactors(10)
-.setBlocks(100)
-.setTemporaryPath("hdfs://tempPath")
-
-// Set the other parameters via a parameter map
-val parameters = ParameterMap()
-.add(ALS.Lambda, 0.9)
-.add(ALS.Seed, 42L)
-
-// Calculate the factorization
-als.fit(inputDS, parameters)
-
-// Read the testing data set from a csv file
-val testingDS: DataSet[(Int, Int)] = env.readCsvFile[(Int, Int)](pathToData)
-
-// Calculate the ratings according to the matrix factorization
-val predictedRatings = als.predict(testingDS)
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/contribution_guide.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/contribution_guide.md b/docs/apis/batch/libs/ml/contribution_guide.md
deleted file mode 100644
index 30df530..0000000
--- a/docs/apis/batch/libs/ml/contribution_guide.md
+++ /dev/null
@@ -1,110 +0,0 @@
----
-mathjax: include
-title: How to Contribute
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: How To Contribute
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The Flink community highly appreciates all sorts of contributions to FlinkML.
-FlinkML offers people interested in machine learning the opportunity to work on a highly active open source project that makes scalable ML a reality.
-The following document describes how to contribute to FlinkML.
-
-* This will be replaced by the TOC
-{:toc}
-
-## Getting Started
-
-In order to get started, first read Flink's [contribution guide](http://flink.apache.org/how-to-contribute.html).
-Everything from this guide also applies to FlinkML.
-
-## Pick a Topic
-
-If you are looking for some new ideas you should first look into our [roadmap](https://cwiki.apache.org/confluence/display/FLINK/FlinkML%3A+Vision+and+Roadmap), then you should check out the list of [unresolved issues on JIRA](https://issues.apache.org/jira/issues/?jql=component%20%3D%20%22Machine%20Learning%20Library%22%20AND%20project%20%3D%20FLINK%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20priority%20DESC).
-Once you decide to contribute to one of these issues, you should take ownership of it and track your progress with this issue.
-That way, the other contributors know the state of the different issues and redundant work is avoided.
-
-If you already know what you want to contribute to FlinkML all the better.
-It is still advisable to create a JIRA issue for your idea to tell the Flink community what you want to do, though.
-
-## Testing
-
-New contributions should come with tests to verify the correct behavior of the algorithm.
-The tests help to maintain the algorithm's correctness throughout code changes, e.g. refactorings.
-
-We distinguish between unit tests, which are executed during Maven's test phase, and integration tests, which are executed during Maven's verify phase.
-Maven automatically makes this distinction by using the following naming rules:
-All test cases whose class name ends with a suffix matching the regular expression `(IT|Integration)(Test|Suite|Case)` are considered integration tests.
-The rest are considered unit tests and should only test behavior which is local to the component under test.
-
-An integration test is a test which requires the full Flink system to be started.
-In order to do that properly, all integration test cases have to mix in the trait `FlinkTestBase`.
-This trait will set the right `ExecutionEnvironment` so that the test will be executed on a special `FlinkMiniCluster` designated for testing purposes.
-Thus, an integration test could look like the following:
-
-{% highlight scala %}
-class ExampleITSuite extends FlatSpec with FlinkTestBase {
-  behavior of "An example algorithm"
-
-  it should "do something" in {
-    ...
-  }
-}
-{% endhighlight %}
-
-The test style does not have to be `FlatSpec` but can be any other scalatest `Suite` subclass.
-See [ScalaTest testing styles](http://scalatest.org/user_guide/selecting_a_style) for more information.
-
-## Documentation
-
-When contributing new algorithms, you are required to add code comments describing how the algorithm works and the parameters with which the user can control its behavior.
-Additionally, we would like to encourage contributors to add this information to the online documentation.
-The online documentation for FlinkML's components can be found in the directory `docs/libs/ml`.
-
-Every new algorithm is described by a single markdown file.
-This file should contain at least the following points:
-
-1. What does the algorithm do
-2. How does the algorithm work (or reference to description)
-3. Parameter description with default values
-4. Code snippet showing how the algorithm is used
-
-In order to use LaTeX syntax in the markdown file, you have to include `mathjax: include` in the YAML front matter.
-
-{% highlight yaml %}
----
-mathjax: include
-htmlTitle: FlinkML - Example title
-title: <a href="../ml">FlinkML</a> - Example title
----
-{% endhighlight %}
-
-In order to use displayed mathematics, you have to put your LaTeX code in `$$ ... $$`.
-For in-line mathematics, use `$ ... $`.
-Additionally, some predefined LaTeX commands are included in the scope of your markdown file.
-See `docs/_include/latex_commands.html` for the complete list of predefined LaTeX commands.
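-
-For example, a displayed formula together with an in-line reference could look as follows; the formula itself is purely illustrative:
-
-{% highlight latex %}
-The squared loss is defined as
-
-$$ L(w) = \frac{1}{n} \sum_{i=1}^{n} (y_i - w^T x_i)^2 $$
-
-where $w$ denotes the weight vector.
-{% endhighlight %}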
-
-## Contributing
-
-Once you have implemented the algorithm with adequate test coverage and added documentation, you are ready to open a pull request.
-Details of how to open a pull request can be found [here](http://flink.apache.org/how-to-contribute.html#contributing-code--documentation).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/cross_validation.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/cross_validation.md b/docs/apis/batch/libs/ml/cross_validation.md
deleted file mode 100644
index 2473317..0000000
--- a/docs/apis/batch/libs/ml/cross_validation.md
+++ /dev/null
@@ -1,175 +0,0 @@
----
-mathjax: include
-title: Cross Validation
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: Cross Validation
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-
-A prevalent problem when applying machine learning algorithms is *overfitting*: the algorithm "memorizes" the training data but does a poor job extrapolating to out-of-sample cases. A common method for dealing with overfitting is to hold back some subset of the data from the original training process and then measure the fitted algorithm's performance on this held-out set. This is commonly known as *cross validation*: a model is trained on one subset of data and then *validated* on another set of data.
-
-## Cross Validation Strategies
-
-There are several strategies for holding out data. FlinkML has convenience methods for:
-
-- Train-Test Splits
-- Train-Test-Holdout Splits
-- K-Fold Splits
-- Multi-Random Splits
-
-### Train-Test Splits
-
-The simplest method of splitting is `trainTestSplit`. This split takes a DataSet and a parameter *fraction*. The *fraction* indicates the portion of the DataSet that should be allocated to the training set. This split also takes two additional optional parameters, *precise* and *seed*.
-
-By default, the split is done by randomly deciding, with probability *fraction*, whether an observation is assigned to the training DataSet. When *precise* is `true`, however, additional steps are taken to ensure that the size of the training set is as close as possible to the length of the DataSet $\cdot$ *fraction*.
-
-The method returns a new `TrainTestDataSet` object which has a `.training` attribute containing the training DataSet and a `.testing` attribute containing the testing DataSet.
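-
-As a minimal sketch (assuming `data` is an existing DataSet, as in the Examples section below):
-
-{% highlight scala %}
-// 75% of the observations go to training, the rest to testing
-val split: TrainTestDataSet = Splitter.trainTestSplit(data, 0.75)
-
-val train = split.training // DataSet holding roughly 75% of the observations
-val test = split.testing   // DataSet holding the remaining 25%
-{% endhighlight %}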
-
-
-### Train-Test-Holdout Splits
-
-In some cases, algorithms have been known to 'learn' the testing set. To combat this issue, a train-test-holdout strategy introduces a secondary held-back set, aptly called the *holdout* set.
-
-Traditionally, training and testing would proceed as normal, followed by a final test of the algorithm on the holdout set. Ideally, prediction errors/model scores on the holdout set would not be significantly different from those observed on the testing set.
-
-In a train-test-holdout strategy, we sacrifice the sample size of the initial fitting for increased confidence that our model is not overfit.
-
-When using the `trainTestHoldout` splitter, the *fraction* `Double` is replaced by a *fraction* array of length three. The first element corresponds to the portion to be used for training, the second for testing, and the third for the holdout set. The weights of this array are *relative*, e.g. the array `Array(3.0, 2.0, 1.0)` would result in approximately 50% of the observations in the training set, 33% in the testing set, and 17% in the holdout set.
-
-### K-Fold Splits
-
-In a *k-fold* strategy, the DataSet is split into *k* equal subsets. Then for each of the *k* subsets, a `TrainTestDataSet` is created in which that subset serves as the `.testing` DataSet while the remaining subsets combined form the `.training` DataSet.
-
-For each fold, an algorithm is trained on the training set and then evaluated based on its predictions for the associated testing set. When an algorithm has consistent scores (e.g. prediction errors) across the held-out datasets, we can have some confidence that our approach (e.g. choice of algorithm / algorithm parameters / number of iterations) is robust against overfitting.
-
-See <a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation">k-fold cross validation</a> for more background.
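-
-A sketch of how the folds might be consumed is shown below; `trainAndScore` stands for a hypothetical user-defined function that fits a model on the training folds and returns an error measure on the testing fold:
-
-{% highlight scala %}
-val folds: Array[TrainTestDataSet] = Splitter.kFoldSplit(data, 5)
-
-// One score per fold; consistent scores across folds suggest the approach
-// is robust against overfitting.
-val scores = folds.map(fold => trainAndScore(fold.training, fold.testing))
-{% endhighlight %}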
-
-### Multi-Random Splits
-
-The *multi-random* strategy can be thought of as a more general form of the *train-test-holdout* strategy. In fact, `trainTestHoldoutSplit` is a simple wrapper around `multiRandomSplit` which also packages the datasets into a `trainTestHoldoutDataSet` object.
-
-The first major difference is that `multiRandomSplit` takes an array of fractions of any length, so one can, for example, create multiple holdout sets. Alternatively, one can think of `kFoldSplit` as a wrapper around `multiRandomSplit` (which it is); the difference is that `kFoldSplit` creates subsets of approximately equal size, whereas `multiRandomSplit` will create subsets of any size.
-
-The second major difference is that `multiRandomSplit` returns an array of DataSets whose length and proportions match the *fraction array* passed as an argument.
-
-## Parameters
-
-The various `Splitter` methods share many parameters.
-
- <table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Parameter</th>
-      <th class="text-center">Type</th>
-      <th class="text-center">Description</th>
-      <th class="text-right">Used by Method</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><code>input</code></td>
-      <td><code>DataSet[Any]</code></td>
-      <td>DataSet to be split.</td>
-      <td>
-      <code>randomSplit</code><br>
-      <code>multiRandomSplit</code><br>
-      <code>kFoldSplit</code><br>
-      <code>trainTestSplit</code><br>
-      <code>trainTestHoldoutSplit</code>
-      </td>
-    </tr>
-    <tr>
-      <td><code>seed</code></td>
-      <td><code>Long</code></td>
-      <td>
-        <p>
-          Used for seeding the random number generator which assigns observations to the output DataSets.
-        </p>
-      </td>
-      <td>
-      <code>randomSplit</code><br>
-      <code>multiRandomSplit</code><br>
-      <code>kFoldSplit</code><br>
-      <code>trainTestSplit</code><br>
-      <code>trainTestHoldoutSplit</code>
-      </td>
-    </tr>
-    <tr>
-      <td><code>precise</code></td>
-      <td><code>Boolean</code></td>
-      <td>When true, make additional effort to make DataSets as close to the prescribed proportions as possible.</td>
-      <td>
-      <code>randomSplit</code><br>
-      <code>trainTestSplit</code>
-      </td>
-    </tr>
-    <tr>
-      <td><code>fraction</code></td>
-      <td><code>Double</code></td>
-      <td>The portion of the <code>input</code> to assign to the first (<code>.training</code>) DataSet. Must be in the range (0,1).</td>
-      <td><code>randomSplit</code><br>
-        <code>trainTestSplit</code>
-      </td>
-    </tr>
-    <tr>
-      <td><code>fracArray</code></td>
-      <td><code>Array[Double]</code></td>
-      <td>An array that prescribes the relative proportions of the output datasets (the values need not sum to 1 or lie within the range (0,1)).</td>
-      <td>
-      <code>multiRandomSplit</code><br>
-      <code>trainTestHoldoutSplit</code>
-      </td>
-    </tr>
-    <tr>
-      <td><code>kFolds</code></td>
-      <td><code>Int</code></td>
-      <td>The number of subsets to break the <code>input</code> DataSet into.</td>
-      <td><code>kFoldSplit</code></td>
-      </tr>
-
-  </tbody>
-</table>
-
-## Examples
-
-{% highlight scala %}
-// An input DataSet - it does not have to be of type LabeledVector
-val data: DataSet[LabeledVector] = ...
-
-// A Simple Train-Test-Split
-val dataTrainTest: TrainTestDataSet = Splitter.trainTestSplit(data, 0.6, true)
-
-// Create a train test holdout DataSet
-val dataTrainTestHO: trainTestHoldoutDataSet = Splitter.trainTestHoldoutSplit(data, Array(6.0, 3.0, 1.0))
-
-// Create an Array of K TrainTestDataSets
-val dataKFolded: Array[TrainTestDataSet] =  Splitter.kFoldSplit(data, 10)
-
-// create an array of 5 datasets
-val dataMultiRandom: Array[DataSet[T]] = Splitter.multiRandomSplit(data, Array(0.5, 0.1, 0.1, 0.1, 0.1))
-{% endhighlight %}
\ No newline at end of file


[75/89] [abbrv] flink git commit: [FLINK-4362] [rpc] Auto generate rpc gateways via Java proxies

Posted by se...@apache.org.
[FLINK-4362] [rpc] Auto generate rpc gateways via Java proxies

This PR introduces a generic AkkaRpcActor which receives rpc calls as
RpcInvocation messages. The RpcInvocation messages are generated by the
AkkaInvocationHandler, which receives the method calls from automatically
generated Java proxies.

Add documentation for proxy based akka rpc service

Log unknown message type in AkkaRpcActor but do not fail actor

Use ReflectionUtil to extract RpcGateway type from RpcEndpoint

This closes #2357.
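
The mechanism underneath is the standard Java dynamic proxy. Below is a
minimal, self-contained Scala sketch of the idea; all names in it are invented
for illustration and are not Flink's actual classes:

import java.lang.reflect.{InvocationHandler, Method, Proxy}

// Invented gateway interface standing in for a generated RpcGateway.
trait ExampleGateway {
  def ping(msg: String): Unit
}

// The handler turns each proxied method call into a message, analogous to
// AkkaInvocationHandler wrapping calls into RpcInvocation messages.
class MessageSendingHandler extends InvocationHandler {
  override def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]): AnyRef = {
    val renderedArgs = Option(args).map(_.mkString(", ")).getOrElse("")
    println(s"send RpcInvocation(${method.getName}, $renderedArgs)")
    null // void methods need no result
  }
}

object ProxyDemo extends App {
  val gateway = Proxy.newProxyInstance(
    classOf[ExampleGateway].getClassLoader,
    Array[Class[_]](classOf[ExampleGateway]),
    new MessageSendingHandler
  ).asInstanceOf[ExampleGateway]

  gateway.ping("hello") // the call is printed as a message instead of executed locally
}

The real AkkaInvocationHandler goes further: non-void methods are turned into
Patterns.ask calls on the actor, and the result is either returned as a Future
or awaited synchronously, as shown in the diff below.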


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/dc808e76
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/dc808e76
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/dc808e76

Branch: refs/heads/flip-6
Commit: dc808e76a219d24b58e94cbb3f7de336d1383fbd
Parents: ddeee3b
Author: Till Rohrmann <tr...@apache.org>
Authored: Wed Aug 10 18:42:26 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:02 2016 +0200

----------------------------------------------------------------------
 .../org/apache/flink/util/ReflectionUtil.java   |  10 +-
 .../flink/runtime/rpc/MainThreadExecutor.java   |   4 +-
 .../apache/flink/runtime/rpc/RpcEndpoint.java   |  22 +-
 .../apache/flink/runtime/rpc/RpcService.java    |   2 +-
 .../flink/runtime/rpc/akka/AkkaGateway.java     |   4 +-
 .../runtime/rpc/akka/AkkaInvocationHandler.java | 226 +++++++++++++++++++
 .../flink/runtime/rpc/akka/AkkaRpcActor.java    | 175 ++++++++++++++
 .../flink/runtime/rpc/akka/AkkaRpcService.java  | 121 +++++-----
 .../flink/runtime/rpc/akka/BaseAkkaActor.java   |  50 ----
 .../flink/runtime/rpc/akka/BaseAkkaGateway.java |  41 ----
 .../rpc/akka/jobmaster/JobMasterAkkaActor.java  |  58 -----
 .../akka/jobmaster/JobMasterAkkaGateway.java    |  57 -----
 .../runtime/rpc/akka/messages/CallAsync.java    |  41 ++++
 .../rpc/akka/messages/CallableMessage.java      |  33 ---
 .../runtime/rpc/akka/messages/CancelTask.java   |  36 ---
 .../runtime/rpc/akka/messages/ExecuteTask.java  |  36 ---
 .../messages/RegisterAtResourceManager.java     |  36 ---
 .../rpc/akka/messages/RegisterJobMaster.java    |  36 ---
 .../runtime/rpc/akka/messages/RequestSlot.java  |  37 ---
 .../rpc/akka/messages/RpcInvocation.java        |  98 ++++++++
 .../runtime/rpc/akka/messages/RunAsync.java     |  40 ++++
 .../rpc/akka/messages/RunnableMessage.java      |  31 ---
 .../akka/messages/UpdateTaskExecutionState.java |  37 ---
 .../ResourceManagerAkkaActor.java               |  65 ------
 .../ResourceManagerAkkaGateway.java             |  67 ------
 .../taskexecutor/TaskExecutorAkkaActor.java     |  77 -------
 .../taskexecutor/TaskExecutorAkkaGateway.java   |  59 -----
 .../flink/runtime/rpc/jobmaster/JobMaster.java  |   4 +-
 .../rpc/resourcemanager/ResourceManager.java    |   4 +-
 .../runtime/rpc/taskexecutor/TaskExecutor.java  |   4 +-
 .../flink/runtime/rpc/RpcCompletenessTest.java  |  50 ++--
 .../runtime/rpc/akka/AkkaRpcServiceTest.java    |   4 +-
 .../rpc/taskexecutor/TaskExecutorTest.java      |   2 +-
 33 files changed, 700 insertions(+), 867 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-core/src/main/java/org/apache/flink/util/ReflectionUtil.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/util/ReflectionUtil.java b/flink-core/src/main/java/org/apache/flink/util/ReflectionUtil.java
index fe2d4c0..b851eba 100644
--- a/flink-core/src/main/java/org/apache/flink/util/ReflectionUtil.java
+++ b/flink-core/src/main/java/org/apache/flink/util/ReflectionUtil.java
@@ -48,6 +48,14 @@ public final class ReflectionUtil {
 		return getTemplateType(clazz, 0);
 	}
 
+	public static <T> Class<T> getTemplateType1(Type type) {
+		if (type instanceof ParameterizedType) {
+			return (Class<T>) getTemplateTypes((ParameterizedType) type)[0];
+		} else {
+			throw new IllegalArgumentException("The given type " + type + " is not a parameterized type.");
+		}
+	}
+
 	public static <T> Class<T> getTemplateType2(Class<?> clazz) {
 		return getTemplateType(clazz, 1);
 	}
@@ -123,7 +131,7 @@ public final class ReflectionUtil {
 		Class<?>[] types = new Class<?>[paramterizedType.getActualTypeArguments().length];
 		int i = 0;
 		for (Type templateArgument : paramterizedType.getActualTypeArguments()) {
-			assert (templateArgument instanceof Class<?>);
+			assert templateArgument instanceof Class<?>;
 			types[i++] = (Class<?>) templateArgument;
 		}
 		return types;

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
index 14b2997..882c1b7 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
@@ -47,9 +47,9 @@ public interface MainThreadExecutor {
 	 * future will throw a {@link TimeoutException}.
 	 *
 	 * @param callable Callable to be executed
-	 * @param timeout Timeout for the future to complete
+	 * @param callTimeout Timeout for the future to complete
 	 * @param <V> Return value of the callable
 	 * @return Future of the callable result
 	 */
-	<V> Future<V> callAsync(Callable<V> callable, Timeout timeout);
+	<V> Future<V> callAsync(Callable<V> callable, Timeout callTimeout);
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
index 0d928a8..aef0803 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
@@ -20,6 +20,7 @@ package org.apache.flink.runtime.rpc;
 
 import akka.util.Timeout;
 
+import org.apache.flink.util.ReflectionUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -60,6 +61,9 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	/** RPC service to be used to start the RPC server and to obtain rpc gateways */
 	private final RpcService rpcService;
 
+	/** Class of the self gateway */
+	private final Class<C> selfGatewayType;
+
 	/** Self gateway which can be used to schedule asynchronous calls on yourself */
 	private final C self;
 
@@ -70,15 +74,19 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	 * of the executing rpc server. */
 	private final MainThreadExecutionContext mainThreadExecutionContext;
 
-
 	/**
 	 * Initializes the RPC endpoint.
 	 * 
 	 * @param rpcService The RPC server that dispatches calls to this RPC endpoint. 
 	 */
-	public RpcEndpoint(RpcService rpcService) {
+	protected RpcEndpoint(final RpcService rpcService) {
 		this.rpcService = checkNotNull(rpcService, "rpcService");
+
+		// IMPORTANT: Don't change order of selfGatewayType and self because rpcService.startServer
+		// requires that selfGatewayType has been initialized
+		this.selfGatewayType = ReflectionUtil.getTemplateType1(getClass());
 		this.self = rpcService.startServer(this);
+		
 		this.selfAddress = rpcService.getAddress(self);
 		this.mainThreadExecutionContext = new MainThreadExecutionContext((MainThreadExecutor) self);
 	}
@@ -149,6 +157,7 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	//  Asynchronous executions
 	// ------------------------------------------------------------------------
 
+
 	/**
 	 * Execute the runnable in the main thread of the underlying RPC endpoint.
 	 *
@@ -172,6 +181,15 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 		return ((MainThreadExecutor) self).callAsync(callable, timeout);
 	}
 
+	/**
+	 * Returns the class of the self gateway type.
+	 *
+	 * @return Class of the self gateway type
+	 */
+	public final Class<C> getSelfGatewayType() {
+		return selfGatewayType;
+	}
+
 	// ------------------------------------------------------------------------
 	//  Utilities
 	// ------------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
index 90ff7b6..f93be83 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
@@ -46,7 +46,7 @@ public interface RpcService {
 	 * @param <C> Type of the self rpc gateway associated with the rpc server
 	 * @return Self gateway to dispatch remote procedure calls to oneself
 	 */
-	<S extends RpcEndpoint, C extends RpcGateway> C startServer(S rpcEndpoint);
+	<C extends RpcGateway, S extends RpcEndpoint<C>> C startServer(S rpcEndpoint);
 
 	/**
 	 * Stop the underlying rpc server of the provided self gateway.

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
index a96a600..a826e7d 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
@@ -23,7 +23,7 @@ import akka.actor.ActorRef;
 /**
  * Interface for Akka based rpc gateways
  */
-public interface AkkaGateway {
+interface AkkaGateway {
 
-	ActorRef getActorRef();
+	ActorRef getRpcServer();
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
new file mode 100644
index 0000000..e8e383a
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
@@ -0,0 +1,226 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.ActorRef;
+import akka.pattern.Patterns;
+import akka.util.Timeout;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.runtime.rpc.MainThreadExecutor;
+import org.apache.flink.runtime.rpc.RpcTimeout;
+import org.apache.flink.runtime.rpc.akka.messages.CallAsync;
+import org.apache.flink.runtime.rpc.akka.messages.RpcInvocation;
+import org.apache.flink.runtime.rpc.akka.messages.RunAsync;
+import org.apache.flink.util.Preconditions;
+import scala.concurrent.Await;
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.lang.annotation.Annotation;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.util.BitSet;
+import java.util.concurrent.Callable;
+
+/**
+ * Invocation handler to be used with a {@link AkkaRpcActor}. The invocation handler wraps the
+ * rpc in a {@link RpcInvocation} message and then sends it to the {@link AkkaRpcActor} where it is
+ * executed.
+ */
+class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThreadExecutor {
+	private final ActorRef rpcServer;
+
+	// default timeout for asks
+	private final Timeout timeout;
+
+	AkkaInvocationHandler(ActorRef rpcServer, Timeout timeout) {
+		this.rpcServer = Preconditions.checkNotNull(rpcServer);
+		this.timeout = Preconditions.checkNotNull(timeout);
+	}
+
+	@Override
+	public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
+		Class<?> declaringClass = method.getDeclaringClass();
+
+		Object result;
+
+		if (declaringClass.equals(AkkaGateway.class) || declaringClass.equals(MainThreadExecutor.class) || declaringClass.equals(Object.class)) {
+			result = method.invoke(this, args);
+		} else {
+			String methodName = method.getName();
+			Class<?>[] parameterTypes = method.getParameterTypes();
+			Annotation[][] parameterAnnotations = method.getParameterAnnotations();
+			Timeout futureTimeout = extractRpcTimeout(parameterAnnotations, args, timeout);
+
+			Tuple2<Class<?>[], Object[]> filteredArguments = filterArguments(
+				parameterTypes,
+				parameterAnnotations,
+				args);
+
+			RpcInvocation rpcInvocation = new RpcInvocation(
+				methodName,
+				filteredArguments.f0,
+				filteredArguments.f1);
+
+			Class<?> returnType = method.getReturnType();
+
+			if (returnType.equals(Void.TYPE)) {
+				rpcServer.tell(rpcInvocation, ActorRef.noSender());
+
+				result = null;
+			} else if (returnType.equals(Future.class)) {
+				// execute an asynchronous call
+				result = Patterns.ask(rpcServer, rpcInvocation, futureTimeout);
+			} else {
+				// execute a synchronous call
+				Future<?> futureResult = Patterns.ask(rpcServer, rpcInvocation, futureTimeout);
+				FiniteDuration duration = timeout.duration();
+
+				result = Await.result(futureResult, duration);
+			}
+		}
+
+		return result;
+	}
+
+	@Override
+	public ActorRef getRpcServer() {
+		return rpcServer;
+	}
+
+	@Override
+	public void runAsync(Runnable runnable) {
+		// Unfortunately I couldn't find a way to allow only local communication. Therefore, the
+		// runnable field is declared transient
+		rpcServer.tell(new RunAsync(runnable), ActorRef.noSender());
+	}
+
+	@Override
+	public <V> Future<V> callAsync(Callable<V> callable, Timeout callTimeout) {
+		// Unfortunately I couldn't find a way to allow only local communication. Therefore, the
+		// callable field is declared transient
+		@SuppressWarnings("unchecked")
+		Future<V> result = (Future<V>) Patterns.ask(rpcServer, new CallAsync(callable), callTimeout);
+
+		return result;
+	}
+
+	/**
+	 * Extracts the {@link RpcTimeout} annotated rpc timeout value from the list of given method
+	 * arguments. If no {@link RpcTimeout} annotated parameter could be found, then the default
+	 * timeout is returned.
+	 *
+	 * @param parameterAnnotations Parameter annotations
+	 * @param args Array of arguments
+	 * @param defaultTimeout Default timeout to return if no {@link RpcTimeout} annotated parameter
+	 *                       has been found
+	 * @return Timeout extracted from the array of arguments or the default timeout
+	 */
+	private static Timeout extractRpcTimeout(Annotation[][] parameterAnnotations, Object[] args, Timeout defaultTimeout) {
+		if (args != null) {
+			Preconditions.checkArgument(parameterAnnotations.length == args.length);
+
+			for (int i = 0; i < parameterAnnotations.length; i++) {
+				if (isRpcTimeout(parameterAnnotations[i])) {
+					if (args[i] instanceof FiniteDuration) {
+						return new Timeout((FiniteDuration) args[i]);
+					} else {
+						throw new RuntimeException("The rpc timeout parameter must be of type " +
+							FiniteDuration.class.getName() + ". The type " + args[i].getClass().getName() +
+							" is not supported.");
+					}
+				}
+			}
+		}
+
+		return defaultTimeout;
+	}
+
+	/**
+	 * Removes all {@link RpcTimeout} annotated parameters from the parameter type and argument
+	 * list.
+	 *
+	 * @param parameterTypes Array of parameter types
+	 * @param parameterAnnotations Array of parameter annotations
+	 * @param args Array of arguments
+	 * @return Tuple of filtered parameter types and arguments which no longer contain the
+	 * {@link RpcTimeout} annotated parameter types and arguments
+	 */
+	private static Tuple2<Class<?>[], Object[]> filterArguments(
+		Class<?>[] parameterTypes,
+		Annotation[][] parameterAnnotations,
+		Object[] args) {
+
+		Class<?>[] filteredParameterTypes;
+		Object[] filteredArgs;
+
+		if (args == null) {
+			filteredParameterTypes = parameterTypes;
+			filteredArgs = null;
+		} else {
+			Preconditions.checkArgument(parameterTypes.length == parameterAnnotations.length);
+			Preconditions.checkArgument(parameterAnnotations.length == args.length);
+
+			BitSet isRpcTimeoutParameter = new BitSet(parameterTypes.length);
+			int numberRpcParameters = parameterTypes.length;
+
+			for (int i = 0; i < parameterTypes.length; i++) {
+				if (isRpcTimeout(parameterAnnotations[i])) {
+					isRpcTimeoutParameter.set(i);
+					numberRpcParameters--;
+				}
+			}
+
+			if (numberRpcParameters == parameterTypes.length) {
+				filteredParameterTypes = parameterTypes;
+				filteredArgs = args;
+			} else {
+				filteredParameterTypes = new Class<?>[numberRpcParameters];
+				filteredArgs = new Object[numberRpcParameters];
+				int counter = 0;
+
+				for (int i = 0; i < parameterTypes.length; i++) {
+					if (!isRpcTimeoutParameter.get(i)) {
+						filteredParameterTypes[counter] = parameterTypes[i];
+						filteredArgs[counter] = args[i];
+						counter++;
+					}
+				}
+			}
+		}
+
+		return Tuple2.of(filteredParameterTypes, filteredArgs);
+	}
+
+	/**
+	 * Checks whether any of the annotations is of type {@link RpcTimeout}
+	 *
+	 * @param annotations Array of annotations
+	 * @return True if {@link RpcTimeout} was found; otherwise false
+	 */
+	private static boolean isRpcTimeout(Annotation[] annotations) {
+		for (Annotation annotation : annotations) {
+			if (annotation.annotationType().equals(RpcTimeout.class)) {
+				return true;
+			}
+		}
+
+		return false;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
new file mode 100644
index 0000000..57da38a
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
@@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.Status;
+import akka.actor.UntypedActor;
+import akka.pattern.Patterns;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.akka.messages.CallAsync;
+import org.apache.flink.runtime.rpc.akka.messages.RpcInvocation;
+import org.apache.flink.runtime.rpc.akka.messages.RunAsync;
+import org.apache.flink.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import scala.concurrent.Future;
+
+import java.lang.reflect.Method;
+import java.util.concurrent.Callable;
+
+/**
+ * Akka rpc actor which receives {@link RpcInvocation}, {@link RunAsync} and {@link CallAsync}
+ * messages.
+ * <p>
+ * The {@link RpcInvocation} designates a rpc and is dispatched to the given {@link RpcEndpoint}
+ * instance.
+ * <p>
+ * The {@link RunAsync} and {@link CallAsync} messages contain executable code which is executed
+ * in the context of the actor thread.
+ *
+ * @param <C> Type of the {@link RpcGateway} associated with the {@link RpcEndpoint}
+ * @param <T> Type of the {@link RpcEndpoint}
+ */
+class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends UntypedActor {
+	private static final Logger LOG = LoggerFactory.getLogger(AkkaRpcActor.class);
+
+	private final T rpcEndpoint;
+
+	AkkaRpcActor(final T rpcEndpoint) {
+		this.rpcEndpoint = Preconditions.checkNotNull(rpcEndpoint, "rpc endpoint");
+	}
+
+	@Override
+	public void onReceive(final Object message)  {
+		if (message instanceof RunAsync) {
+			handleRunAsync((RunAsync) message);
+		} else if (message instanceof CallAsync) {
+			handleCallAsync((CallAsync) message);
+		} else if (message instanceof RpcInvocation) {
+			handleRpcInvocation((RpcInvocation) message);
+		} else {
+			LOG.warn("Received message of unknown type {}. Dropping this message!", message.getClass());
+		}
+	}
+
+	/**
+	 * Handle rpc invocations by looking up the rpc method on the rpc endpoint and calling this
+	 * method with the provided method arguments. If the method has a return value, it is returned
+	 * to the sender of the call.
+	 *
+	 * @param rpcInvocation Rpc invocation message
+	 */
+	private void handleRpcInvocation(RpcInvocation rpcInvocation) {
+		Method rpcMethod = null;
+
+		try {
+			rpcMethod = lookupRpcMethod(rpcInvocation.getMethodName(), rpcInvocation.getParameterTypes());
+		} catch (final NoSuchMethodException e) {
+			LOG.error("Could not find rpc method for rpc invocation: {}.", rpcInvocation, e);
+		}
+
+		if (rpcMethod != null) {
+			if (rpcMethod.getReturnType().equals(Void.TYPE)) {
+				// No return value to send back
+				try {
+					rpcMethod.invoke(rpcEndpoint, rpcInvocation.getArgs());
+				} catch (Throwable e) {
+					LOG.error("Error while executing remote procedure call {}.", rpcMethod, e);
+				}
+			} else {
+				try {
+					Object result = rpcMethod.invoke(rpcEndpoint, rpcInvocation.getArgs());
+
+					if (result instanceof Future) {
+						// pipe result to sender
+						Patterns.pipe((Future<?>) result, getContext().dispatcher()).to(getSender());
+					} else {
+						// tell the sender the result of the computation
+						getSender().tell(new Status.Success(result), getSelf());
+					}
+				} catch (Throwable e) {
+					// tell the sender about the failure
+					getSender().tell(new Status.Failure(e), getSelf());
+				}
+			}
+		}
+	}
+
+	/**
+	 * Handle asynchronous {@link Callable}. This method simply executes the given {@link Callable}
+	 * in the context of the actor thread.
+	 *
+	 * @param callAsync Call async message
+	 */
+	private void handleCallAsync(CallAsync callAsync) {
+		if (callAsync.getCallable() == null) {
+			final String result = "Received a " + callAsync.getClass().getName() + " message with an empty " +
+				"callable field. This indicates that this message has been serialized " +
+				"prior to sending the message. The " + callAsync.getClass().getName() +
+				" is only supported with local communication.";
+
+			LOG.warn(result);
+
+			getSender().tell(new Status.Failure(new Exception(result)), getSelf());
+		} else {
+			try {
+				Object result = callAsync.getCallable().call();
+
+				getSender().tell(new Status.Success(result), getSelf());
+			} catch (Throwable e) {
+				getSender().tell(new Status.Failure(e), getSelf());
+			}
+		}
+	}
+
+	/**
+	 * Handle asynchronous {@link Runnable}. This method simply executes the given {@link Runnable}
+	 * in the context of the actor thread.
+	 *
+	 * @param runAsync Run async message
+	 */
+	private void handleRunAsync(RunAsync runAsync) {
+		if (runAsync.getRunnable() == null) {
+			LOG.warn("Received a {} message with an empty runnable field. This indicates " +
+				"that this message has been serialized prior to sending the message. The " +
+				"{} is only supported with local communication.",
+				runAsync.getClass().getName(),
+				runAsync.getClass().getName());
+		} else {
+			try {
+				runAsync.getRunnable().run();
+			} catch (final Throwable e) {
+				LOG.error("Caught exception while executing runnable in main thread.", e);
+			}
+		}
+	}
+
+	/**
+	 * Look up the rpc method on the given {@link RpcEndpoint} instance.
+	 *
+	 * @param methodName Name of the method
+	 * @param parameterTypes Parameter types of the method
+	 * @return Method of the rpc endpoint
+	 * @throws NoSuchMethodException Thrown if no method with the given name and parameter types exists on the rpc endpoint
+	 */
+	private Method lookupRpcMethod(final String methodName, final Class<?>[] parameterTypes) throws NoSuchMethodException {
+		return rpcEndpoint.getClass().getMethod(methodName, parameterTypes);
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
index d55bd13..17983d0 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -29,88 +29,82 @@ import akka.dispatch.Mapper;
 import akka.pattern.AskableActorSelection;
 import akka.util.Timeout;
 import org.apache.flink.runtime.akka.AkkaUtils;
-import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
-import org.apache.flink.runtime.rpc.jobmaster.JobMasterGateway;
-import org.apache.flink.runtime.rpc.resourcemanager.ResourceManager;
-import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
+import org.apache.flink.runtime.rpc.MainThreadExecutor;
 import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcService;
-import org.apache.flink.runtime.rpc.akka.jobmaster.JobMasterAkkaActor;
-import org.apache.flink.runtime.rpc.akka.jobmaster.JobMasterAkkaGateway;
-import org.apache.flink.runtime.rpc.akka.resourcemanager.ResourceManagerAkkaActor;
-import org.apache.flink.runtime.rpc.akka.resourcemanager.ResourceManagerAkkaGateway;
-import org.apache.flink.runtime.rpc.akka.taskexecutor.TaskExecutorAkkaActor;
-import org.apache.flink.runtime.rpc.akka.taskexecutor.TaskExecutorAkkaGateway;
-import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorGateway;
-import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutor;
+import org.apache.flink.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import scala.concurrent.Future;
 
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Proxy;
+import java.util.Collection;
 import java.util.HashSet;
-import java.util.Set;
 
+/**
+ * Akka based {@link RpcService} implementation. The rpc service starts an Akka actor to receive
+ * rpcs from a {@link RpcGateway}.
+ */
 public class AkkaRpcService implements RpcService {
+	private static final Logger LOG = LoggerFactory.getLogger(AkkaRpcService.class);
+
 	private final ActorSystem actorSystem;
 	private final Timeout timeout;
-	private final Set<ActorRef> actors = new HashSet<>();
+	private final Collection<ActorRef> actors = new HashSet<>(4);
 
-	public AkkaRpcService(ActorSystem actorSystem, Timeout timeout) {
-		this.actorSystem = actorSystem;
-		this.timeout = timeout;
+	public AkkaRpcService(final ActorSystem actorSystem, final Timeout timeout) {
+		this.actorSystem = Preconditions.checkNotNull(actorSystem, "actor system");
+		this.timeout = Preconditions.checkNotNull(timeout, "timeout");
 	}
 
 	@Override
-	public <C extends RpcGateway> Future<C> connect(String address, final Class<C> clazz) {
-		ActorSelection actorSel = actorSystem.actorSelection(address);
+	public <C extends RpcGateway> Future<C> connect(final String address, final Class<C> clazz) {
+		LOG.info("Try to connect to remote rpc server with address {}. Returning a {} gateway.", address, clazz.getName());
 
-		AskableActorSelection asker = new AskableActorSelection(actorSel);
+		final ActorSelection actorSel = actorSystem.actorSelection(address);
 
-		Future<Object> identify = asker.ask(new Identify(42), timeout);
+		final AskableActorSelection asker = new AskableActorSelection(actorSel);
+
+		final Future<Object> identify = asker.ask(new Identify(42), timeout);
 
 		return identify.map(new Mapper<Object, C>(){
+			@Override
 			public C apply(Object obj) {
 				ActorRef actorRef = ((ActorIdentity) obj).getRef();
 
-				if (clazz == TaskExecutorGateway.class) {
-					return (C) new TaskExecutorAkkaGateway(actorRef, timeout);
-				} else if (clazz == ResourceManagerGateway.class) {
-					return (C) new ResourceManagerAkkaGateway(actorRef, timeout);
-				} else if (clazz == JobMasterGateway.class) {
-					return (C) new JobMasterAkkaGateway(actorRef, timeout);
-				} else {
-					throw new RuntimeException("Could not find remote endpoint " + clazz);
-				}
+				InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout);
+
+				@SuppressWarnings("unchecked")
+				C proxy = (C) Proxy.newProxyInstance(
+					ClassLoader.getSystemClassLoader(),
+					new Class<?>[] {clazz},
+					akkaInvocationHandler);
+
+				return proxy;
 			}
 		}, actorSystem.dispatcher());
 	}
 
 	@Override
-	public <S extends RpcEndpoint, C extends RpcGateway> C startServer(S rpcEndpoint) {
-		ActorRef ref;
-		C self;
-		if (rpcEndpoint instanceof TaskExecutor) {
-			ref = actorSystem.actorOf(
-				Props.create(TaskExecutorAkkaActor.class, rpcEndpoint)
-			);
-
-			self = (C) new TaskExecutorAkkaGateway(ref, timeout);
-		} else if (rpcEndpoint instanceof ResourceManager) {
-			ref = actorSystem.actorOf(
-				Props.create(ResourceManagerAkkaActor.class, rpcEndpoint)
-			);
-
-			self = (C) new ResourceManagerAkkaGateway(ref, timeout);
-		} else if (rpcEndpoint instanceof JobMaster) {
-			ref = actorSystem.actorOf(
-				Props.create(JobMasterAkkaActor.class, rpcEndpoint)
-			);
-
-			self = (C) new JobMasterAkkaGateway(ref, timeout);
-		} else {
-			throw new RuntimeException("Could not start RPC server for class " + rpcEndpoint.getClass());
-		}
+	public <C extends RpcGateway, S extends RpcEndpoint<C>> C startServer(S rpcEndpoint) {
+		Preconditions.checkNotNull(rpcEndpoint, "rpc endpoint");
 
-		actors.add(ref);
+		LOG.info("Start Akka rpc actor to handle rpcs for {}.", rpcEndpoint.getClass().getName());
+
+		Props akkaRpcActorProps = Props.create(AkkaRpcActor.class, rpcEndpoint);
+
+		ActorRef actorRef = actorSystem.actorOf(akkaRpcActorProps);
+		actors.add(actorRef);
+
+		InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout);
+
+		@SuppressWarnings("unchecked")
+		C self = (C) Proxy.newProxyInstance(
+			ClassLoader.getSystemClassLoader(),
+			new Class<?>[]{rpcEndpoint.getSelfGatewayType(), MainThreadExecutor.class, AkkaGateway.class},
+			akkaInvocationHandler);
 
 		return self;
 	}
@@ -120,16 +114,19 @@ public class AkkaRpcService implements RpcService {
 		if (selfGateway instanceof AkkaGateway) {
 			AkkaGateway akkaClient = (AkkaGateway) selfGateway;
 
-			if (actors.contains(akkaClient.getActorRef())) {
-				akkaClient.getActorRef().tell(PoisonPill.getInstance(), ActorRef.noSender());
-			} else {
-				// don't stop this actor since it was not started by this RPC service
+			if (actors.contains(akkaClient.getRpcServer())) {
+				ActorRef selfActorRef = akkaClient.getRpcServer();
+
+				LOG.info("Stop Akka rpc actor {}.", selfActorRef.path());
+
+				selfActorRef.tell(PoisonPill.getInstance(), ActorRef.noSender());
 			}
 		}
 	}
 
 	@Override
 	public void stopService() {
+		LOG.info("Stop Akka rpc service.");
 		actorSystem.shutdown();
 		actorSystem.awaitTermination();
 	}
@@ -137,9 +134,11 @@ public class AkkaRpcService implements RpcService {
 	@Override
 	public <C extends RpcGateway> String getAddress(C selfGateway) {
 		if (selfGateway instanceof AkkaGateway) {
-			return AkkaUtils.getAkkaURL(actorSystem, ((AkkaGateway) selfGateway).getActorRef());
+			ActorRef actorRef = ((AkkaGateway) selfGateway).getRpcServer();
+			return AkkaUtils.getAkkaURL(actorSystem, actorRef);
 		} else {
-			throw new RuntimeException("Cannot get address for non " + AkkaGateway.class.getName() + ".");
+			String className = AkkaGateway.class.getName();
+			throw new RuntimeException("Cannot get address for non " + className + '.');
 		}
 	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaActor.java
deleted file mode 100644
index 3cb499c..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaActor.java
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka;
-
-import akka.actor.Status;
-import akka.actor.UntypedActor;
-import org.apache.flink.runtime.rpc.akka.messages.CallableMessage;
-import org.apache.flink.runtime.rpc.akka.messages.RunnableMessage;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class BaseAkkaActor extends UntypedActor {
-	private static final Logger LOG = LoggerFactory.getLogger(BaseAkkaActor.class);
-
-	@Override
-	public void onReceive(Object message) throws Exception {
-		if (message instanceof RunnableMessage) {
-			try {
-				((RunnableMessage) message).getRunnable().run();
-			} catch (Exception e) {
-				LOG.error("Encountered error while executing runnable.", e);
-			}
-		} else if (message instanceof CallableMessage<?>) {
-			try {
-				Object result = ((CallableMessage<?>) message).getCallable().call();
-				sender().tell(new Status.Success(result), getSelf());
-			} catch (Exception e) {
-				sender().tell(new Status.Failure(e), getSelf());
-			}
-		} else {
-			throw new RuntimeException("Unknown message " + message);
-		}
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaGateway.java
deleted file mode 100644
index 512790d..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaGateway.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka;
-
-import akka.actor.ActorRef;
-import akka.pattern.Patterns;
-import akka.util.Timeout;
-import org.apache.flink.runtime.rpc.MainThreadExecutor;
-import org.apache.flink.runtime.rpc.akka.messages.CallableMessage;
-import org.apache.flink.runtime.rpc.akka.messages.RunnableMessage;
-import scala.concurrent.Future;
-
-import java.util.concurrent.Callable;
-
-public abstract class BaseAkkaGateway implements MainThreadExecutor, AkkaGateway {
-	@Override
-	public void runAsync(Runnable runnable) {
-		getActorRef().tell(new RunnableMessage(runnable), ActorRef.noSender());
-	}
-
-	@Override
-	public <V> Future<V> callAsync(Callable<V> callable, Timeout timeout) {
-		return (Future<V>) Patterns.ask(getActorRef(), new CallableMessage(callable), timeout);
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaActor.java
deleted file mode 100644
index 9e04ea9..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaActor.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.jobmaster;
-
-import akka.actor.ActorRef;
-import akka.actor.Status;
-import org.apache.flink.runtime.rpc.akka.BaseAkkaActor;
-import org.apache.flink.runtime.rpc.akka.messages.RegisterAtResourceManager;
-import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
-import org.apache.flink.runtime.messages.Acknowledge;
-import org.apache.flink.runtime.rpc.akka.messages.UpdateTaskExecutionState;
-
-public class JobMasterAkkaActor extends BaseAkkaActor {
-	private final JobMaster jobMaster;
-
-	public JobMasterAkkaActor(JobMaster jobMaster) {
-		this.jobMaster = jobMaster;
-	}
-
-	@Override
-	public void onReceive(Object message) throws Exception {
-		if (message instanceof UpdateTaskExecutionState) {
-
-			final ActorRef sender = getSender();
-
-			UpdateTaskExecutionState updateTaskExecutionState = (UpdateTaskExecutionState) message;
-
-			try {
-				Acknowledge result = jobMaster.updateTaskExecutionState(updateTaskExecutionState.getTaskExecutionState());
-				sender.tell(new Status.Success(result), getSelf());
-			} catch (Exception e) {
-				sender.tell(new Status.Failure(e), getSelf());
-			}
-		} else if (message instanceof RegisterAtResourceManager) {
-			RegisterAtResourceManager registerAtResourceManager = (RegisterAtResourceManager) message;
-
-			jobMaster.registerAtResourceManager(registerAtResourceManager.getAddress());
-		} else {
-			super.onReceive(message);
-		}
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaGateway.java
deleted file mode 100644
index e6bf061..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaGateway.java
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.jobmaster;
-
-import akka.actor.ActorRef;
-import akka.pattern.AskableActorRef;
-import akka.util.Timeout;
-import org.apache.flink.runtime.rpc.akka.BaseAkkaGateway;
-import org.apache.flink.runtime.rpc.akka.messages.RegisterAtResourceManager;
-import org.apache.flink.runtime.rpc.jobmaster.JobMasterGateway;
-import org.apache.flink.runtime.messages.Acknowledge;
-import org.apache.flink.runtime.rpc.akka.messages.UpdateTaskExecutionState;
-import org.apache.flink.runtime.taskmanager.TaskExecutionState;
-import scala.concurrent.Future;
-import scala.reflect.ClassTag$;
-
-public class JobMasterAkkaGateway extends BaseAkkaGateway implements JobMasterGateway {
-	private final AskableActorRef actorRef;
-	private final Timeout timeout;
-
-	public JobMasterAkkaGateway(ActorRef actorRef, Timeout timeout) {
-		this.actorRef = new AskableActorRef(actorRef);
-		this.timeout = timeout;
-	}
-
-	@Override
-	public Future<Acknowledge> updateTaskExecutionState(TaskExecutionState taskExecutionState) {
-		return actorRef.ask(new UpdateTaskExecutionState(taskExecutionState), timeout)
-			.mapTo(ClassTag$.MODULE$.<Acknowledge>apply(Acknowledge.class));
-	}
-
-	@Override
-	public void registerAtResourceManager(String address) {
-		actorRef.actorRef().tell(new RegisterAtResourceManager(address), actorRef.actorRef());
-	}
-
-	@Override
-	public ActorRef getActorRef() {
-		return actorRef.actorRef();
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallAsync.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallAsync.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallAsync.java
new file mode 100644
index 0000000..79b7825
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallAsync.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.util.Preconditions;
+
+import java.io.Serializable;
+import java.util.concurrent.Callable;
+
+/**
+ * Message for asynchronous callable invocations
+ */
+public final class CallAsync implements Serializable {
+	private static final long serialVersionUID = 2834204738928484060L;
+
+	private transient Callable<?> callable;
+
+	public CallAsync(Callable<?> callable) {
+		this.callable = Preconditions.checkNotNull(callable);
+	}
+
+	public Callable<?> getCallable() {
+		return callable;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallableMessage.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallableMessage.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallableMessage.java
deleted file mode 100644
index f0e555f..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallableMessage.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.messages;
-
-import java.util.concurrent.Callable;
-
-public class CallableMessage<V> {
-	private final Callable<V> callable;
-
-	public CallableMessage(Callable<V> callable) {
-		this.callable = callable;
-	}
-
-	public Callable<V> getCallable() {
-		return callable;
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CancelTask.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CancelTask.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CancelTask.java
deleted file mode 100644
index 0b9e9dc..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CancelTask.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.messages;
-
-import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
-
-import java.io.Serializable;
-
-public class CancelTask implements Serializable {
-	private static final long serialVersionUID = -2998176874447950595L;
-	private final ExecutionAttemptID executionAttemptID;
-
-	public CancelTask(ExecutionAttemptID executionAttemptID) {
-		this.executionAttemptID = executionAttemptID;
-	}
-
-	public ExecutionAttemptID getExecutionAttemptID() {
-		return executionAttemptID;
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/ExecuteTask.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/ExecuteTask.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/ExecuteTask.java
deleted file mode 100644
index a83d539..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/ExecuteTask.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.messages;
-
-import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
-
-import java.io.Serializable;
-
-public class ExecuteTask implements Serializable {
-	private static final long serialVersionUID = -6769958430967048348L;
-	private final TaskDeploymentDescriptor taskDeploymentDescriptor;
-
-	public ExecuteTask(TaskDeploymentDescriptor taskDeploymentDescriptor) {
-		this.taskDeploymentDescriptor = taskDeploymentDescriptor;
-	}
-
-	public TaskDeploymentDescriptor getTaskDeploymentDescriptor() {
-		return taskDeploymentDescriptor;
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterAtResourceManager.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterAtResourceManager.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterAtResourceManager.java
deleted file mode 100644
index 3ade082..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterAtResourceManager.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.messages;
-
-import java.io.Serializable;
-
-public class RegisterAtResourceManager implements Serializable {
-
-	private static final long serialVersionUID = -4175905742620903602L;
-
-	private final String address;
-
-	public RegisterAtResourceManager(String address) {
-		this.address = address;
-	}
-
-	public String getAddress() {
-		return address;
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterJobMaster.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterJobMaster.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterJobMaster.java
deleted file mode 100644
index b35ea38..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterJobMaster.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.messages;
-
-import org.apache.flink.runtime.rpc.resourcemanager.JobMasterRegistration;
-
-import java.io.Serializable;
-
-public class RegisterJobMaster implements Serializable{
-	private static final long serialVersionUID = -4616879574192641507L;
-	private final JobMasterRegistration jobMasterRegistration;
-
-	public RegisterJobMaster(JobMasterRegistration jobMasterRegistration) {
-		this.jobMasterRegistration = jobMasterRegistration;
-	}
-
-	public JobMasterRegistration getJobMasterRegistration() {
-		return jobMasterRegistration;
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RequestSlot.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RequestSlot.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RequestSlot.java
deleted file mode 100644
index 85ceeec..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RequestSlot.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.messages;
-
-import org.apache.flink.runtime.rpc.resourcemanager.SlotRequest;
-
-import java.io.Serializable;
-
-public class RequestSlot implements Serializable {
-	private static final long serialVersionUID = 7207463889348525866L;
-
-	private final SlotRequest slotRequest;
-
-	public RequestSlot(SlotRequest slotRequest) {
-		this.slotRequest = slotRequest;
-	}
-
-	public SlotRequest getSlotRequest() {
-		return slotRequest;
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java
new file mode 100644
index 0000000..5d52ef1
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.util.Preconditions;
+
+import java.io.IOException;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutputStream;
+import java.io.Serializable;
+
+/**
+ * Rpc invocation message containing the remote procedure name, its parameter types and the
+ * corresponding call arguments.
+ */
+public final class RpcInvocation implements Serializable {
+	private static final long serialVersionUID = -7058254033460536037L;
+
+	private final String methodName;
+	private final Class<?>[] parameterTypes;
+	private transient Object[] args;
+
+	public RpcInvocation(String methodName, Class<?>[] parameterTypes, Object[] args) {
+		this.methodName = Preconditions.checkNotNull(methodName);
+		this.parameterTypes = Preconditions.checkNotNull(parameterTypes);
+		this.args = args;
+	}
+
+	public String getMethodName() {
+		return methodName;
+	}
+
+	public Class<?>[] getParameterTypes() {
+		return parameterTypes;
+	}
+
+	public Object[] getArgs() {
+		return args;
+	}
+
+	private void writeObject(ObjectOutputStream oos) throws IOException {
+		oos.defaultWriteObject();
+
+		if (args != null) {
+			// write has args true
+			oos.writeBoolean(true);
+
+			for (int i = 0; i < args.length; i++) {
+				try {
+					oos.writeObject(args[i]);
+				} catch (IOException e) {
+					Class<?> argClass = args[i].getClass();
+
+					throw new IOException("Could not write " + i + "th argument of method " +
+						methodName + ". The argument type is " + argClass + ". " +
+						"Make sure that this type is serializable.", e);
+				}
+			}
+		} else {
+			// write has args false
+			oos.writeBoolean(false);
+		}
+	}
+
+	private void readObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
+		ois.defaultReadObject();
+
+		boolean hasArgs = ois.readBoolean();
+
+		if (hasArgs) {
+			int numberArguments = parameterTypes.length;
+
+			args = new Object[numberArguments];
+
+			for (int i = 0; i < numberArguments; i++) {
+				args[i] = ois.readObject();
+			}
+		} else {
+			args = null;
+		}
+	}
+}
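
The hand-rolled writeObject/readObject pair above serializes the transient
argument array explicitly, so that a non-serializable argument fails with an
IOException naming the offending parameter. A minimal round-trip sketch; the
method name is taken from the gateway above, the argument value is illustrative:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class RpcInvocationRoundTrip {
	public static void main(String[] args) throws Exception {
		RpcInvocation original = new RpcInvocation(
			"registerAtResourceManager",
			new Class<?>[] { String.class },
			new Object[] { "akka://flink/user/resourcemanager" });

		ByteArrayOutputStream bos = new ByteArrayOutputStream();
		try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
			oos.writeObject(original);
		}

		try (ObjectInputStream ois = new ObjectInputStream(
				new ByteArrayInputStream(bos.toByteArray()))) {
			// readObject rebuilds the transient args array from the stream.
			RpcInvocation copy = (RpcInvocation) ois.readObject();
			System.out.println(copy.getMethodName()); // registerAtResourceManager
			System.out.println(copy.getArgs()[0]);    // the address argument
		}
	}
}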

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
new file mode 100644
index 0000000..fb95852
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.util.Preconditions;
+
+import java.io.Serializable;
+
+/**
+ * Message for asynchronous runnable invocations
+ */
+public final class RunAsync implements Serializable {
+	private static final long serialVersionUID = -3080595100695371036L;
+
+	private final transient Runnable runnable;
+
+	public RunAsync(Runnable runnable) {
+		this.runnable = Preconditions.checkNotNull(runnable);
+	}
+
+	public Runnable getRunnable() {
+		return runnable;
+	}
+}
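
RunAsync is the fire-and-forget counterpart of CallAsync; its runnable field is
transient (and final), so it is null after deserialization, which again implies
local-only delivery. A hypothetical dispatch of both message types, as a
receiving actor might perform it (the actual handler is not part of this excerpt):

public class AsyncMessageDispatch {
	static void handleMessage(Object message) throws Exception {
		if (message instanceof RunAsync) {
			// Fire-and-forget: execute in the endpoint's main thread.
			((RunAsync) message).getRunnable().run();
		} else if (message instanceof CallAsync) {
			// Request-response: a real actor would pipe the result (or the
			// failure) back to the sender.
			Object result = ((CallAsync) message).getCallable().call();
			System.out.println("result: " + result);
		}
	}
}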

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunnableMessage.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunnableMessage.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunnableMessage.java
deleted file mode 100644
index 3556738..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunnableMessage.java
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.messages;
-
-public class RunnableMessage {
-	private final Runnable runnable;
-
-	public RunnableMessage(Runnable runnable) {
-		this.runnable = runnable;
-	}
-
-	public Runnable getRunnable() {
-		return runnable;
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/UpdateTaskExecutionState.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/UpdateTaskExecutionState.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/UpdateTaskExecutionState.java
deleted file mode 100644
index f89cd2f..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/UpdateTaskExecutionState.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.messages;
-
-import org.apache.flink.runtime.taskmanager.TaskExecutionState;
-
-import java.io.Serializable;
-
-public class UpdateTaskExecutionState implements Serializable{
-	private static final long serialVersionUID = -6662229114427331436L;
-
-	private final TaskExecutionState taskExecutionState;
-
-	public UpdateTaskExecutionState(TaskExecutionState taskExecutionState) {
-		this.taskExecutionState = taskExecutionState;
-	}
-
-	public TaskExecutionState getTaskExecutionState() {
-		return taskExecutionState;
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaActor.java
deleted file mode 100644
index 13101f9..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaActor.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.resourcemanager;
-
-import akka.actor.ActorRef;
-import akka.actor.Status;
-import akka.pattern.Patterns;
-import org.apache.flink.runtime.rpc.akka.BaseAkkaActor;
-import org.apache.flink.runtime.rpc.resourcemanager.RegistrationResponse;
-import org.apache.flink.runtime.rpc.resourcemanager.ResourceManager;
-import org.apache.flink.runtime.rpc.resourcemanager.SlotAssignment;
-import org.apache.flink.runtime.rpc.akka.messages.RegisterJobMaster;
-import org.apache.flink.runtime.rpc.akka.messages.RequestSlot;
-import scala.concurrent.Future;
-
-public class ResourceManagerAkkaActor extends BaseAkkaActor {
-	private final ResourceManager resourceManager;
-
-	public ResourceManagerAkkaActor(ResourceManager resourceManager) {
-		this.resourceManager = resourceManager;
-	}
-
-	@Override
-	public void onReceive(Object message) throws Exception {
-		final ActorRef sender = getSender();
-
-		if (message instanceof RegisterJobMaster) {
-			RegisterJobMaster registerJobMaster = (RegisterJobMaster) message;
-
-			try {
-				Future<RegistrationResponse> response = resourceManager.registerJobMaster(registerJobMaster.getJobMasterRegistration());
-				Patterns.pipe(response, getContext().dispatcher()).to(sender());
-			} catch (Exception e) {
-				sender.tell(new Status.Failure(e), getSelf());
-			}
-		} else if (message instanceof RequestSlot) {
-			RequestSlot requestSlot = (RequestSlot) message;
-
-			try {
-				SlotAssignment response = resourceManager.requestSlot(requestSlot.getSlotRequest());
-				sender.tell(new Status.Success(response), getSelf());
-			} catch (Exception e) {
-				sender.tell(new Status.Failure(e), getSelf());
-			}
-		} else {
-			super.onReceive(message);
-		}
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaGateway.java
deleted file mode 100644
index 1304707..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaGateway.java
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.resourcemanager;
-
-import akka.actor.ActorRef;
-import akka.pattern.AskableActorRef;
-import akka.util.Timeout;
-import org.apache.flink.runtime.rpc.akka.BaseAkkaGateway;
-import org.apache.flink.runtime.rpc.resourcemanager.JobMasterRegistration;
-import org.apache.flink.runtime.rpc.resourcemanager.RegistrationResponse;
-import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
-import org.apache.flink.runtime.rpc.resourcemanager.SlotAssignment;
-import org.apache.flink.runtime.rpc.resourcemanager.SlotRequest;
-import org.apache.flink.runtime.rpc.akka.messages.RegisterJobMaster;
-import org.apache.flink.runtime.rpc.akka.messages.RequestSlot;
-import scala.concurrent.Future;
-import scala.concurrent.duration.FiniteDuration;
-import scala.reflect.ClassTag$;
-
-public class ResourceManagerAkkaGateway extends BaseAkkaGateway implements ResourceManagerGateway {
-	private final AskableActorRef actorRef;
-	private final Timeout timeout;
-
-	public ResourceManagerAkkaGateway(ActorRef actorRef, Timeout timeout) {
-		this.actorRef = new AskableActorRef(actorRef);
-		this.timeout = timeout;
-	}
-
-	@Override
-	public Future<RegistrationResponse> registerJobMaster(JobMasterRegistration jobMasterRegistration, FiniteDuration timeout) {
-		return actorRef.ask(new RegisterJobMaster(jobMasterRegistration), new Timeout(timeout))
-			.mapTo(ClassTag$.MODULE$.<RegistrationResponse>apply(RegistrationResponse.class));
-	}
-
-	@Override
-	public Future<RegistrationResponse> registerJobMaster(JobMasterRegistration jobMasterRegistration) {
-		return actorRef.ask(new RegisterJobMaster(jobMasterRegistration), timeout)
-			.mapTo(ClassTag$.MODULE$.<RegistrationResponse>apply(RegistrationResponse.class));
-	}
-
-	@Override
-	public Future<SlotAssignment> requestSlot(SlotRequest slotRequest) {
-		return actorRef.ask(new RequestSlot(slotRequest), timeout)
-			.mapTo(ClassTag$.MODULE$.<SlotAssignment>apply(SlotAssignment.class));
-	}
-
-	@Override
-	public ActorRef getActorRef() {
-		return actorRef.actorRef();
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaActor.java
deleted file mode 100644
index ed522cc..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaActor.java
+++ /dev/null
@@ -1,77 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.taskexecutor;
-
-import akka.actor.ActorRef;
-import akka.actor.Status;
-import akka.dispatch.OnComplete;
-import org.apache.flink.runtime.messages.Acknowledge;
-import org.apache.flink.runtime.rpc.akka.BaseAkkaActor;
-import org.apache.flink.runtime.rpc.akka.messages.CancelTask;
-import org.apache.flink.runtime.rpc.akka.messages.ExecuteTask;
-import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorGateway;
-
-public class TaskExecutorAkkaActor extends BaseAkkaActor {
-	private final TaskExecutorGateway taskExecutor;
-
-	public TaskExecutorAkkaActor(TaskExecutorGateway taskExecutor) {
-		this.taskExecutor = taskExecutor;
-	}
-
-	@Override
-	public void onReceive(Object message) throws Exception {
-		final ActorRef sender = getSender();
-
-		if (message instanceof ExecuteTask) {
-			ExecuteTask executeTask = (ExecuteTask) message;
-
-			taskExecutor.executeTask(executeTask.getTaskDeploymentDescriptor()).onComplete(
-				new OnComplete<Acknowledge>() {
-					@Override
-					public void onComplete(Throwable failure, Acknowledge success) throws Throwable {
-						if (failure != null) {
-							sender.tell(new Status.Failure(failure), getSelf());
-						} else {
-							sender.tell(new Status.Success(Acknowledge.get()), getSelf());
-						}
-					}
-				},
-				getContext().dispatcher()
-			);
-		} else if (message instanceof CancelTask) {
-			CancelTask cancelTask = (CancelTask) message;
-
-			taskExecutor.cancelTask(cancelTask.getExecutionAttemptID()).onComplete(
-				new OnComplete<Acknowledge>() {
-					@Override
-					public void onComplete(Throwable failure, Acknowledge success) throws Throwable {
-						if (failure != null) {
-							sender.tell(new Status.Failure(failure), getSelf());
-						} else {
-							sender.tell(new Status.Success(Acknowledge.get()), getSelf());
-						}
-					}
-				},
-				getContext().dispatcher()
-			);
-		} else {
-			super.onReceive(message);
-		}
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaGateway.java
deleted file mode 100644
index 7f0a522..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaGateway.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.akka.taskexecutor;
-
-import akka.actor.ActorRef;
-import akka.pattern.AskableActorRef;
-import akka.util.Timeout;
-import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
-import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
-import org.apache.flink.runtime.messages.Acknowledge;
-import org.apache.flink.runtime.rpc.akka.BaseAkkaGateway;
-import org.apache.flink.runtime.rpc.akka.messages.CancelTask;
-import org.apache.flink.runtime.rpc.akka.messages.ExecuteTask;
-import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorGateway;
-import scala.concurrent.Future;
-import scala.reflect.ClassTag$;
-
-public class TaskExecutorAkkaGateway extends BaseAkkaGateway implements TaskExecutorGateway {
-	private final AskableActorRef actorRef;
-	private final Timeout timeout;
-
-	public TaskExecutorAkkaGateway(ActorRef actorRef, Timeout timeout) {
-		this.actorRef = new AskableActorRef(actorRef);
-		this.timeout = timeout;
-	}
-
-	@Override
-	public Future<Acknowledge> executeTask(TaskDeploymentDescriptor taskDeploymentDescriptor) {
-		return actorRef.ask(new ExecuteTask(taskDeploymentDescriptor), timeout)
-			.mapTo(ClassTag$.MODULE$.<Acknowledge>apply(Acknowledge.class));
-	}
-
-	@Override
-	public Future<Acknowledge> cancelTask(ExecutionAttemptID executionAttemptId) {
-		return actorRef.ask(new CancelTask(executionAttemptId), timeout)
-			.mapTo(ClassTag$.MODULE$.<Acknowledge>apply(Acknowledge.class));
-	}
-
-	@Override
-	public ActorRef getActorRef() {
-		return actorRef.actorRef();
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
index b81b19c..e53cd68 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
@@ -30,6 +30,7 @@ import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcService;
 import org.apache.flink.runtime.taskmanager.TaskExecutionState;
+import org.apache.flink.util.Preconditions;
 import scala.Tuple2;
 import scala.concurrent.ExecutionContext;
 import scala.concurrent.ExecutionContext$;
@@ -76,7 +77,8 @@ public class JobMaster extends RpcEndpoint<JobMasterGateway> {
 
 	public JobMaster(RpcService rpcService, ExecutorService executorService) {
 		super(rpcService);
-		executionContext = ExecutionContext$.MODULE$.fromExecutor(executorService);
+		executionContext = ExecutionContext$.MODULE$.fromExecutor(
+			Preconditions.checkNotNull(executorService));
 		scheduledExecutorService = new ScheduledThreadPoolExecutor(1);
 	}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
index c7e8def..729ef0c 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
@@ -25,6 +25,7 @@ import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcService;
 import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
 import org.apache.flink.runtime.rpc.jobmaster.JobMasterGateway;
+import org.apache.flink.util.Preconditions;
 import scala.concurrent.ExecutionContext;
 import scala.concurrent.ExecutionContext$;
 import scala.concurrent.Future;
@@ -49,7 +50,8 @@ public class ResourceManager extends RpcEndpoint<ResourceManagerGateway> {
 
 	public ResourceManager(RpcService rpcService, ExecutorService executorService) {
 		super(rpcService);
-		this.executionContext = ExecutionContext$.MODULE$.fromExecutor(executorService);
+		this.executionContext = ExecutionContext$.MODULE$.fromExecutor(
+			Preconditions.checkNotNull(executorService));
 		this.jobMasterGateways = new HashMap<>();
 	}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
index cdfc3bd..3a7dd9f 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
@@ -25,6 +25,7 @@ import org.apache.flink.runtime.messages.Acknowledge;
 import org.apache.flink.runtime.rpc.RpcMethod;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.util.Preconditions;
 import scala.concurrent.ExecutionContext;
 
 import java.util.HashSet;
@@ -47,7 +48,8 @@ public class TaskExecutor extends RpcEndpoint<TaskExecutorGateway> {
 
 	public TaskExecutor(RpcService rpcService, ExecutorService executorService) {
 		super(rpcService);
-		this.executionContext = ExecutionContexts$.MODULE$.fromExecutor(executorService);
+		this.executionContext = ExecutionContexts$.MODULE$.fromExecutor(
+			Preconditions.checkNotNull(executorService));
 	}
 
 	/**

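
JobMaster, ResourceManager, and TaskExecutor all receive the same change here:
the executor is validated with Preconditions.checkNotNull before being wrapped,
so a null executor now fails at construction time rather than at a later use
site. The pattern in isolation (hypothetical class for illustration):

import org.apache.flink.util.Preconditions;

import java.util.concurrent.Executor;

public class FailFastEndpoint {
	private final Executor executor;

	public FailFastEndpoint(Executor executor) {
		// Throws NullPointerException immediately, pointing at the
		// constructor call instead of a later dereference.
		this.executor = Preconditions.checkNotNull(executor);
	}
}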
http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
index 0ded25e..e50533e 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
@@ -18,15 +18,15 @@
 
 package org.apache.flink.runtime.rpc;
 
+import org.apache.flink.util.ReflectionUtil;
 import org.apache.flink.util.TestLogger;
 import org.junit.Test;
 import org.reflections.Reflections;
 import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
 
 import java.lang.annotation.Annotation;
 import java.lang.reflect.Method;
-import java.lang.reflect.ParameterizedType;
-import java.lang.reflect.Type;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -51,9 +51,8 @@ public class RpcCompletenessTest extends TestLogger {
 
 		for (Class<? extends RpcEndpoint> rpcEndpoint :classes){
 			c = rpcEndpoint;
-			Type superClass = c.getGenericSuperclass();
 
-			Class<?> rpcGatewayType = extractTypeParameter(superClass, 0);
+			Class<?> rpcGatewayType = ReflectionUtil.getTemplateType1(c);
 
 			if (rpcGatewayType != null) {
 				checkCompleteness(rpcEndpoint, (Class<? extends RpcGateway>) rpcGatewayType);
@@ -137,13 +136,16 @@ public class RpcCompletenessTest extends TestLogger {
 		}
 
 		Annotation[][] parameterAnnotations = gatewayMethod.getParameterAnnotations();
+		Class<?>[] parameterTypes = gatewayMethod.getParameterTypes();
 		int rpcTimeoutParameters = 0;
 
-		for (Annotation[] parameterAnnotation : parameterAnnotations) {
-			for (Annotation annotation : parameterAnnotation) {
-				if (annotation.equals(RpcTimeout.class)) {
-					rpcTimeoutParameters++;
-				}
+		for (int i = 0; i < parameterAnnotations.length; i++) {
+			if (isRpcTimeout(parameterAnnotations[i])) {
+				assertTrue(
+					"The rpc timeout has to be of type " + FiniteDuration.class.getName() + ".",
+					parameterTypes[i].equals(FiniteDuration.class));
+
+				rpcTimeoutParameters++;
 			}
 		}
 
@@ -211,10 +213,10 @@ public class RpcCompletenessTest extends TestLogger {
 				if (!futureClass.equals(RpcCompletenessTest.futureClass)) {
 					return false;
 				} else {
-					Class<?> valueClass = extractTypeParameter(futureClass, 0);
+					Class<?> valueClass = ReflectionUtil.getTemplateType1(gatewayMethod.getGenericReturnType());
 
 					if (endpointMethod.getReturnType().equals(futureClass)) {
-						Class<?> rpcEndpointValueClass = extractTypeParameter(endpointMethod.getReturnType(), 0);
+						Class<?> rpcEndpointValueClass = ReflectionUtil.getTemplateType1(endpointMethod.getGenericReturnType());
 
 						// check if we have the same future value types
 						if (valueClass != null && rpcEndpointValueClass != null && !checkType(valueClass, rpcEndpointValueClass)) {
@@ -251,7 +253,7 @@ public class RpcCompletenessTest extends TestLogger {
 		if (method.getReturnType().equals(Void.TYPE)) {
 			builder.append("void").append(" ");
 		} else if (method.getReturnType().equals(futureClass)) {
-			Class<?> valueClass = extractTypeParameter(method.getGenericReturnType(), 0);
+			Class<?> valueClass = ReflectionUtil.getTemplateType1(method.getGenericReturnType());
 
 			builder
 				.append(futureClass.getSimpleName())
@@ -291,30 +293,6 @@ public class RpcCompletenessTest extends TestLogger {
 		return builder.toString();
 	}
 
-	private Class<?> extractTypeParameter(Type genericType, int position) {
-		if (genericType instanceof ParameterizedType) {
-			ParameterizedType parameterizedType = (ParameterizedType) genericType;
-
-			Type[] typeArguments = parameterizedType.getActualTypeArguments();
-
-			if (position < 0 || position >= typeArguments.length) {
-				throw new IndexOutOfBoundsException("The generic type " +
-					parameterizedType.getRawType() + " only has " + typeArguments.length +
-					" type arguments.");
-			} else {
-				Type typeArgument = typeArguments[position];
-
-				if (typeArgument instanceof Class<?>) {
-					return (Class<?>) typeArgument;
-				} else {
-					return null;
-				}
-			}
-		} else {
-			return null;
-		}
-	}
-
 	private boolean isRpcTimeout(Annotation[] annotations) {
 		for (Annotation annotation : annotations) {
 			if (annotation.annotationType().equals(RpcTimeout.class)) {

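
The test now delegates generic-type extraction to ReflectionUtil.getTemplateType1
instead of the removed extractTypeParameter helper. A small sketch of the same
usage pattern, assuming getTemplateType1 resolves the first type argument the way
the removed helper did:

import java.lang.reflect.Method;
import org.apache.flink.util.ReflectionUtil;
import scala.concurrent.Future;

public class TemplateTypeExample {
	interface Gateway {
		Future<String> ask();
	}

	public static void main(String[] args) throws Exception {
		Method method = Gateway.class.getMethod("ask");
		// Resolve the String in Future<String> from the generic return type.
		Class<?> valueClass =
			ReflectionUtil.getTemplateType1(method.getGenericReturnType());
		System.out.println(valueClass); // class java.lang.String
	}
}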

[22/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/dataset_transformations.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/dataset_transformations.md b/docs/dev/batch/dataset_transformations.md
new file mode 100644
index 0000000..1bfb87c
--- /dev/null
+++ b/docs/dev/batch/dataset_transformations.md
@@ -0,0 +1,2335 @@
+---
+title: "DataSet Transformations"
+nav-title: Transformations
+nav-parent_id: batch
+nav-pos: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This document gives a deep-dive into the available transformations on DataSets. For a general introduction to the
+Flink Java API, please refer to the [Programming Guide](index.html).
+
+For zipping elements in a data set with a dense index, please refer to the [Zip Elements Guide](zip_elements_guide.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+### Map
+
+The Map transformation applies a user-defined map function on each element of a DataSet.
+It implements a one-to-one mapping, that is, exactly one element must be returned by
+the function.
+
+The following code transforms a DataSet of Integer pairs into a DataSet of Integers:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// MapFunction that adds two integer values
+public class IntAdder implements MapFunction<Tuple2<Integer, Integer>, Integer> {
+  @Override
+  public Integer map(Tuple2<Integer, Integer> in) {
+    return in.f0 + in.f1;
+  }
+}
+
+// [...]
+DataSet<Tuple2<Integer, Integer>> intPairs = // [...]
+DataSet<Integer> intSums = intPairs.map(new IntAdder());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val intPairs: DataSet[(Int, Int)] = // [...]
+val intSums = intPairs.map { pair => pair._1 + pair._2 }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ intSums = intPairs.map(lambda x: sum(x))
+~~~
+
+</div>
+</div>
+
+### FlatMap
+
+The FlatMap transformation applies a user-defined flat-map function on each element of a DataSet.
+This variant of a map function can return arbitrarily many result elements (including none) for each input element.
+
+The following code transforms a DataSet of text lines into a DataSet of words:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// FlatMapFunction that tokenizes a String by whitespace characters and emits all String tokens.
+public class Tokenizer implements FlatMapFunction<String, String> {
+  @Override
+  public void flatMap(String value, Collector<String> out) {
+    for (String token : value.split("\\W")) {
+      out.collect(token);
+    }
+  }
+}
+
+// [...]
+DataSet<String> textLines = // [...]
+DataSet<String> words = textLines.flatMap(new Tokenizer());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val textLines: DataSet[String] = // [...]
+val words = textLines.flatMap { _.split(" ") }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ words = lines.flat_map(lambda x,c: [line.split() for line in x])
+~~~
+
+</div>
+</div>
+
+### MapPartition
+
+MapPartition transforms a parallel partition in a single function call. The map-partition function
+gets the partition as an Iterable and can produce an arbitrary number of result values. The number
+of elements in each partition depends on the degree of parallelism and previous operations.
+
+The following code transforms a DataSet of text lines into a DataSet of counts per partition:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+public class PartitionCounter implements MapPartitionFunction<String, Long> {
+
+  public void mapPartition(Iterable<String> values, Collector<Long> out) {
+    long c = 0;
+    for (String s : values) {
+      c++;
+    }
+    out.collect(c);
+  }
+}
+
+// [...]
+DataSet<String> textLines = // [...]
+DataSet<Long> counts = textLines.mapPartition(new PartitionCounter());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val textLines: DataSet[String] = // [...]
+// Some is required because the return value must be a Collection.
+// There is an implicit conversion from Option to a Collection.
+val counts = textLines.mapPartition { in => Some(in.size) }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ counts = lines.map_partition(lambda x,c: [sum(1 for _ in x)])
+~~~
+
+</div>
+</div>
+
+### Filter
+
+The Filter transformation applies a user-defined filter function on each element of a DataSet and retains only those elements for which the function returns `true`.
+
+The following code removes all Integers smaller than zero from a DataSet:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// FilterFunction that filters out all Integers smaller than zero.
+public class NaturalNumberFilter implements FilterFunction<Integer> {
+  @Override
+  public boolean filter(Integer number) {
+    return number >= 0;
+  }
+}
+
+// [...]
+DataSet<Integer> intNumbers = // [...]
+DataSet<Integer> naturalNumbers = intNumbers.filter(new NaturalNumberFilter());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val intNumbers: DataSet[Int] = // [...]
+val naturalNumbers = intNumbers.filter { _ >= 0 }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ naturalNumbers = intNumbers.filter(lambda x: x >= 0)
+~~~
+
+</div>
+</div>
+
+**IMPORTANT:** The system assumes that the function does not modify the elements on which the predicate is applied. Violating this assumption
+can lead to incorrect results.
+
+### Projection of Tuple DataSet
+
+The Project transformation removes or moves Tuple fields of a Tuple DataSet.
+The `project(int...)` method selects Tuple fields that should be retained by their index and defines their order in the output Tuple.
+
+Projections do not require the definition of a user function.
+
+The following code shows different ways to apply a Project transformation on a DataSet:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple3<Integer, Double, String>> in = // [...]
+// converts Tuple3<Integer, Double, String> into Tuple2<String, Integer>
+DataSet<Tuple2<String, Integer>> out = in.project(2,0);
+~~~
+
+#### Projection with Type Hint
+
+Note that the Java compiler cannot infer the return type of the `project` operator. This can cause a problem if you call another operator on the result of a `project` operator, such as:
+
+~~~java
+DataSet<Tuple5<String,String,String,String,String>> ds = ....
+DataSet<Tuple1<String>> ds2 = ds.project(0).distinct(0);
+~~~
+
+This problem can be overcome by hinting the return type of the `project` operator like this:
+
+~~~java
+DataSet<Tuple1<String>> ds2 = ds.<Tuple1<String>>project(0).distinct(0);
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+Not supported.
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+out = in.project(2,0);
+~~~
+
+</div>
+</div>
+
+### Transformations on Grouped DataSet
+
+The reduce operations can operate on grouped data sets. Specifying the key to
+be used for grouping can be done in many ways:
+
+- key expressions
+- a key-selector function
+- one or more field position keys (Tuple DataSet only)
+- Case Class fields (Case Classes only)
+
+Please look at the reduce examples to see how the grouping keys are specified.
+
+### Reduce on Grouped DataSet
+
+A Reduce transformation that is applied on a grouped DataSet reduces each group to a single
+element using a user-defined reduce function.
+For each group of input elements, a reduce function successively combines pairs of elements into one
+element until only a single element for each group remains.
+
+#### Reduce on DataSet Grouped by Key Expression
+
+Key expressions specify one or more fields of each element of a DataSet. Each key expression is
+either the name of a public field or a getter method. A dot can be used to drill down into objects.
+The key expression "*" selects all fields.
+The following code shows how to group a POJO DataSet using key expressions and to reduce it
+with a reduce function.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// some ordinary POJO
+public class WC {
+  public String word;
+  public int count;
+  // [...]
+}
+
+// ReduceFunction that sums Integer attributes of a POJO
+public class WordCounter implements ReduceFunction<WC> {
+  @Override
+  public WC reduce(WC in1, WC in2) {
+    return new WC(in1.word, in1.count + in2.count);
+  }
+}
+
+// [...]
+DataSet<WC> words = // [...]
+DataSet<WC> wordCounts = words
+                         // DataSet grouping on field "word"
+                         .groupBy("word")
+                         // apply ReduceFunction on grouped DataSet
+                         .reduce(new WordCounter());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+// some ordinary POJO
+class WC(val word: String, val count: Int) {
+  def this() {
+    this(null, -1)
+  }
+  // [...]
+}
+
+val words: DataSet[WC] = // [...]
+val wordCounts = words.groupBy("word").reduce {
+  (w1, w2) => new WC(w1.word, w1.count + w2.count)
+}
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+</div>
+</div>
+
+#### Reduce on DataSet Grouped by KeySelector Function
+
+A key-selector function extracts a key value from each element of a DataSet. The extracted key
+value is used to group the DataSet.
+The following code shows how to group a POJO DataSet using a key-selector function and to reduce it
+with a reduce function.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// some ordinary POJO
+public class WC {
+  public String word;
+  public int count;
+  // [...]
+}
+
+// ReduceFunction that sums Integer attributes of a POJO
+public class WordCounter implements ReduceFunction<WC> {
+  @Override
+  public WC reduce(WC in1, WC in2) {
+    return new WC(in1.word, in1.count + in2.count);
+  }
+}
+
+// [...]
+DataSet<WC> words = // [...]
+DataSet<WC> wordCounts = words
+                         // DataSet grouping on field "word"
+                         .groupBy(new SelectWord())
+                         // apply ReduceFunction on grouped DataSet
+                         .reduce(new WordCounter());
+
+public class SelectWord implements KeySelector<WC, String> {
+  @Override
+  public String getKey(WC w) {
+    return w.word;
+  }
+}
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+// some ordinary POJO
+class WC(val word: String, val count: Int) {
+  def this() {
+    this(null, -1)
+  }
+  // [...]
+}
+
+val words: DataSet[WC] = // [...]
+val wordCounts = words.groupBy { _.word } reduce {
+  (w1, w2) => new WC(w1.word, w1.count + w2.count)
+}
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+class WordCounter(ReduceFunction):
+    def reduce(self, in1, in2):
+        return (in1[0], in1[1] + in2[1])
+
+words = // [...]
+wordCounts = words \
+    .group_by(lambda x: x[0]) \
+    .reduce(WordCounter())
+~~~
+</div>
+</div>
+
+#### Reduce on DataSet Grouped by Field Position Keys (Tuple DataSets only)
+
+Field position keys specify one or more fields of a Tuple DataSet that are used as grouping keys.
+The following code shows how to use field position keys and apply a reduce function:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple3<String, Integer, Double>> tuples = // [...]
+DataSet<Tuple3<String, Integer, Double>> reducedTuples = tuples
+                                         // group DataSet on first and second field of Tuple
+                                         .groupBy(0, 1)
+                                         // apply ReduceFunction on grouped DataSet
+                                         .reduce(new MyTupleReducer());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val tuples: DataSet[(String, Int, Double)] = // [...]
+// group on the first and second Tuple field
+val reducedTuples = tuples.groupBy(0, 1).reduce { ... }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ reducedTuples = tuples.group_by(0, 1).reduce( ... )
+~~~
+
+</div>
+</div>
+
+#### Reduce on DataSet grouped by Case Class Fields
+
+When using Case Classes you can also specify the grouping key using the names of the fields:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+Not supported.
+~~~
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+case class MyClass(val a: String, b: Int, c: Double)
+val tuples: DataSet[MyClass] = // [...]
+// group on the first and second field
+val reducedTuples = tuples.groupBy("a", "b").reduce { ... }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+</div>
+</div>
+
+### GroupReduce on Grouped DataSet
+
+A GroupReduce transformation that is applied on a grouped DataSet calls a user-defined
+group-reduce function for each group. The difference
+between this and *Reduce* is that the user-defined function gets the whole group at once.
+The function is invoked with an Iterable over all elements of a group and can return an arbitrary
+number of result elements.
+
+#### GroupReduce on DataSet Grouped by Field Position Keys (Tuple DataSets only)
+
+The following code shows how duplicate strings can be removed from a DataSet grouped by Integer.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+public class DistinctReduce
+         implements GroupReduceFunction<Tuple2<Integer, String>, Tuple2<Integer, String>> {
+
+  @Override
+  public void reduce(Iterable<Tuple2<Integer, String>> in, Collector<Tuple2<Integer, String>> out) {
+
+    Set<String> uniqStrings = new HashSet<String>();
+    Integer key = null;
+
+    // add all strings of the group to the set
+    for (Tuple2<Integer, String> t : in) {
+      key = t.f0;
+      uniqStrings.add(t.f1);
+    }
+
+    // emit all unique strings.
+    for (String s : uniqStrings) {
+      out.collect(new Tuple2<Integer, String>(key, s));
+    }
+  }
+}
+
+// [...]
+DataSet<Tuple2<Integer, String>> input = // [...]
+DataSet<Tuple2<Integer, String>> output = input
+                           .groupBy(0)            // group DataSet by the first tuple field
+                           .reduceGroup(new DistinctReduce());  // apply GroupReduceFunction
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[(Int, String)] = // [...]
+val output = input.groupBy(0).reduceGroup {
+      (in, out: Collector[(Int, String)]) =>
+        in.toSet foreach (out.collect)
+    }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ class DistinctReduce(GroupReduceFunction):
+   def reduce(self, iterator, collector):
+     dic = dict()
+     for value in iterator:
+       dic[value[1]] = 1
+     for key in dic.keys():
+       collector.collect(key)
+
+ output = data.group_by(0).reduce_group(DistinctReduce())
+~~~
+
+</div>
+</div>
+
+#### GroupReduce on DataSet Grouped by Key Expression, KeySelector Function, or Case Class Fields
+
+Work analogous to [key expressions](#reduce-on-dataset-grouped-by-key-expression),
+[key-selector functions](#reduce-on-dataset-grouped-by-keyselector-function),
+and [case class fields](#reduce-on-dataset-grouped-by-case-class-fields) in *Reduce* transformations.
+
+
+#### GroupReduce on sorted groups
+
+A group-reduce function accesses the elements of a group using an Iterable. Optionally, the Iterable can hand out the elements of a group in a specified order. In many cases this can help to reduce the complexity of a user-defined
+group-reduce function and improve its efficiency.
+
+The following code shows another example of how to remove duplicate Strings from a DataSet grouped by an Integer and sorted by String.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// GroupReduceFunction that removes consecutive identical elements
+public class DistinctReduce
+         implements GroupReduceFunction<Tuple2<Integer, String>, Tuple2<Integer, String>> {
+
+  @Override
+  public void reduce(Iterable<Tuple2<Integer, String>> in, Collector<Tuple2<Integer, String>> out) {
+    Integer key = null;
+    String comp = null;
+
+    for (Tuple2<Integer, String> t : in) {
+      key = t.f0;
+      String next = t.f1;
+
+      // check if strings are different
+      if (comp == null || !next.equals(comp)) {
+        out.collect(new Tuple2<Integer, String>(key, next));
+        comp = next;
+      }
+    }
+  }
+}
+
+// [...]
+DataSet<Tuple2<Integer, String>> input = // [...]
+DataSet<Tuple2<Integer, String>> output = input
+                         .groupBy(0)                         // group DataSet by first field
+                         .sortGroup(1, Order.ASCENDING)      // sort groups on second tuple field
+                         .reduceGroup(new DistinctReduce());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[(Int, String)] = // [...]
+val output = input.groupBy(0).sortGroup(1, Order.ASCENDING).reduceGroup {
+      (in, out: Collector[(Int, String)]) =>
+        var prev: (Int, String) = null
+        for (t <- in) {
+          if (prev == null || prev != t) {
+            out.collect(t)
+            prev = t
+          }
+        }
+    }
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ class DistinctReduce(GroupReduceFunction):
+   def reduce(self, iterator, collector):
+     dic = dict()
+     for value in iterator:
+       dic[value[1]] = 1
+     for key in dic.keys():
+       collector.collect(key)
+
+ output = data.group_by(0).sort_group(1, Order.ASCENDING).reduce_group(DistinctReduce())
+~~~
+
+
+</div>
+</div>
+
+**Note:** A GroupSort often comes for free if the grouping is established using a sort-based execution strategy of an operator before the reduce operation.
+
+#### Combinable GroupReduceFunctions
+
+In contrast to a reduce function, a group-reduce function is not
+implicitly combinable. In order to make a group-reduce function
+combinable it must implement the `GroupCombineFunction` interface.
+
+**Important**: The generic input and output types of
+the `GroupCombineFunction` interface must be equal to the generic input type
+of the `GroupReduceFunction` as shown in the following example:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// Combinable GroupReduceFunction that computes a sum.
+public class MyCombinableGroupReducer implements
+  GroupReduceFunction<Tuple2<String, Integer>, String>,
+  GroupCombineFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>
+{
+  @Override
+  public void reduce(Iterable<Tuple2<String, Integer>> in,
+                     Collector<String> out) {
+
+    String key = null;
+    int sum = 0;
+
+    for (Tuple2<String, Integer> curr : in) {
+      key = curr.f0;
+      sum += curr.f1;
+    }
+    // concat key and sum and emit
+    out.collect(key + "-" + sum);
+  }
+
+  @Override
+  public void combine(Iterable<Tuple2<String, Integer>> in,
+                      Collector<Tuple2<String, Integer>> out) {
+    String key = null;
+    int sum = 0;
+
+    for (Tuple2<String, Integer> curr : in) {
+      key = curr.f0;
+      sum += curr.f1;
+    }
+    // emit tuple with key and sum
+    out.collect(new Tuple2<>(key, sum));
+  }
+}
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+
+// Combinable GroupReduceFunction that computes a sum.
+class MyCombinableGroupReducer
+  extends GroupReduceFunction[(String, Int), String]
+  with GroupCombineFunction[(String, Int), (String, Int)]
+{
+  override def reduce(
+    in: java.lang.Iterable[(String, Int)],
+    out: Collector[String]): Unit =
+  {
+    val r: (String, Int) =
+      in.asScala.reduce( (a,b) => (a._1, a._2 + b._2) )
+    // concat key and sum and emit
+    out.collect (r._1 + "-" + r._2)
+  }
+
+  override def combine(
+    in: java.lang.Iterable[(String, Int)],
+    out: Collector[(String, Int)]): Unit =
+  {
+    val r: (String, Int) =
+      in.asScala.reduce( (a,b) => (a._1, a._2 + b._2) )
+    // emit tuple with key and sum
+    out.collect(r)
+  }
+}
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ class GroupReduce(GroupReduceFunction):
+   def reduce(self, iterator, collector):
+     key, int_sum = iterator.next()
+     for value in iterator:
+       int_sum += value[1]
+     collector.collect(key + "-" + int_sum))
+
+   def combine(self, iterator, collector):
+     key, int_sum = iterator.next()
+     for value in iterator:
+       int_sum += value[1]
+     collector.collect((key, int_sum))
+
+data.reduce_group(GroupReduce(), combinable=True)
+~~~
+
+</div>
+</div>
+
+### GroupCombine on a Grouped DataSet
+
+The GroupCombine transformation is the generalized form of the combine step in
+the combinable GroupReduceFunction. It is generalized in the sense that it
+allows combining of input type `I` to an arbitrary output type `O`. In contrast,
+the combine step in the GroupReduce only allows combining from input type `I` to
+output type `I`. This is because the reduce step in the GroupReduceFunction
+expects input type `I`.
+
+In some applications, it is desirable to combine a DataSet into an intermediate
+format before performing additional transformations (e.g. to reduce data
+size). This can be achieved with a CombineGroup transformation at very little
+cost.
+
+**Note:** The GroupCombine on a Grouped DataSet is performed in memory with a
+  greedy strategy which may not process all data at once but in multiple
+  steps. It is also performed on the individual partitions without a data
+  exchange like in a GroupReduce transformation. This may lead to partial
+  results.
+
+The following example demonstrates the use of a CombineGroup transformation for
+an alternative WordCount implementation.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<String> input = [..] // The words received as input
+
+DataSet<Tuple2<String, Integer>> combinedWords = input
+  .groupBy(0) // group identical words
+  .combineGroup(new GroupCombineFunction<String, Tuple2<String, Integer>>() {
+
+    public void combine(Iterable<String> words, Collector<Tuple2<String, Integer>> out) { // combine
+        String key = null;
+        int count = 0;
+
+        for (String word : words) {
+            key = word;
+            count++;
+        }
+        // emit tuple with word and count
+        out.collect(new Tuple2<>(key, count));
+    }
+});
+
+DataSet<Tuple2<String, Integer>> output = combinedWords
+  .groupBy(0)                             // group by words again
+  .reduceGroup(new GroupReduceFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>() { // group reduce with full data exchange
+
+    public void reduce(Iterable<Tuple2<String, Integer>> words, Collector<Tuple2<String, Integer>> out) {
+        String key = null;
+        int count = 0;
+
+        for (Tuple2<String, Integer> word : words) {
+            key = word.f0;
+            count += word.f1;
+        }
+        // emit tuple with word and count
+        out.collect(new Tuple2<>(key, count));
+    }
+});
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[String] = [..] // The words received as input
+
+val combinedWords: DataSet[(String, Int)] = input
+  .groupBy(0)
+  .combineGroup {
+    (words, out: Collector[(String, Int)]) =>
+        var key: String = null
+        var count = 0
+
+        for (word <- words) {
+            key = word
+            count += 1
+        }
+        out.collect((key, count))
+}
+
+val output: DataSet[(String, Int)] = combinedWords
+  .groupBy(0)
+  .reduceGroup {
+    (words, out: Collector[(String, Int)]) =>
+        var key: String = null
+        var sum = 0
+
+        for ((word, count) <- words) {
+            key = word
+            sum += count
+        }
+        out.collect((key, sum))
+}
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+This alternative WordCount implementation demonstrates how the GroupCombine
+combines words before performing the GroupReduce transformation. The example
+is just a proof of concept. Note how the combine step changes the type of the
+DataSet, which would normally require an additional Map transformation before
+executing the GroupReduce.
+
+### Aggregate on Grouped Tuple DataSet
+
+There are some aggregation operations that are commonly used. The Aggregate transformation provides the following built-in aggregation functions:
+
+- Sum,
+- Min, and
+- Max.
+
+The Aggregate transformation can only be applied on a Tuple DataSet and supports only field position keys for grouping.
+
+The following code shows how to apply an Aggregation transformation on a DataSet grouped by field position keys:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple3<Integer, String, Double>> input = // [...]
+DataSet<Tuple3<Integer, String, Double>> output = input
+                                   .groupBy(1)        // group DataSet on second field
+                                   .aggregate(SUM, 0) // compute sum of the first field
+                                   .and(MIN, 2);      // compute minimum of the third field
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[(Int, String, Double)] = // [...]
+val output = input.groupBy(1).aggregate(SUM, 0).and(MIN, 2)
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+from flink.functions.Aggregation import Sum, Min
+
+input = # [...]
+output = input.group_by(1).aggregate(Sum, 0).and_agg(Min, 2)
+~~~
+
+</div>
+</div>
+
+To apply multiple aggregations on a DataSet, the `.and()` function must be used after the first aggregate: `.aggregate(SUM, 0).and(MIN, 2)` produces the sum of field 0 and the minimum of field 2 of the original DataSet.
+In contrast, `.aggregate(SUM, 0).aggregate(MIN, 2)` applies an aggregation on an aggregation: it produces the minimum of field 2 after calculating the sum of field 0 grouped by field 1.
+
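+For illustration, a sketch of the two call chains side by side (using the `DataSet<Tuple3<Integer, String, Double>>` named `input` from the Java example above):
+
+~~~java
+// sum of field 0 and minimum of field 2, both over the original DataSet
+DataSet<Tuple3<Integer, String, Double>> multiAgg = input
+                                   .groupBy(1)
+                                   .aggregate(SUM, 0)
+                                   .and(MIN, 2);
+
+// minimum of field 2 computed on the result of the grouped sum aggregation
+DataSet<Tuple3<Integer, String, Double>> nestedAgg = input
+                                   .groupBy(1)
+                                   .aggregate(SUM, 0)
+                                   .aggregate(MIN, 2);
+~~~
+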
+**Note:** The set of aggregation functions will be extended in the future.
+
+### MinBy / MaxBy on Grouped Tuple DataSet
+
+The MinBy (MaxBy) transformation selects a single tuple for each group of tuples. The selected tuple is the one whose values in one or more specified fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary one of these tuples is returned.
+
+The following code shows how to select the tuple with the minimum values for the `Integer` and `Double` fields for each group of tuples with the same `String` value from a `DataSet<Tuple3<Integer, String, Double>>`:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple3<Integer, String, Double>> input = // [...]
+DataSet<Tuple3<Integer, String, Double>> output = input
+                                   .groupBy(1)   // group DataSet on second field
+                                   .minBy(0, 2); // select tuple with minimum values for first and third field.
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[(Int, String, Double)] = // [...]
+val output: DataSet[(Int, String, Double)] = input
+                                   .groupBy(1)  // group DataSet on second field
+                                   .minBy(0, 2) // select tuple with minimum values for first and third field.
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+### Reduce on full DataSet
+
+The Reduce transformation applies a user-defined reduce function to all elements of a DataSet.
+The reduce function subsequently combines pairs of elements into one element until only a single element remains.
+
+The following code shows how to sum all elements of an Integer DataSet:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// ReduceFunction that sums Integers
+public class IntSummer implements ReduceFunction<Integer> {
+  @Override
+  public Integer reduce(Integer num1, Integer num2) {
+    return num1 + num2;
+  }
+}
+
+// [...]
+DataSet<Integer> intNumbers = // [...]
+DataSet<Integer> sum = intNumbers.reduce(new IntSummer());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val intNumbers = env.fromElements(1,2,3)
+val sum = intNumbers.reduce (_ + _)
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ intNumbers = env.from_elements(1,2,3)
+ sum = intNumbers.reduce(lambda x,y: x + y)
+~~~
+
+</div>
+</div>
+
+Reducing a full DataSet using the Reduce transformation implies that the final Reduce operation cannot be done in parallel. However, a reduce function is automatically combinable such that a Reduce transformation does not limit scalability for most use cases.
+
+### GroupReduce on full DataSet
+
+The GroupReduce transformation applies a user-defined group-reduce function on all elements of a DataSet.
+A group-reduce function can iterate over all elements of the DataSet and return an arbitrary number of result elements.
+
+The following example shows how to apply a GroupReduce transformation on a full DataSet:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Integer> input = // [...]
+// apply a (preferably combinable) GroupReduceFunction to a DataSet
+DataSet<Double> output = input.reduceGroup(new MyGroupReducer());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[Int] = // [...]
+val output = input.reduceGroup(new MyGroupReducer())
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ output = data.reduce_group(MyGroupReducer())
+~~~
+
+</div>
+</div>
+
+**Note:** A GroupReduce transformation on a full DataSet cannot be done in parallel if the
+group-reduce function is not combinable. Therefore, this can be a very compute-intensive operation.
+See the paragraph on "Combinable GroupReduceFunctions" above to learn how to implement a
+combinable group-reduce function.
+
+### GroupCombine on a full DataSet
+
+The GroupCombine on a full DataSet works similarly to the GroupCombine on a
+grouped DataSet. The data is partitioned on all nodes and then combined in a
+greedy fashion (i.e. only data fitting into memory is combined at once).
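+
+A minimal sketch, reusing `MyCombinableGroupReducer` from the combinable GroupReduce example above as the `GroupCombineFunction` (any combine function with matching types would do):
+
+~~~java
+DataSet<Tuple2<String, Integer>> input = // [...]
+// greedy, partition-local pre-combination without a data exchange;
+// elements with the same key may end up in different calls, so the
+// result can still contain partial aggregates per key
+DataSet<Tuple2<String, Integer>> preCombined =
+    input.combineGroup(new MyCombinableGroupReducer());
+~~~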
+
+### Aggregate on full Tuple DataSet
+
+There are some aggregation operations that are commonly used. The Aggregate transformation
+provides the following built-in aggregation functions:
+
+- Sum,
+- Min, and
+- Max.
+
+The Aggregate transformation can only be applied on a Tuple DataSet.
+
+The following code shows how to apply an Aggregation transformation on a full DataSet:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<Integer, Double>> input = // [...]
+DataSet<Tuple2<Integer, Double>> output = input
+                                     .aggregate(SUM, 0)    // compute sum of the first field
+                                     .and(MIN, 1);    // compute minimum of the second field
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[(Int, String, Double)] = // [...]
+val output = input.aggregate(SUM, 0).and(MIN, 2)
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+from flink.functions.Aggregation import Sum, Min
+
+input = # [...]
+output = input.aggregate(Sum, 0).and_agg(Min, 2)
+~~~
+
+</div>
+</div>
+
+**Note:** Extending the set of supported aggregation functions is on our roadmap.
+
+### MinBy / MaxBy on full Tuple DataSet
+
+The MinBy (MaxBy) transformation selects a single tuple from a DataSet of tuples. The selected tuple is the one whose values in one or more specified fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary one of these tuples is returned.
+
+The following code shows how to select the tuple with the maximum values for the `Integer` and `Double` fields from a `DataSet<Tuple3<Integer, String, Double>>`:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple3<Integer, String, Double>> input = // [...]
+DataSet<Tuple3<Integer, String, Double>> output = input
+                                   .maxBy(0, 2); // select tuple with maximum values for first and third field.
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[(Int, String, Double)] = // [...]
+val output: DataSet[(Int, String, Double)] = input                          
+                                   .maxBy(0, 2) // select tuple with maximum values for first and third field.
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+### Distinct
+
+The Distinct transformation computes the DataSet of the distinct elements of the source DataSet.
+The following code removes all duplicate elements from the DataSet:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<Integer, Double>> input = // [...]
+DataSet<Tuple2<Integer, Double>> output = input.distinct();
+
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[(Int, String, Double)] = // [...]
+val output = input.distinct()
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+It is also possible to change how distinct elements are determined, using:
+
+- one or more field position keys (Tuple DataSets only),
+- a key-selector function, or
+- a key expression.
+
+#### Distinct with field position keys
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple3<Integer, Double, String>> input = // [...]
+DataSet<Tuple3<Integer, Double, String>> output = input.distinct(0,2);
+
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[(Int, Double, String)] = // [...]
+val output = input.distinct(0,2)
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+#### Distinct with KeySelector function
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+private static class AbsSelector implements KeySelector<Integer, Integer> {
+  private static final long serialVersionUID = 1L;
+
+  @Override
+  public Integer getKey(Integer t) {
+    return Math.abs(t);
+  }
+}
+DataSet<Integer> input = // [...]
+DataSet<Integer> output = input.distinct(new AbsSelector());
+
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[Int] = // [...]
+val output = input.distinct {x => Math.abs(x)}
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+#### Distinct with key expression
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// some ordinary POJO
+public class CustomType {
+  public String aName;
+  public int aNumber;
+  // [...]
+}
+
+DataSet<CustomType> input = // [...]
+DataSet<CustomType> output = input.distinct("aName", "aNumber");
+
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+// some ordinary POJO
+case class CustomType(aName : String, aNumber : Int) { }
+
+val input: DataSet[CustomType] = // [...]
+val output = input.distinct("aName", "aNumber")
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+It is also possible to select all fields using the wildcard character:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<CustomType> input = // [...]
+DataSet<CustomType> output = input.distinct("*");
+
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input: DataSet[CustomType] = // [...]
+val output = input.distinct("_")
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+### Join
+
+The Join transformation joins two DataSets into one DataSet. The elements of both DataSets are joined on one or more keys which can be specified using
+
+- a key expression,
+- a key-selector function,
+- one or more field position keys (Tuple DataSet only), or
+- case class fields.
+
+There are a few different ways to perform a Join transformation which are shown in the following.
+
+#### Default Join (Join into Tuple2)
+
+The default Join transformation produces a new Tuple DataSet with two fields. Each tuple holds a joined element of the first input DataSet in the first tuple field and a matching element of the second input DataSet in the second field.
+
+The following code shows a default Join transformation using field position keys:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+public static class User { public String name; public int zip; }
+public static class Store { public Manager mgr; public int zip; }
+DataSet<User> input1 = // [...]
+DataSet<Store> input2 = // [...]
+// result dataset is typed as Tuple2
+DataSet<Tuple2<User, Store>>
+            result = input1.join(input2)
+                           .where("zip")       // key of the first input (users)
+                           .equalTo("zip");    // key of the second input (stores)
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input1: DataSet[(Int, String)] = // [...]
+val input2: DataSet[(Double, Int)] = // [...]
+val result = input1.join(input2).where(0).equalTo(1)
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ result = input1.join(input2).where(0).equal_to(1)
+~~~
+
+</div>
+</div>
+
+#### Join with Join Function
+
+A Join transformation can also call a user-defined join function to process joining tuples.
+A join function receives one element of the first input DataSet and one element of the second input DataSet and returns exactly one element.
+
+The following code performs a join of a DataSet of custom Java objects with a Tuple DataSet using key expressions and shows how to use a user-defined join function:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// some POJO
+public class Rating {
+  public String name;
+  public String category;
+  public int points;
+}
+
+// Join function that joins a custom POJO with a Tuple
+public class PointWeighter
+         implements JoinFunction<Rating, Tuple2<String, Double>, Tuple2<String, Double>> {
+
+  @Override
+  public Tuple2<String, Double> join(Rating rating, Tuple2<String, Double> weight) {
+    // multiply the rating points by the weight and construct a new output tuple
+    return new Tuple2<String, Double>(rating.name, rating.points * weight.f1);
+  }
+}
+
+DataSet<Rating> ratings = // [...]
+DataSet<Tuple2<String, Double>> weights = // [...]
+DataSet<Tuple2<String, Double>>
+            weightedRatings =
+            ratings.join(weights)
+
+                   // key of the first input
+                   .where("category")
+
+                   // key of the second input
+                   .equalTo("f0")
+
+                   // applying the JoinFunction on joining pairs
+                   .with(new PointWeighter());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+case class Rating(name: String, category: String, points: Int)
+
+val ratings: DataSet[Rating] = // [...]
+val weights: DataSet[(String, Double)] = // [...]
+
+val weightedRatings = ratings.join(weights).where("category").equalTo(0) {
+  (rating, weight) => (rating.name, rating.points * weight._2)
+}
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ class PointWeighter(JoinFunction):
+   def join(self, rating, weight):
+     return (rating[0], rating[1] * weight[1])
+
+ weightedRatings = ratings.join(weights).where(0).equal_to(0) \
+   .using(PointWeighter())
+~~~
+
+</div>
+</div>
+
+#### Join with Flat-Join Function
+
+Analogous to Map and FlatMap, a FlatJoin behaves in the same way as a Join,
+but instead of returning one element, it can return (collect) zero, one, or
+more elements.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+public class PointWeighter
+         implements FlatJoinFunction<Rating, Tuple2<String, Double>, Tuple2<String, Double>> {
+  @Override
+  public void join(Rating rating, Tuple2<String, Double> weight,
+	  Collector<Tuple2<String, Double>> out) {
+	if (weight.f1 > 0.1) {
+		out.collect(new Tuple2<String, Double>(rating.name, rating.points * weight.f1));
+	}
+  }
+}
+
+DataSet<Tuple2<String, Double>>
+            weightedRatings =
+            ratings.join(weights) // [...]
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+case class Rating(name: String, category: String, points: Int)
+
+val ratings: DataSet[Rating] = // [...]
+val weights: DataSet[(String, Double)] = // [...]
+
+val weightedRatings = ratings.join(weights).where("category").equalTo(0) {
+  (rating, weight, out: Collector[(String, Double)]) =>
+    if (weight._2 > 0.1) out.collect(rating.name, rating.points * weight._2)
+}
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+Not supported.
+</div>
+</div>
+
+#### Join with Projection (Java/Python Only)
+
+A Join transformation can construct result tuples using a projection as shown here:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
+DataSet<Tuple2<Integer, Double>> input2 = // [...]
+DataSet<Tuple4<Integer, String, Double, Byte>>
+            result =
+            input1.join(input2)
+                  // key definition on first DataSet using a field position key
+                  .where(0)
+                  // key definition of second DataSet using a field position key
+                  .equalTo(0)
+                  // select and reorder fields of matching tuples
+                  .projectFirst(0,2).projectSecond(1).projectFirst(1);
+~~~
+
+`projectFirst(int...)` and `projectSecond(int...)` select the fields of the first and second joined input that should be assembled into an output Tuple. The order of indexes defines the order of fields in the output tuple.
+The join projection works also for non-Tuple DataSets. In this case, `projectFirst()` or `projectSecond()` must be called without arguments to add a joined element to the output Tuple.
+
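+For illustration, a minimal sketch of a projection join where the first input is the non-Tuple `Rating` POJO from the earlier example (the key fields are reused from that example):
+
+~~~java
+DataSet<Rating> ratings = // [...]
+DataSet<Tuple2<String, Double>> weights = // [...]
+
+// projectFirst() without arguments adds the whole Rating element to the output Tuple
+DataSet<Tuple2<Rating, Double>> result = ratings.join(weights)
+                  .where("category")
+                  .equalTo("f0")
+                  .projectFirst().projectSecond(1);
+~~~
+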
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+Not supported.
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ result = input1.join(input2).where(0).equal_to(0) \
+  .project_first(0,2).project_second(1).project_first(1)
+~~~
+
+`project_first(int...)` and `project_second(int...)` select the fields of the first and second joined input that should be assembled into an output Tuple. The order of indexes defines the order of fields in the output tuple.
+The join projection works also for non-Tuple DataSets. In this case, `project_first()` or `project_second()` must be called without arguments to add a joined element to the output Tuple.
+
+</div>
+</div>
+
+#### Join with DataSet Size Hint
+
+In order to guide the optimizer to pick the right execution strategy, you can hint the size of a DataSet to join as shown here:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<Integer, String>> input1 = // [...]
+DataSet<Tuple2<Integer, String>> input2 = // [...]
+
+DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, String>>>
+            result1 =
+            // hint that the second DataSet is very small
+            input1.joinWithTiny(input2)
+                  .where(0)
+                  .equalTo(0);
+
+DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, String>>>
+            result2 =
+            // hint that the second DataSet is very large
+            input1.joinWithHuge(input2)
+                  .where(0)
+                  .equalTo(0);
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input1: DataSet[(Int, String)] = // [...]
+val input2: DataSet[(Int, String)] = // [...]
+
+// hint that the second DataSet is very small
+val result1 = input1.joinWithTiny(input2).where(0).equalTo(0)
+
+// hint that the second DataSet is very large
+val result1 = input1.joinWithHuge(input2).where(0).equalTo(0)
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+
+ #hint that the second DataSet is very small
+ result1 = input1.join_with_tiny(input2).where(0).equal_to(0)
+
+ #hint that the second DataSet is very large
+ result1 = input1.join_with_huge(input2).where(0).equal_to(0)
+
+~~~
+
+</div>
+</div>
+
+#### Join Algorithm Hints
+
+The Flink runtime can execute joins in various ways. Each possible way outperforms the others under
+different circumstances. The system tries to pick a reasonable way automatically, but allows you
+to manually pick a strategy, in case you want to enforce a specific way of executing the join.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<SomeType> input1 = // [...]
+DataSet<AnotherType> input2 = // [...]
+
+DataSet<Tuple2<SomeType, AnotherType>> result =
+      input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
+            .where("id").equalTo("key");
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input1: DataSet[SomeType] = // [...]
+val input2: DataSet[AnotherType] = // [...]
+
+// hint to broadcast the first input, which is expected to be very small
+val result1 = input1.join(input2, JoinHint.BROADCAST_HASH_FIRST).where("id").equalTo("key")
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+The following hints are available:
+
+* `OPTIMIZER_CHOOSES`: Equivalent to not giving a hint at all, leaves the choice to the system.
+
+* `BROADCAST_HASH_FIRST`: Broadcasts the first input and builds a hash table from it, which is
+  probed by the second input. A good strategy if the first input is very small.
+
+* `BROADCAST_HASH_SECOND`: Broadcasts the second input and builds a hash table from it, which is
+  probed by the first input. A good strategy if the second input is very small.
+
+* `REPARTITION_HASH_FIRST`: The system partitions (shuffles) each input (unless the input is already
+  partitioned) and builds a hash table from the first input. This strategy is good if the first
+  input is smaller than the second, but both inputs are still large.
+  *Note:* This is the default fallback strategy that the system uses if no size estimates can be made
+  and no pre-existing partitions and sort-orders can be re-used.
+
+* `REPARTITION_HASH_SECOND`: The system partitions (shuffles) each input (unless the input is already
+  partitioned) and builds a hash table from the second input. This strategy is good if the second
+  input is smaller than the first, but both inputs are still large.
+
+* `REPARTITION_SORT_MERGE`: The system partitions (shuffles) each input (unless the input is already
+  partitioned) and sorts each input (unless it is already sorted). The inputs are joined by
+  a streamed merge of the sorted inputs. This strategy is good if one or both of the inputs are
+  already sorted.
+
+
+### OuterJoin
+
+The OuterJoin transformation performs a left, right, or full outer join on two DataSets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a `null` value for the other input) are given to a `JoinFunction` to turn the pair of elements into a single element, or to a `FlatJoinFunction` to turn the pair of elements into arbitrarily many (including none) elements.
+
+The elements of both DataSets are joined on one or more keys which can be specified using
+
+- a key expression,
+- a key-selector function,
+- one or more field position keys (Tuple DataSet only), or
+- case class fields.
+
+**OuterJoins are only supported for the Java and Scala DataSet API.**
+
+
+#### OuterJoin with Join Function
+
+An OuterJoin transformation calls a user-defined join function to process joining tuples.
+A join function receives one element of the first input DataSet and one element of the second input DataSet and returns exactly one element. Depending on the type of the outer join (left, right, full), one of the two input elements of the join function may be `null`.
+
+The following code performs a left outer join of a Tuple DataSet with a DataSet of custom Java objects using key expressions and shows how to use a user-defined join function:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// some POJO
+public class Rating {
+  public String name;
+  public String category;
+  public int points;
+}
+
+// Join function that joins a custom POJO with a Tuple
+public class PointAssigner
+         implements JoinFunction<Tuple2<String, String>, Rating, Tuple2<String, Integer>> {
+
+  @Override
+  public Tuple2<String, Integer> join(Tuple2<String, String> movie, Rating rating) {
+    // Assigns the rating points to the movie.
+    // NOTE: rating might be null
+    return new Tuple2<String, Integer>(movie.f0, rating == null ? -1 : rating.points);
+  }
+}
+
+DataSet<Tuple2<String, String>> movies = // [...]
+DataSet<Rating> ratings = // [...]
+DataSet<Tuple2<String, Integer>>
+            moviesWithPoints =
+            movies.leftOuterJoin(ratings)
+
+                   // key of the first input
+                   .where("f0")
+
+                   // key of the second input
+                   .equalTo("name")
+
+                   // applying the JoinFunction on joining pairs
+                   .with(new PointAssigner());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+case class Rating(name: String, category: String, points: Int)
+
+val movies: DataSet[(String, String)] = // [...]
+val ratings: DataSet[Rating] = // [...]
+
+val moviesWithPoints = movies.leftOuterJoin(ratings).where(0).equalTo("name") {
+  (movie, rating) => (movie._1, if (rating == null) -1 else rating.points)
+}
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+#### OuterJoin with Flat-Join Function
+
+Analogous to Map and FlatMap, an OuterJoin with a flat-join function behaves in
+the same way as an OuterJoin with a join function, but instead of returning one
+element, it can return (collect) zero, one, or more elements.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+public class PointAssigner
+         implements FlatJoinFunction<Tuple2<String, String>, Rating, Tuple2<String, Integer>> {
+
+  @Override
+  public void join(Tuple2<String, String> movie, Rating rating,
+                   Collector<Tuple2<String, Integer>> out) {
+    if (rating == null) {
+      out.collect(new Tuple2<String, Integer>(movie.f0, -1));
+    } else if (rating.points < 10) {
+      out.collect(new Tuple2<String, Integer>(movie.f0, rating.points));
+    } else {
+      // do not emit
+    }
+  }
+}
+
+DataSet<Tuple2<String, Integer>>
+            moviesWithPoints =
+            movies.leftOuterJoin(ratings) // [...]
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+Not supported.
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+#### Join Algorithm Hints
+
+The Flink runtime can execute outer joins in various ways. Each possible way outperforms the others under
+different circumstances. The system tries to pick a reasonable way automatically, but allows you
+to manually pick a strategy, in case you want to enforce a specific way of executing the outer join.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<SomeType> input1 = // [...]
+DataSet<AnotherType> input2 = // [...]
+
+DataSet<Tuple2<SomeType, AnotherType>> result1 =
+      input1.leftOuterJoin(input2, JoinHint.REPARTITION_SORT_MERGE)
+            .where("id").equalTo("key");
+
+DataSet<Tuple2<SomeType, AnotherType>> result2 =
+      input1.rightOuterJoin(input2, JoinHint.BROADCAST_HASH_FIRST)
+            .where("id").equalTo("key");
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input1: DataSet[SomeType] = // [...]
+val input2: DataSet[AnotherType] = // [...]
+
+// hint to use a repartition sort-merge strategy for the left outer join
+val result1 = input1.leftOuterJoin(input2, JoinHint.REPARTITION_SORT_MERGE).where("id").equalTo("key")
+
+val result2 = input1.rightOuterJoin(input2, JoinHint.BROADCAST_HASH_FIRST).where("id").equalTo("key")
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+The following hints are available:
+
+* `OPTIMIZER_CHOOSES`: Equivalent to not giving a hint at all, leaves the choice to the system.
+
+* `BROADCAST_HASH_FIRST`: Broadcasts the first input and builds a hash table from it, which is
+  probed by the second input. A good strategy if the first input is very small.
+
+* `BROADCAST_HASH_SECOND`: Broadcasts the second input and builds a hash table from it, which is
+  probed by the first input. A good strategy if the second input is very small.
+
+* `REPARTITION_HASH_FIRST`: The system partitions (shuffles) each input (unless the input is already
+  partitioned) and builds a hash table from the first input. This strategy is good if the first
+  input is smaller than the second, but both inputs are still large.
+
+* `REPARTITION_HASH_SECOND`: The system partitions (shuffles) each input (unless the input is already
+  partitioned) and builds a hash table from the second input. This strategy is good if the second
+  input is smaller than the first, but both inputs are still large.
+
+* `REPARTITION_SORT_MERGE`: The system partitions (shuffles) each input (unless the input is already
+  partitioned) and sorts each input (unless it is already sorted). The inputs are joined by
+  a streamed merge of the sorted inputs. This strategy is good if one or both of the inputs are
+  already sorted.
+
+**NOTE:** Not all execution strategies are supported by every outer join type yet.
+
+* `LeftOuterJoin` supports:
+  * `OPTIMIZER_CHOOSES`
+  * `BROADCAST_HASH_SECOND`
+  * `REPARTITION_HASH_SECOND`
+  * `REPARTITION_SORT_MERGE`
+
+* `RightOuterJoin` supports:
+  * `OPTIMIZER_CHOOSES`
+  * `BROADCAST_HASH_FIRST`
+  * `REPARTITION_HASH_FIRST`
+  * `REPARTITION_SORT_MERGE`
+
+* `FullOuterJoin` supports:
+  * `OPTIMIZER_CHOOSES`
+  * `REPARTITION_SORT_MERGE`
+
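+For instance, a full outer join can only be hinted with `REPARTITION_SORT_MERGE`. A minimal sketch, reusing the `SomeType`/`AnotherType` placeholders from above (`MyJoiner` and `ResultType` are illustrative placeholders):
+
+~~~java
+DataSet<SomeType> input1 = // [...]
+DataSet<AnotherType> input2 = // [...]
+
+// MyJoiner is a placeholder JoinFunction<SomeType, AnotherType, ResultType>
+DataSet<ResultType> result =
+      input1.fullOuterJoin(input2, JoinHint.REPARTITION_SORT_MERGE)
+            .where("id").equalTo("key")
+            .with(new MyJoiner());
+~~~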
+
+### Cross
+
+The Cross transformation combines two DataSets into one DataSet. It builds all pairwise combinations of the elements of both input DataSets, i.e., it builds a Cartesian product.
+The Cross transformation either calls a user-defined cross function on each pair of elements or outputs a Tuple2. Both modes are shown in the following.
+
+**Note:** Cross is potentially a *very* compute-intensive operation which can challenge even large compute clusters!
+
+#### Cross with User-Defined Function
+
+A Cross transformation can call a user-defined cross function. A cross function receives one element of the first input and one element of the second input and returns exactly one result element.
+
+The following code shows how to apply a Cross transformation on two DataSets using a cross function:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+public class Coord {
+  public int id;
+  public int x;
+  public int y;
+}
+
+// CrossFunction computes the Euclidean distance between two Coord objects.
+public class EuclideanDistComputer
+         implements CrossFunction<Coord, Coord, Tuple3<Integer, Integer, Double>> {
+
+  @Override
+  public Tuple3<Integer, Integer, Double> cross(Coord c1, Coord c2) {
+    // compute Euclidean distance of coordinates
+    double dist = Math.sqrt(Math.pow(c1.x - c2.x, 2) + Math.pow(c1.y - c2.y, 2));
+    return new Tuple3<Integer, Integer, Double>(c1.id, c2.id, dist);
+  }
+}
+
+DataSet<Coord> coords1 = // [...]
+DataSet<Coord> coords2 = // [...]
+DataSet<Tuple3<Integer, Integer, Double>>
+            distances =
+            coords1.cross(coords2)
+                   // apply CrossFunction
+                   .with(new EuclideanDistComputer());
+~~~
+
+#### Cross with Projection
+
+A Cross transformation can also construct result tuples using a projection as shown here:
+
+~~~java
+DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
+DataSet<Tuple2<Integer, Double>> input2 = // [...]
+DataSet<Tuple4<Integer, Byte, Integer, Double>>
+            result =
+            input1.cross(input2)
+                  // select and reorder fields of matching tuples
+                  .projectSecond(0).projectFirst(1,0).projectSecond(1);
+~~~
+
+The field selection in a Cross projection works the same way as in the projection of Join results.
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+case class Coord(id: Int, x: Int, y: Int)
+
+val coords1: DataSet[Coord] = // [...]
+val coords2: DataSet[Coord] = // [...]
+
+val distances = coords1.cross(coords2) {
+  (c1, c2) =>
+    val dist = math.sqrt(math.pow(c1.x - c2.x, 2) + math.pow(c1.y - c2.y, 2))
+    (c1.id, c2.id, dist)
+}
+~~~
+
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ from math import sqrt
+
+ class Euclid(CrossFunction):
+   def cross(self, c1, c2):
+     return (c1[0], c2[0], sqrt(pow(c1[1] - c2[1], 2) + pow(c1[2] - c2[2], 2)))
+
+ distances = coords1.cross(coords2).using(Euclid())
+~~~
+
+#### Cross with Projection
+
+A Cross transformation can also construct result tuples using a projection as shown here:
+
+~~~python
+result = input1.cross(input2).project_first(1,0).project_second(0,1)
+~~~
+
+The field selection in a Cross projection works the same way as in the projection of Join results.
+
+</div>
+</div>
+
+#### Cross with DataSet Size Hint
+
+In order to guide the optimizer to pick the right execution strategy, you can hint the size of a DataSet to cross as shown here:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<Integer, String>> input1 = // [...]
+DataSet<Tuple2<Integer, String>> input2 = // [...]
+
+DataSet<Tuple4<Integer, String, Integer, String>>
+            udfResult =
+                  // hint that the second DataSet is very small
+            input1.crossWithTiny(input2)
+                  // apply any Cross function (or projection)
+                  .with(new MyCrosser());
+
+DataSet<Tuple3<Integer, Integer, String>>
+            projectResult =
+                  // hint that the second DataSet is very large
+            input1.crossWithHuge(input2)
+                  // apply a projection (or any Cross function)
+                  .projectFirst(0,1).projectSecond(1);
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val input1: DataSet[(Int, String)] = // [...]
+val input2: DataSet[(Int, String)] = // [...]
+
+// hint that the second DataSet is very small
+val result1 = input1.crossWithTiny(input2)
+
+// hint that the second DataSet is very large
+val result1 = input1.crossWithHuge(input2)
+
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ #hint that the second DataSet is very small
+ result1 = input1.cross_with_tiny(input2)
+
+ #hint that the second DataSet is very large
+ result1 = input1.cross_with_huge(input2)
+
+~~~
+
+</div>
+</div>
+
+### CoGroup
+
+The CoGroup transformation jointly processes groups of two DataSets. Both DataSets are grouped on a defined key and groups of both DataSets that share the same key are handed together to a user-defined co-group function. If for a specific key only one DataSet has a group, the co-group function is called with this group and an empty group.
+A co-group function can separately iterate over the elements of both groups and return an arbitrary number of result elements.
+
+Similar to Reduce, GroupReduce, and Join, keys can be defined using the different key-selection methods.
+
+#### CoGroup on DataSets
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+The example shows how to group by field position keys (Tuple DataSets only). You can do the same with POJO types and key expressions.
+
+~~~java
+// Some CoGroupFunction definition
+class MyCoGrouper
+         implements CoGroupFunction<Tuple2<String, Integer>, Tuple2<String, Double>, Double> {
+
+  @Override
+  public void coGroup(Iterable<Tuple2<String, Integer>> iVals,
+                      Iterable<Tuple2<String, Double>> dVals,
+                      Collector<Double> out) {
+
+    Set<Integer> ints = new HashSet<Integer>();
+
+    // add all Integer values in group to set
+    for (Tuple2<String, Integer> val : iVals) {
+      ints.add(val.f1);
+    }
+
+    // multiply each Double value with each unique Integer value of the group
+    for (Tuple2<String, Double> val : dVals) {
+      for (Integer i : ints) {
+        out.collect(val.f1 * i);
+      }
+    }
+  }
+}
+
+// [...]
+DataSet<Tuple2<String, Integer>> iVals = // [...]
+DataSet<Tuple2<String, Double>> dVals = // [...]
+DataSet<Double> output = iVals.coGroup(dVals)
+                         // group first DataSet on first tuple field
+                         .where(0)
+                         // group second DataSet on first tuple field
+                         .equalTo(0)
+                         // apply CoGroup function on each pair of groups
+                         .with(new MyCoGrouper());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val iVals: DataSet[(String, Int)] = // [...]
+val dVals: DataSet[(String, Double)] = // [...]
+
+val output = iVals.coGroup(dVals).where(0).equalTo(0) {
+  (iVals, dVals, out: Collector[Double]) =>
+    val ints = iVals.map(_._2).toSet
+
+    for (dVal <- dVals) {
+      for (i <- ints) {
+        out.collect(dVal._2 * i)
+      }
+    }
+}
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ class CoGroup(CoGroupFunction):
+   def co_group(self, ivals, dvals, collector):
+     ints = dict()
+     # add all Integer values in group to set
+     for value in ivals:
+       ints[value[1]] = 1
+     # multiply each Double value with each unique Integer value of the group
+     for value in dvals:
+       for i in ints.keys():
+         collector.collect(value[1] * i)
+
+
+ output = ivals.co_group(dvals).where(0).equal_to(0).using(CoGroup())
+~~~
+
+</div>
+</div>
+
+
+### Union
+
+Produces the union of two DataSets, which have to be of the same type. A union of more than two DataSets can be implemented with multiple union calls, as shown here:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<String, Integer>> vals1 = // [...]
+DataSet<Tuple2<String, Integer>> vals2 = // [...]
+DataSet<Tuple2<String, Integer>> vals3 = // [...]
+DataSet<Tuple2<String, Integer>> unioned = vals1.union(vals2).union(vals3);
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val vals1: DataSet[(String, Int)] = // [...]
+val vals2: DataSet[(String, Int)] = // [...]
+val vals3: DataSet[(String, Int)] = // [...]
+
+val unioned = vals1.union(vals2).union(vals3)
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+ unioned = vals1.union(vals2).union(vals3)
+~~~
+
+</div>
+</div>
+
+### Rebalance
+Evenly rebalances the parallel partitions of a DataSet to eliminate data skew.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<String> in = // [...]
+// rebalance DataSet and apply a Map transformation.
+DataSet<Tuple2<String, String>> out = in.rebalance()
+                                        .map(new Mapper());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val in: DataSet[String] = // [...]
+// rebalance DataSet and apply a Map transformation.
+val out = in.rebalance().map { ... }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+
+### Hash-Partition
+
+Hash-partitions a DataSet on a given key.
+Keys can be specified as position keys, expression keys, and key selector functions (see [Reduce examples](#reduce-on-grouped-dataset) for how to specify keys).
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<String, Integer>> in = // [...]
+// hash-partition DataSet by String value and apply a MapPartition transformation.
+DataSet<Tuple2<String, String>> out = in.partitionByHash(0)
+                                        .mapPartition(new PartitionMapper());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val in: DataSet[(String, Int)] = // [...]
+// hash-partition DataSet by String value and apply a MapPartition transformation.
+val out = in.partitionByHash(0).mapPartition { ... }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+### Range-Partition
+
+Range-partitions a DataSet on a given key.
+Keys can be specified as position keys, expression keys, and key selector functions (see [Reduce examples](#reduce-on-grouped-dataset) for how to specify keys).
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<String, Integer>> in = // [...]
+// range-partition DataSet by String value and apply a MapPartition transformation.
+DataSet<Tuple2<String, String>> out = in.partitionByRange(0)
+                                        .mapPartition(new PartitionMapper());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val in: DataSet[(String, Int)] = // [...]
+// range-partition DataSet by String value and apply a MapPartition transformation.
+val out = in.partitionByRange(0).mapPartition { ... }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+
+### Sort Partition
+
+Locally sorts all partitions of a DataSet on a specified field in a specified order.
+Fields can be specified as field expressions or field positions (see [Reduce examples](#reduce-on-grouped-dataset) for how to specify keys).
+Partitions can be sorted on multiple fields by chaining `sortPartition()` calls.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<String, Integer>> in = // [...]
+// Locally sort partitions in ascending order on the second String field and
+// in descending order on the first String field.
+// Apply a MapPartition transformation on the sorted partitions.
+DataSet<Tuple2<String, String>> out = in.sortPartition(1, Order.ASCENDING)
+                                        .sortPartition(0, Order.DESCENDING)
+                                        .mapPartition(new PartitionMapper());
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val in: DataSet[(String, Int)] = // [...]
+// Locally sort partitions in ascending order on the second String field and
+// in descending order on the first String field.
+// Apply a MapPartition transformation on the sorted partitions.
+val out = in.sortPartition(1, Order.ASCENDING)
+            .sortPartition(0, Order.DESCENDING)
+            .mapPartition { ... }
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>
+
+### First-n
+
+Returns the first n (arbitrary) elements of a DataSet. First-n can be applied on a regular DataSet, a grouped DataSet, or a grouped-sorted DataSet. Grouping keys can be specified as key-selector functions or field position keys (see [Reduce examples](#reduce-on-grouped-dataset) for how to specify keys).
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+DataSet<Tuple2<String, Integer>> in = // [...]
+// Return the first five (arbitrary) elements of the DataSet
+DataSet<Tuple2<String, Integer>> out1 = in.first(5);
+
+// Return the first two (arbitrary) elements of each String group
+DataSet<Tuple2<String, Integer>> out2 = in.groupBy(0)
+                                          .first(2);
+
+// Return the first three elements of each String group ordered by the Integer field
+DataSet<Tuple2<String, Integer>> out3 = in.groupBy(0)
+                                          .sortGroup(1, Order.ASCENDING)
+                                          .first(3);
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val in: DataSet[(String, Int)] = // [...]
+// Return the first five (arbitrary) elements of the DataSet
+val out1 = in.first(5)
+
+// Return the first two (arbitrary) elements of each String group
+val out2 = in.groupBy(0).first(2)
+
+// Return the first three elements of each String group ordered by the Integer field
+val out3 = in.groupBy(0).sortGroup(1, Order.ASCENDING).first(3)
+~~~
+
+</div>
+<div data-lang="python" markdown="1">
+
+~~~python
+Not supported.
+~~~
+
+</div>
+</div>


[11/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/table_api.md
----------------------------------------------------------------------
diff --git a/docs/dev/table_api.md b/docs/dev/table_api.md
new file mode 100644
index 0000000..cdd3667
--- /dev/null
+++ b/docs/dev/table_api.md
@@ -0,0 +1,2079 @@
+---
+title: "Table and SQL"
+is_beta: true
+nav-parent_id: apis
+nav-pos: 3
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+**Table API and SQL are experimental features**
+
+The Table API is a SQL-like expression language for relational stream and batch processing that can be easily embedded in Flink's DataSet and DataStream APIs (Java and Scala).
+The Table API and SQL interface operate on a relational `Table` abstraction, which can be created from external data sources, or existing DataSets and DataStreams. With the Table API, you can apply relational operators such as selection, aggregation, and joins on `Table`s.
+
+`Table`s can also be queried with regular SQL, as long as they are registered (see [Registering Tables](#registering-tables)). The Table API and SQL offer equivalent functionality and can be mixed in the same program. When a `Table` is converted back into a `DataSet` or `DataStream`, the logical plan, which was defined by relational operators and SQL queries, is optimized using [Apache Calcite](https://calcite.apache.org/) and transformed into a `DataSet` or `DataStream` program.
+
+* This will be replaced by the TOC
+{:toc}
+
+Using the Table API and SQL
+----------------------------
+
+The Table API and SQL are part of the *flink-table* Maven project.
+The following dependency must be added to your project in order to use the Table API and SQL:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+*Note: The Table API is currently not part of the binary distribution. See linking with it for cluster execution [here]({{ site.baseurl }}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).*
+
+
+Registering Tables
+--------------------------------
+
+`TableEnvironment`s have an internal table catalog to which tables can be registered with a unique name. After registration, a table can be accessed from the `TableEnvironment` by its name.
+
+*Note: `DataSet`s or `DataStream`s can be directly converted into `Table`s without registering them in the `TableEnvironment`.*
+
+### Register a DataSet
+
+A `DataSet` is registered as a `Table` in a `BatchTableEnvironment` as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// register the DataSet cust as table "Customers" with fields derived from the dataset
+tableEnv.registerDataSet("Customers", cust)
+
+// register the DataSet ord as table "Orders" with fields user, product, and amount
+tableEnv.registerDataSet("Orders", ord, "user, product, amount");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tableEnv = TableEnvironment.getTableEnvironment(env)
+
+// register the DataSet cust as table "Customers" with fields derived from the dataset
+tableEnv.registerDataSet("Customers", cust)
+
+// register the DataSet ord as table "Orders" with fields user, product, and amount
+tableEnv.registerDataSet("Orders", ord, 'user, 'product, 'amount)
+{% endhighlight %}
+</div>
+</div>
+
+*Note: The name of a `DataSet` `Table` must not match the `^_DataSetTable_[0-9]+` pattern which is reserved for internal use only.*
+
+### Register a DataStream
+
+A `DataStream` is registered as a `Table` in a `StreamTableEnvironment` as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// register the DataStream cust as table "Customers" with fields derived from the datastream
+tableEnv.registerDataStream("Customers", cust)
+
+// register the DataStream ord as table "Orders" with fields user, product, and amount
+tableEnv.registerDataStream("Orders", ord, "user, product, amount");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+val tableEnv = TableEnvironment.getTableEnvironment(env)
+
+// register the DataStream cust as table "Customers" with fields derived from the datastream
+tableEnv.registerDataStream("Customers", cust)
+
+// register the DataStream ord as table "Orders" with fields user, product, and amount
+tableEnv.registerDataStream("Orders", ord, 'user, 'product, 'amount)
+{% endhighlight %}
+</div>
+</div>
+
+*Note: The name of a `DataStream` `Table` must not match the `^_DataStreamTable_[0-9]+` pattern which is reserved for internal use only.*
+
+### Register a Table
+
+A `Table` that originates from a Table API operation or a SQL query is registered in a `TableEnvironment` as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// works for StreamExecutionEnvironment identically
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// convert a DataSet into a Table
+Table custT = tableEnv
+  .fromDataSet(custDs, "name, zipcode")
+  .where("zipcode = '12345'")
+  .select("name");
+
+// register the Table custT as table "custNames"
+tableEnv.registerTable("custNames", custT);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// works for StreamExecutionEnvironment identically
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tableEnv = TableEnvironment.getTableEnvironment(env)
+
+// convert a DataSet into a Table
+val custT = custDs
+  .toTable(tableEnv, 'name, 'zipcode)
+  .where('zipcode === "12345")
+  .select('name)
+
+// register the Table custT as table "custNames"
+tableEnv.registerTable("custNames", custT)
+{% endhighlight %}
+</div>
+</div>
+
+A registered `Table` that originates from a Table API operation or SQL query is treated similarly as a view as known from relational DBMS, i.e., it can be inlined when optimizing the query.
+
+### Register an external Table using a TableSource
+
+An external table is registered in a `TableEnvironment` using a `TableSource` as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// works for StreamExecutionEnvironment identically
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+TableSource custTS = new CsvTableSource("/path/to/file", ...);
+
+// register a `TableSource` as external table "Customers"
+tableEnv.registerTableSource("Customers", custTS);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// works for StreamExecutionEnvironment identically
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tableEnv = TableEnvironment.getTableEnvironment(env)
+
+val custTS: TableSource = new CsvTableSource("/path/to/file", ...)
+
+// register a `TableSource` as external table "Customers"
+tableEnv.registerTableSource("Customers", custTS)
+
+{% endhighlight %}
+</div>
+</div>
+
+A `TableSource` can provide access to data stored in various storage systems such as databases (MySQL, HBase, ...), file formats (CSV, Apache Parquet, Avro, ORC, ...), or messaging systems (Apache Kafka, RabbitMQ, ...).
+
+Currently, Flink provides the `CsvTableSource` to read CSV files and the `Kafka08JsonTableSource`/`Kafka09JsonTableSource` to read JSON objects from Kafka.
+A custom `TableSource` can be defined by implementing the `BatchTableSource` or `StreamTableSource` interface.
+
+### Available Table Sources
+
+| **Class name** | **Maven dependency** | **Batch?** | **Streaming?** | **Description** |
+| -------------- | -------------------- | ---------- | -------------- | --------------- |
+| `CsvTableSource` | `flink-table` | Y | Y | A simple source for CSV files. |
+| `Kafka08JsonTableSource` | `flink-connector-kafka-0.8` | N | Y | A Kafka 0.8 source for JSON data. |
+| `Kafka09JsonTableSource` | `flink-connector-kafka-0.9` | N | Y | A Kafka 0.9 source for JSON data. |
+
+All sources that come with the `flink-table` dependency can be directly used by your Table programs. For all other table sources, you have to add the respective dependency in addition to the `flink-table` dependency.
+
+#### KafkaJsonTableSource
+
+To use the Kafka JSON source, you have to add the Kafka connector dependency to your project:
+
+  - `flink-connector-kafka-0.8` for Kafka 0.8, and
+  - `flink-connector-kafka-0.9` for Kafka 0.9, respectively.
+
+You can then create the source as follows (example for Kafka 0.8):
+
+```java
+// The JSON field names and types
+String[] fieldNames = new String[] { "id", "name", "score" };
+Class<?>[] fieldTypes = new Class<?>[] { Integer.class, String.class, Double.class };
+
+KafkaJsonTableSource kafkaTableSource = new Kafka08JsonTableSource(
+    kafkaTopic,
+    kafkaProperties,
+    fieldNames,
+    fieldTypes);
+```
+
+By default, a missing JSON field does not fail the source. You can configure this via:
+
+```java
+// Fail on missing JSON field
+tableSource.setFailOnMissingField(true);
+```
+
+You can work with the Table as explained in the rest of the Table API guide:
+
+```java
+tableEnvironment.registerTableSource("kafka-source", kafkaTableSource);
+Table result = tableEnvironment.ingest("kafka-source");
+```
+
+#### CsvTableSource
+
+The `CsvTableSource` is already included in `flink-table` without additional dependencies.
+
+It can be configured with the following properties:
+
+ - `path` The path to the CSV file, required.
+ - `fieldNames` The names of the table fields, required.
+ - `fieldTypes` The types of the table fields, required.
+ - `fieldDelim` The field delimiter, `","` by default.
+ - `rowDelim` The row delimiter, `"\n"` by default.
+ - `quoteCharacter` An optional quote character for String values, `null` by default.
+ - `ignoreFirstLine` Flag to ignore the first line, `false` by default.
+ - `ignoreComments` An optional prefix to indicate comments, `null` by default.
+ - `lenient` Flag to skip records with parse errors instead of failing, `false` by default.
+
+You can create the source as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+CsvTableSource csvTableSource = new CsvTableSource(
+    "/path/to/your/file.csv",
+    new String[] { "name", "id", "score", "comments" },
+    new TypeInformation<?>[] {
+      Types.STRING(),
+      Types.INT(),
+      Types.DOUBLE(),
+      Types.STRING()
+    },
+    "#",    // fieldDelim
+    "$",    // rowDelim
+    null,   // quoteCharacter
+    true,   // ignoreFirstLine
+    "%",    // ignoreComments
+    false); // lenient
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val csvTableSource = new CsvTableSource(
+    "/path/to/your/file.csv",
+    Array("name", "id", "score", "comments"),
+    Array(
+      Types.STRING,
+      Types.INT,
+      Types.DOUBLE,
+      Types.STRING
+    ),
+    fieldDelim = "#",
+    rowDelim = "$",
+    ignoreFirstLine = true,
+    ignoreComments = "%")
+{% endhighlight %}
+</div>
+</div>
+
+You can work with the Table as explained in the rest of the Table API guide in both stream and batch `TableEnvironment`s:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+tableEnvironment.registerTableSource("mycsv", csvTableSource);
+
+Table streamTable = streamTableEnvironment.ingest("mycsv");
+
+Table batchTable = batchTableEnvironment.scan("mycsv");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+tableEnvironment.registerTableSource("mycsv", csvTableSource)
+
+val streamTable = streamTableEnvironment.ingest("mycsv")
+
+val batchTable = batchTableEnvironment.scan("mycsv")
+{% endhighlight %}
+</div>
+</div>
+
+
+Table API
+----------
+The Table API provides methods to apply relational operations on DataSets and DataStreams, in both Scala and Java.
+
+The central concept of the Table API is a `Table` which represents a table with relational schema (or relation). Tables can be created from a `DataSet` or `DataStream`, converted into a `DataSet` or `DataStream`, or registered in a table catalog using a `TableEnvironment`. A `Table` is always bound to a specific `TableEnvironment`. It is not possible to combine Tables of different TableEnvironments.
+
+*Note: The only operations currently supported on streaming Tables are selection, projection, and union.*
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+When using Flink's Java DataSet API, DataSets are converted to Tables and Tables to DataSets using a `TableEnvironment`.
+The following example shows:
+
+- how a `DataSet` is converted to a `Table`,
+- how relational queries are specified, and
+- how a `Table` is converted back to a `DataSet`.
+
+{% highlight java %}
+public class WC {
+
+  public WC(String word, int count) {
+    this.word = word; this.count = count;
+  }
+
+  public WC() {} // empty constructor to satisfy POJO requirements
+
+  public String word;
+  public int count;
+}
+
+...
+
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+BatchTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
+
+DataSet<WC> input = env.fromElements(
+        new WC("Hello", 1),
+        new WC("Ciao", 1),
+        new WC("Hello", 1));
+
+Table table = tEnv.fromDataSet(input);
+
+Table wordCounts = table
+        .groupBy("word")
+        .select("word, count.sum as count");
+
+DataSet<WC> result = tEnv.toDataSet(wordCounts, WC.class);
+{% endhighlight %}
+
+In Java, expressions must be specified as Strings; the embedded expression DSL is not supported. Field names are specified the same way, as in the registration example:
+
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// register the DataSet cust as table "Customers" with fields derived from the dataset
+tableEnv.registerDataSet("Customers", cust)
+
+// register the DataSet ord as table "Orders" with fields user, product, and amount
+tableEnv.registerDataSet("Orders", ord, "user, product, amount");
+{% endhighlight %}
+
+Please refer to the Javadoc for a full list of supported operations and a description of the expression syntax.
+</div>
+
+<div data-lang="scala" markdown="1">
+The Table API is enabled by importing `org.apache.flink.api.scala.table._`. This enables
+implicit conversions to convert a `DataSet` or `DataStream` to a Table. The following example shows:
+
+- how a `DataSet` is converted to a `Table`,
+- how relational queries are specified, and
+- how a `Table` is converted back to a `DataSet`.
+
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.table._
+
+case class WC(word: String, count: Int)
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tEnv = TableEnvironment.getTableEnvironment(env)
+
+val input = env.fromElements(WC("hello", 1), WC("hello", 1), WC("ciao", 1))
+val expr = input.toTable(tEnv)
+val result = expr
+               .groupBy('word)
+               .select('word, 'count.sum as 'count)
+               .toDataSet[WC]
+{% endhighlight %}
+
+The expression DSL uses Scala symbols to refer to field names and code generation to
+transform expressions to efficient runtime code. Please note that the conversion to and from
+Tables only works when using Scala case classes or Java POJOs. Please refer to the [Type Extraction and Serialization]({{ site.baseurl }}/internals/types_serialization.html) section
+to learn the characteristics of a valid POJO.
+
+Another example shows how to join two Tables:
+
+{% highlight scala %}
+case class MyResult(a: String, d: Int)
+
+val input1 = env.fromElements(...).toTable(tEnv).as('a, 'b)
+val input2 = env.fromElements(...).toTable(tEnv, 'c, 'd)
+
+val joined = input1.join(input2)
+               .where("a = c && d > 42")
+               .select("a, d")
+               .toDataSet[MyResult]
+{% endhighlight %}
+
+Notice how the field names of a Table can be changed with `as()` or specified with `toTable()` when converting a DataSet to a Table. In addition, the example shows how to use Strings to specify relational expressions.
+
+Creating a `Table` from a `DataStream` works in a similar way.
+The following example shows how to convert a `DataStream` to a `Table` and filter it with the Table API.
+
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.table._
+
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+val tEnv = TableEnvironment.getTableEnvironment(env)
+
+val inputStream = env.addSource(...)
+val result = inputStream
+                .toTable(tEnv, 'a, 'b, 'c)
+                .filter('a === 3)
+val resultStream = result.toDataStream[Row]
+{% endhighlight %}
+
+Please refer to the Scaladoc for a full list of supported operations and a description of the expression syntax.
+</div>
+</div>
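+
+For comparison with the Scala streaming example above, here is a minimal Java sketch of the same conversion. It assumes a hypothetical `Tuple3` source and that `fromDataStream` and `toDataStream` mirror the batch `fromDataSet`/`toDataSet` methods:
+
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
+
+// hypothetical source producing (Integer, String, Integer) records
+DataStream<Tuple3<Integer, String, Integer>> inputStream = env.addSource(...);
+
+// convert the DataStream to a Table with fields a, b, c and filter it
+Table result = tEnv.fromDataStream(inputStream, "a, b, c")
+                   .filter("a = 3");
+
+// convert the filtered Table back into a DataStream of Rows
+DataStream<Row> resultStream = tEnv.toDataStream(result, Row.class);
+{% endhighlight %}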
+
+{% top %}
+
+
+### Access a registered Table
+
+A registered table can be accessed from a `TableEnvironment` as follows:
+
+- `tEnv.scan("tName")` scans a `Table` that was registered as `"tName"` in a `BatchTableEnvironment`.
+- `tEnv.ingest("tName")` ingests a `Table` that was registered as `"tName"` in a `StreamTableEnvironment`.
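+
+For example, a minimal sketch that assumes the tables "Customers" and "Orders" were registered in batch and streaming environments as shown in the registration section:
+
+{% highlight java %}
+// scan the registered batch table "Customers"
+Table customers = batchTableEnv.scan("Customers");
+
+// ingest the registered streaming table "Orders"
+Table orders = streamTableEnv.ingest("Orders");
+{% endhighlight %}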
+
+{% top %}
+
+### Table API Operators
+
+The Table API features a domain-specific language to execute language-integrated queries on structured data in Scala and Java.
+This section gives a brief overview of the available operators. You can find more details of operators in the [Javadoc]({{site.baseurl}}/api/java/org/apache/flink/api/table/Table.html).
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Operators</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>Select</strong></td>
+      <td>
+        <p>Similar to a SQL SELECT statement. Performs a select operation.</p>
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.select("a, c as d");
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>As</strong></td>
+      <td>
+        <p>Renames fields.</p>
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.as("d, e, f");
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Where / Filter</strong></td>
+      <td>
+        <p>Similar to a SQL WHERE clause. Filters out rows that do not pass the filter predicate.</p>
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.where("b = 'red'");
+{% endhighlight %}
+or
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.filter("a % 2 = 0");
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>GroupBy</strong></td>
+      <td>
+        <p>Similar to a SQL GROUP BY clause. Groups the rows on the grouping keys, with a following aggregation
+        operator to aggregate rows group-wise.</p>
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.groupBy("a").select("a, b.sum as d");
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Join</strong></td>
+      <td>
+        <p>Similar to a SQL JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined through the join operator or using a where or filter operator.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "d, e, f");
+Table result = left.join(right).where("a = d").select("a, b, e");
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>LeftOuterJoin</strong></td>
+      <td>
+        <p>Similar to a SQL LEFT OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "d, e, f");
+Table result = left.leftOuterJoin(right, "a = d").select("a, b, e");
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>RightOuterJoin</strong></td>
+      <td>
+        <p>Similar to a SQL RIGHT OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "d, e, f");
+Table result = left.rightOuterJoin(right, "a = d").select("a, b, e");
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>FullOuterJoin</strong></td>
+      <td>
+        <p>Similar to a SQL FULL OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "d, e, f");
+Table result = left.fullOuterJoin(right, "a = d").select("a, b, e");
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Union</strong></td>
+      <td>
+        <p>Similar to a SQL UNION clause. Unions two tables with duplicate records removed. Both tables must have identical field types.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "a, b, c");
+Table result = left.union(right);
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>UnionAll</strong></td>
+      <td>
+        <p>Similar to a SQL UNION ALL clause. Unions two tables. Both tables must have identical field types.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "a, b, c");
+Table result = left.unionAll(right);
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Intersect</strong></td>
+      <td>
+        <p>Similar to a SQL INTERSECT clause. Intersect returns records that exist in both tables. If a record is present in one or both tables more than once, it is returned just once, i.e., the resulting table has no duplicate records. Both tables must have identical field types.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "d, e, f");
+Table result = left.intersect(right);
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>IntersectAll</strong></td>
+      <td>
+        <p>Similar to a SQL INTERSECT ALL clause. IntersectAll returns records that exist in both tables. If a record is present in both tables more than once, it is returned as many times as it is present in both tables, i.e., the resulting table might have duplicate records. Both tables must have identical field types.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "d, e, f");
+Table result = left.intersectAll(right);
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Minus</strong></td>
+      <td>
+        <p>Similar to a SQL EXCEPT clause. Minus returns records from the left table that do not exist in the right table. Duplicate records in the left table are returned exactly once, i.e., duplicates are removed. Both tables must have identical field types.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "a, b, c");
+Table result = left.minus(right);
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>MinusAll</strong></td>
+      <td>
+        <p>Similar to a SQL EXCEPT ALL clause. MinusAll returns the records that do not exist in the right table. A record that is present n times in the left table and m times in the right table is returned (n - m) times, i.e., as many duplicates as are present in the right table are removed. Both tables must have identical field types.</p>
+{% highlight java %}
+Table left = tableEnv.fromDataSet(ds1, "a, b, c");
+Table right = tableEnv.fromDataSet(ds2, "a, b, c");
+Table result = left.minusAll(right);
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Distinct</strong></td>
+      <td>
+        <p>Similar to a SQL DISTINCT clause. Returns records with distinct value combinations.</p>
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.distinct();
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Order By</strong></td>
+      <td>
+        <p>Similar to a SQL ORDER BY clause. Returns records globally sorted across all parallel partitions.</p>
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.orderBy("a.asc");
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Limit</strong></td>
+      <td>
+        <p>Similar to a SQL LIMIT clause. Limits a sorted result to a specified number of records from an offset position. Limit is technically part of the Order By operator and thus must be preceded by it.</p>
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.orderBy("a.asc").limit(3); // returns unlimited number of records beginning with the 4th record
+{% endhighlight %}
+or
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.orderBy("a.asc").limit(3, 5); // returns 5 records beginning with the 4th record
+{% endhighlight %}
+      </td>
+    </tr>
+
+  </tbody>
+</table>
+
+</div>
+<div data-lang="scala" markdown="1">
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Operators</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>Select</strong></td>
+      <td>
+        <p>Similar to a SQL SELECT statement. Performs a select operation.</p>
+{% highlight scala %}
+val in = ds.toTable(tableEnv, 'a, 'b, 'c)
+val result = in.select('a, 'c as 'd)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>As</strong></td>
+      <td>
+        <p>Renames fields.</p>
+{% highlight scala %}
+val in = ds.toTable(tableEnv).as('a, 'b, 'c)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Where / Filter</strong></td>
+      <td>
+        <p>Similar to a SQL WHERE clause. Filters out rows that do not pass the filter predicate.</p>
+{% highlight scala %}
+val in = ds.toTable(tableEnv, 'a, 'b, 'c)
+val result = in.filter('a % 2 === 0)
+{% endhighlight %}
+or
+{% highlight scala %}
+val in = ds.toTable(tableEnv, 'a, 'b, 'c)
+val result = in.where('b === "red")
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>GroupBy</strong></td>
+      <td>
+        <p>Similar to a SQL GROUP BY clause. Groups rows on the grouping keys, with a following aggregation
+        operator to aggregate rows group-wise.</p>
+{% highlight scala %}
+val in = ds.toTable(tableEnv, 'a, 'b, 'c)
+val result = in.groupBy('a).select('a, 'b.sum as 'd)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Join</strong></td>
+      <td>
+        <p>Similar to a SQL JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined through the join operator or using a where or filter operator.</p>
+{% highlight scala %}
+val left = ds1.toTable(tableEnv, 'a, 'b, 'c)
+val right = ds2.toTable(tableEnv, 'd, 'e, 'f)
+val result = left.join(right).where('a === 'd).select('a, 'b, 'e)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>LeftOuterJoin</strong></td>
+      <td>
+        <p>Similar to a SQL LEFT OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
+{% highlight scala %}
+val left = tableEnv.fromDataSet(ds1, 'a, 'b, 'c)
+val right = tableEnv.fromDataSet(ds2, 'd, 'e, 'f)
+val result = left.leftOuterJoin(right, 'a === 'd).select('a, 'b, 'e)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>RightOuterJoin</strong></td>
+      <td>
+        <p>Similar to a SQL RIGHT OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
+{% highlight scala %}
+val left = tableEnv.fromDataSet(ds1, 'a, 'b, 'c)
+val right = tableEnv.fromDataSet(ds2, 'd, 'e, 'f)
+val result = left.rightOuterJoin(right, 'a === 'd).select('a, 'b, 'e)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>FullOuterJoin</strong></td>
+      <td>
+        <p>Similar to a SQL FULL OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
+{% highlight scala %}
+val left = tableEnv.fromDataSet(ds1, 'a, 'b, 'c)
+val right = tableEnv.fromDataSet(ds2, 'd, 'e, 'f)
+val result = left.fullOuterJoin(right, 'a === 'd).select('a, 'b, 'e)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Union</strong></td>
+      <td>
+        <p>Similar to a SQL UNION clause. Unions two tables with duplicate records removed. Both tables must have identical field types.</p>
+{% highlight scala %}
+val left = ds1.toTable(tableEnv, 'a, 'b, 'c)
+val right = ds2.toTable(tableEnv, 'a, 'b, 'c)
+val result = left.union(right)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>UnionAll</strong></td>
+      <td>
+        <p>Similar to a SQL UNION ALL clause. Unions two tables. Both tables must have identical field types.</p>
+{% highlight scala %}
+val left = ds1.toTable(tableEnv, 'a, 'b, 'c)
+val right = ds2.toTable(tableEnv, 'a, 'b, 'c)
+val result = left.unionAll(right)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Intersect</strong></td>
+      <td>
+        <p>Similar to a SQL INTERSECT clause. Intersect returns records that exist in both tables. If a record is present in one or both tables more than once, it is returned just once, i.e., the resulting table has no duplicate records. Both tables must have identical field types.</p>
+{% highlight scala %}
+val left = ds1.toTable(tableEnv, 'a, 'b, 'c)
+val right = ds2.toTable(tableEnv, 'e, 'f, 'g)
+val result = left.intersect(right)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>IntersectAll</strong></td>
+      <td>
+        <p>Similar to a SQL INTERSECT ALL clause. IntersectAll returns records that exist in both tables. If a record is present in both tables more than once, it is returned as many times as it is present in both tables, i.e., the resulting table might have duplicate records. Both tables must have identical field types.</p>
+{% highlight scala %}
+val left = ds1.toTable(tableEnv, 'a, 'b, 'c)
+val right = ds2.toTable(tableEnv, 'e, 'f, 'g)
+val result = left.intersectAll(right)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Minus</strong></td>
+      <td>
+        <p>Similar to a SQL EXCEPT clause. Minus returns records from the left table that do not exist in the right table. Duplicate records in the left table are returned exactly once, i.e., duplicates are removed. Both tables must have identical field types.</p>
+{% highlight scala %}
+val left = ds1.toTable(tableEnv, 'a, 'b, 'c)
+val right = ds2.toTable(tableEnv, 'a, 'b, 'c)
+val result = left.minus(right)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>MinusAll</strong></td>
+      <td>
+        <p>Similar to a SQL EXCEPT ALL clause. MinusAll returns the records that do not exist in the right table. A record that is present n times in the left table and m times in the right table is returned (n - m) times, i.e., as many duplicates as are present in the right table are removed. Both tables must have identical field types.</p>
+{% highlight scala %}
+val left = ds1.toTable(tableEnv, 'a, 'b, 'c)
+val right = ds2.toTable(tableEnv, 'a, 'b, 'c)
+val result = left.minusAll(right)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Distinct</strong></td>
+      <td>
+        <p>Similar to a SQL DISTINCT clause. Returns records with distinct value combinations.</p>
+{% highlight scala %}
+val in = ds.toTable(tableEnv, 'a, 'b, 'c)
+val result = in.distinct()
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Order By</strong></td>
+      <td>
+        <p>Similar to a SQL ORDER BY clause. Returns records globally sorted across all parallel partitions.</p>
+{% highlight scala %}
+val in = ds.toTable(tableEnv, 'a, 'b, 'c)
+val result = in.orderBy('a.asc)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Limit</strong></td>
+      <td>
+        <p>Similar to a SQL LIMIT clause. Limits a sorted result to a specified number of records from an offset position. Limit is technically part of the Order By operator and thus must be preceded by it.</p>
+{% highlight scala %}
+val in = ds.toTable(tableEnv, 'a, 'b, 'c)
+val result = in.orderBy('a.asc).limit(3) // returns all records beginning with the 4th record
+{% endhighlight %}
+or
+{% highlight scala %}
+val in = ds.toTable(tableEnv, 'a, 'b, 'c)
+val result = in.orderBy('a.asc).limit(3, 5) // returns 5 records beginning with the 4th record
+{% endhighlight %}
+      </td>
+    </tr>
+
+  </tbody>
+</table>
+</div>
+</div>
+
+{% top %}
+
+### Expression Syntax
+Some of the operators in previous sections expect one or more expressions. Expressions can be specified using an embedded Scala DSL or as Strings. Please refer to the examples above to learn how expressions can be specified.
+
+This is the EBNF grammar for expressions:
+
+{% highlight ebnf %}
+
+expressionList = expression , { "," , expression } ;
+
+expression = alias ;
+
+alias = logic | ( logic , "AS" , fieldReference ) ;
+
+logic = comparison , [ ( "&&" | "||" ) , comparison ] ;
+
+comparison = term , [ ( "=" | "===" | "!=" | "!==" | ">" | ">=" | "<" | "<=" ) , term ] ;
+
+term = product , [ ( "+" | "-" ) , product ] ;
+
+product = unary , [ ( "*" | "/" | "%") , unary ] ;
+
+unary = [ "!" | "-" ] , composite ;
+
+composite = suffixed | atom ;
+
+suffixed = interval | cast | as | aggregation | nullCheck | if | functionCall ;
+
+interval = composite , "." , ("year" | "month" | "day" | "hour" | "minute" | "second" | "milli") ;
+
+cast = composite , ".cast(" , dataType , ")" ;
+
+dataType = "BYTE" | "SHORT" | "INT" | "LONG" | "FLOAT" | "DOUBLE" | "BOOLEAN" | "STRING" | "DECIMAL" | "DATE" | "TIME" | "TIMESTAMP" | "INTERVAL_MONTHS" | "INTERVAL_MILLIS" ;
+
+as = composite , ".as(" , fieldReference , ")" ;
+
+aggregation = composite , ( ".sum" | ".min" | ".max" | ".count" | ".avg" ) , [ "()" ] ;
+
+nullCheck = composite , ( ".isNull" | ".isNotNull" ) , [ "()" ] ;
+
+if = composite , ".?(" , expression , "," , expression , ")" ;
+
+functionCall = composite , "." , functionIdentifier , "(" , [ expression , { "," , expression } ] , ")" ;
+
+atom = ( "(" , expression , ")" ) | literal | nullLiteral | fieldReference ;
+
+nullLiteral = "Null(" , dataType , ")" ;
+
+timeIntervalUnit = "YEAR" | "YEAR_TO_MONTH" | "MONTH" | "DAY" | "DAY_TO_HOUR" | "DAY_TO_MINUTE" | "DAY_TO_SECOND" | "HOUR" | "HOUR_TO_MINUTE" | "HOUR_TO_SECOND" | "MINUTE" | "MINUTE_TO_SECOND" | "SECOND" ;
+
+timePointUnit = "YEAR" | "MONTH" | "DAY" | "HOUR" | "MINUTE" | "SECOND" | "QUARTER" | "WEEK" | "MILLISECOND" | "MICROSECOND" ;
+
+{% endhighlight %}
+
+Here, `literal` is a valid Java literal, `fieldReference` specifies a column in the data, and `functionIdentifier` specifies a supported scalar function. The
+column names and function names follow Java identifier syntax. Expressions specified as Strings can also use prefix notation instead of suffix notation to call operators and functions.
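+
+For illustration, a sketch that combines several of the grammar's productions (null check, if, and alias) in the String syntax; the input and its field names are hypothetical:
+
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "name, amount");
+// upper-case the name, replace null amounts with 0, and alias both results
+Table result = in.select("name.upperCase() as uName, amount.isNull.?(0, amount) as amt");
+{% endhighlight %}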
+
+If working with exact numeric values or large decimals is required, the Table API also supports Java's BigDecimal type. In the Scala Table API, decimals can be defined by `BigDecimal("123456")`; in Java, by appending a "p" for precise, e.g. `123456p`.
+
+In order to work with temporal values the Table API supports Java SQL's Date, Time, and Timestamp types. In the Scala Table API literals can be defined by using `java.sql.Date.valueOf("2016-06-27")`, `java.sql.Time.valueOf("10:10:42")`, or `java.sql.Timestamp.valueOf("2016-06-27 10:10:42.123")`. The Java and Scala Table API also support calling `"2016-06-27".toDate()`, `"10:10:42".toTime()`, and `"2016-06-27 10:10:42.123".toTimestamp()` for converting Strings into temporal types. *Note:* Since Java's temporal SQL types are time zone dependent, please make sure that the Flink Client and all TaskManagers use the same time zone.
+
+Temporal intervals can be represented as a number of months (`Types.INTERVAL_MONTHS`) or a number of milliseconds (`Types.INTERVAL_MILLIS`). Intervals of the same type can be added or subtracted (e.g. `2.hour + 10.minutes`). Intervals of milliseconds can be added to time points (e.g. `"2016-08-10".toDate + 5.day`).
+
+{% top %}
+
+
+SQL
+----
+SQL queries are specified using the `sql()` method of the `TableEnvironment`. The method returns the result of the SQL query as a `Table` which can be converted into a `DataSet` or `DataStream`, used in subsequent Table API queries, or written to a `TableSink` (see [Writing Tables to External Sinks](#writing-tables-to-external-sinks)). SQL and Table API queries can be seamlessly mixed and are holistically optimized and translated into a single DataStream or DataSet program.
+
+A `Table`, `DataSet`, `DataStream`, or external `TableSource` must be registered in the `TableEnvironment` in order to be accessible by a SQL query (see [Registering Tables](#registering-tables)).
+
+*Note: Flink's SQL support is not feature complete, yet. Queries that include unsupported SQL features will cause a `TableException`. The limitations of SQL on batch and streaming tables are listed in the following sections.*
+
+### SQL on Batch Tables
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// read a DataSet from an external source
+DataSet<Tuple3<Long, String, Integer>> ds = env.readCsvFile(...);
+// register the DataSet as table "Orders"
+tableEnv.registerDataSet("Orders", ds, "user, product, amount");
+// run a SQL query on the Table and retrieve the result as a new Table
+Table result = tableEnv.sql(
+  "SELECT SUM(amount) FROM Orders WHERE product LIKE '%Rubber%'");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tableEnv = TableEnvironment.getTableEnvironment(env)
+
+// read a DataSet from an external source
+val ds: DataSet[(Long, String, Integer)] = env.readCsvFile(...)
+// register the DataSet under the name "Orders"
+tableEnv.registerDataSet("Orders", ds, 'user, 'product, 'amount)
+// run a SQL query on the Table and retrieve the result as a new Table
+val result = tableEnv.sql(
+  "SELECT SUM(amount) FROM Orders WHERE product LIKE '%Rubber%'")
+{% endhighlight %}
+</div>
+</div>
+
+#### Limitations
+
+The current version supports selection (filter), projection, inner equi-joins, grouping, non-distinct aggregates, and sorting on batch tables.
+
+Among others, the following SQL features are not supported, yet:
+
+- Timestamps and intervals are limited to milliseconds precision
+- Interval arithmetic is currently limited
+- Distinct aggregates (e.g., `COUNT(DISTINCT name)`)
+- Non-equi joins and Cartesian products
+- Grouping sets
+
+*Note: Tables are joined in the order in which they are specified in the `FROM` clause. In some cases the table order must be manually tweaked to resolve Cartesian products.*
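+
+As a sketch of an inner equi-join under these rules (the field names on "Customers" are hypothetical; "Orders" is registered as in the example above, and `user` must be escaped with backticks because USER is a reserved keyword, see [Reserved Keywords](#reserved-keywords)):
+
+{% highlight java %}
+// the equality predicate in the WHERE clause makes this an equi-join
+// rather than an unsupported Cartesian product
+Table result = tableEnv.sql(
+  "SELECT c.name, o.product, o.amount " +
+  "FROM Customers c, Orders o " +
+  "WHERE c.name = o.`user`");
+{% endhighlight %}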
+
+### SQL on Streaming Tables
+
+SQL queries can be executed on streaming Tables (Tables backed by `DataStream` or `StreamTableSource`) by using the `SELECT STREAM` keywords instead of `SELECT`. Please refer to the [Apache Calcite's Streaming SQL documentation](https://calcite.apache.org/docs/stream.html) for more information on the Streaming SQL syntax.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// ingest a DataStream from an external source
+DataStream<Tuple3<Long, String, Integer>> ds = env.addSource(...);
+// register the DataStream as table "Orders"
+tableEnv.registerDataStream("Orders", ds, "user, product, amount");
+// run a SQL query on the Table and retrieve the result as a new Table
+Table result = tableEnv.sql(
+  "SELECT STREAM product, amount FROM Orders WHERE product LIKE '%Rubber%'");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+val tableEnv = TableEnvironment.getTableEnvironment(env)
+
+// read a DataStream from an external source
+val ds: DataStream[(Long, String, Integer)] = env.addSource(...)
+// register the DataStream under the name "Orders"
+tableEnv.registerDataStream("Orders", ds, 'user, 'product, 'amount)
+// run a SQL query on the Table and retrieve the result as a new Table
+val result = tableEnv.sql(
+  "SELECT STREAM product, amount FROM Orders WHERE product LIKE '%Rubber%'")
+{% endhighlight %}
+</div>
+</div>
+
+#### Limitations
+
+The current version of streaming SQL only supports `SELECT`, `FROM`, `WHERE`, and `UNION` clauses. Aggregations or joins are not supported yet.
+
+{% top %}
+
+### SQL Syntax
+
+Flink uses [Apache Calcite](https://calcite.apache.org/docs/reference.html) for SQL parsing. Currently, Flink SQL only supports query-related SQL syntax and only a subset of the comprehensive SQL standard. The following BNF-grammar describes the supported SQL features:
+
+```
+
+query:
+  values
+  | {
+      select
+      | selectWithoutFrom
+      | query UNION [ ALL ] query
+      | query EXCEPT query
+      | query INTERSECT query
+    }
+    [ ORDER BY orderItem [, orderItem ]* ]
+    [ LIMIT { count | ALL } ]
+    [ OFFSET start { ROW | ROWS } ]
+    [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY]
+
+orderItem:
+  expression [ ASC | DESC ]
+
+select:
+  SELECT [ STREAM ] [ ALL | DISTINCT ]
+  { * | projectItem [, projectItem ]* }
+  FROM tableExpression
+  [ WHERE booleanExpression ]
+  [ GROUP BY { groupItem [, groupItem ]* } ]
+  [ HAVING booleanExpression ]
+
+selectWithoutFrom:
+  SELECT [ ALL | DISTINCT ]
+  { * | projectItem [, projectItem ]* }
+
+projectItem:
+  expression [ [ AS ] columnAlias ]
+  | tableAlias . *
+
+tableExpression:
+  tableReference [, tableReference ]*
+  | tableExpression [ NATURAL ] [ LEFT | RIGHT | FULL ] JOIN tableExpression [ joinCondition ]
+
+joinCondition:
+  ON booleanExpression
+  | USING '(' column [, column ]* ')'
+
+tableReference:
+  tablePrimary
+  [ [ AS ] alias [ '(' columnAlias [, columnAlias ]* ')' ] ]
+
+tablePrimary:
+  [ TABLE ] [ [ catalogName . ] schemaName . ] tableName
+
+values:
+  VALUES expression [, expression ]*
+
+groupItem:
+  expression
+  | '(' ')'
+  | '(' expression [, expression ]* ')'
+
+```
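+
+For example, a query that exercises several of the grammar's clauses (grouping, ordering, and FETCH), assuming the "Orders" table from the batch examples above is registered:
+
+{% highlight java %}
+Table result = tableEnv.sql(
+  "SELECT product, SUM(amount) AS total " +
+  "FROM Orders " +
+  "GROUP BY product " +
+  "ORDER BY total DESC " +
+  "FETCH FIRST 10 ROWS ONLY");
+{% endhighlight %}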
+
+
+{% top %}
+
+### Reserved Keywords
+
+Although not every SQL feature is implemented yet, some string combinations are already reserved as keywords for future use. If you want to use one of the following strings as a field name, make sure to surround them with backticks (e.g. `` `value` ``, `` `count` ``).
+
+{% highlight sql %}
+
+A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICTS, CHARACTERS, CHARACTER_LENGTH, CHARACTER_SET_CATALOG, CHARACTER_SET_NAME, CHARACTER_SET_SCHEMA, CHAR_LENGTH, CHECK, CLASS_ORIGIN, CLOB, CLOSE, COALESCE, COBOL, COLLATE, COLLATION, COLLATION_CATALOG, COLLATION_NAME, COLLATION_SCHEMA, COLLECT, COLUMN, COLUMN_NAME, COMMAND_FUNCTION, COMMAND_FUNCTION_CODE, COMMIT, COMMITTED, CONDITION, CONDITION_NUMBER, CONNECT, CONNECTION, CONNECTION_NAME, CONSTRAINT, CONSTRAINTS, CONSTRAINT_CATALOG, CONSTRAINT_NAME, CONSTRAINT_SCHEMA, CONSTRUCTOR, CONTAINS, CONTINUE, CONVERT, CORR, CORRESPONDING, COUNT, COVAR_POP, COVAR_SAMP, CREATE, CROSS, CUBE, CUME_DIST, CURRENT, CURRENT_CATALOG, CURRENT_DATE, CURRENT_DEFAULT_TRANSFORM_GROUP, CURRENT_PATH, CURRENT_ROLE, CURRENT_SCHEMA, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_TRANSFORM_GROUP_FOR_TYPE, CURRENT_USER, CURSOR, CURSOR_NAME, CYCLE, DATA, DATABASE, DATE, DATETIME_INTERVAL_CODE, DATETIME_INTERVAL_PRECISION, DAY, DEALLOCATE, DEC, DECADE, DECIMAL, DECLARE, DEFAULT, DEFAULTS, DEFERRABLE, DEFERRED, DEFINED, DEFINER, DEGREE, DELETE, DENSE_RANK, DEPTH, DEREF, DERIVED, DESC, DESCRIBE, DESCRIPTION, DESCRIPTOR, DETERMINISTIC, DIAGNOSTICS, DISALLOW, DISCONNECT, DISPATCH, DISTINCT, DOMAIN, DOUBLE, DOW, DOY, DROP, DYNAMIC, DYNAMIC_FUNCTION, DYNAMIC_FUNCTION_CODE, EACH, ELEMENT, ELSE, END, END-EXEC, EPOCH, EQUALS, ESCAPE, EVERY, EXCEPT, EXCEPTION, EXCLUDE, EXCLUDING, EXEC, EXECUTE, EXISTS, EXP, EXPLAIN, EXTEND, EXTERNAL, EXTRACT, FALSE, FETCH, FILTER, FINAL, FIRST, FIRST_VALUE, FLOAT, FLOOR, FOLLOWING, FOR, FOREIGN, FORTRAN, FOUND, FRAC_SECOND, FREE, FROM, FULL, FUNCTION, FUSION, G, GENERAL, GENERATED, GET, GLOBAL, GO, GOTO, GRANT, GRANTED, GROUP, GROUPING, HAVING, HIERARCHY, HOLD, HOUR, IDENTITY, IMMEDIATE, IMPLEMENTATION, IMPORT, IN, INCLUDING, INCREMENT, INDICATOR, INITIALLY, INNER, INOUT, INPUT, INSENSITIVE, INSERT, INSTANCE, INSTANTIABLE, INT, INTEGER, INTERSECT, INTERSECTION, INTERVAL, INTO, INVOKER, IS, ISOLATION, JAVA, JOIN, K, KEY, KEY_MEMBER, KEY_TYPE, LABEL, LANGUAGE, LARGE, LAST, LAST_VALUE, LATERAL, LEADING, LEFT, LENGTH, LEVEL, LIBRARY, LIKE, LIMIT, LN, LOCAL, LOCALTIME, LOCALTIMESTAMP, LOCATOR, LOWER, M, MAP, MATCH, MATCHED, MAX, MAXVALUE, MEMBER, MERGE, MESSAGE_LENGTH, MESSAGE_OCTET_LENGTH, MESSAGE_TEXT, METHOD, MICROSECOND, MILLENNIUM, MIN, MINUTE, MINVALUE, MOD, MODIFIES, MODULE, MONTH, MORE, MULTISET, MUMPS, NAME, NAMES, NATIONAL, NATURAL, NCHAR, NCLOB, NESTING, NEW, NEXT, NO, NONE, NORMALIZE, NORMALIZED, NOT, NULL, NULLABLE, NULLIF, NULLS, NUMBER, NUMERIC, OBJECT, OCTETS, OCTET_LENGTH, OF, OFFSET, OLD, ON, ONLY, OPEN, OPTION, OPTIONS, OR, ORDER, ORDERING, ORDINALITY, OTHERS, OUT, OUTER, OUTPUT, OVER, OVERLAPS, OVERLAY, OVERRIDING, PAD, PARAMETER, PARAMETER_MODE, PARAMETER_NAME, PARAMETER_ORDINAL_POSITION, PARAMETER_SPECIFIC_CATALOG, PARAMETER_SPECIFIC_NAME, PARAMETER_SPECIFIC_SCHEMA, PARTIAL, PARTITION, PASCAL, PASSTHROUGH, PATH, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK, PLACING, PLAN, PLI, POSITION, POWER, PRECEDING, PRECISION, PREPARE, PRESERVE, PRIMARY, PRIOR, PRIVILEGES, PROCEDURE, PUBLIC, QUARTER, RANGE, RANK, READ, READS, REAL, RECURSIVE, REF, REFERENCES, REFERENCING, REGR_AVGX, REGR_AVGY, REGR_COUNT, REGR_INTERCEPT, REGR_R2, REGR_SLOPE, REGR_SXX, REGR_SXY, REGR_SYY, RELATIVE, RELEASE, REPEATABLE, RESET, RESTART, RESTRICT, RESULT, RETURN, RETURNED_CARDINALITY, RETURNED_LENGTH, RETURNED_OCTET_LENGTH, RETURNED_SQLSTATE, RETURNS, REVOKE, RIGHT, ROLE, ROLLBACK, ROLLUP, ROUTINE, ROUTINE_CATALOG, ROUTINE_NAME, ROUTINE_SCHEMA, ROW, ROWS, ROW_COUNT, ROW_NUMBER, SAVEPOINT, SCALE, SCHEMA, SCHEMA_NAME, SCOPE, SCOPE_CATALOGS, SCOPE_NAME, SCOPE_SCHEMA, SCROLL, SEARCH, SECOND, SECTION, SECURITY, SELECT, SELF, SENSITIVE, SEQUENCE, SERIALIZABLE, SERVER, SERVER_NAME, SESSION, SESSION_USER, SET, SETS, SIMILAR, SIMPLE, SIZE, SMALLINT, SOME, SOURCE, SPACE, SPECIFIC, SPECIFICTYPE, SPECIFIC_NAME, SQL, SQLEXCEPTION, SQLSTATE, SQLWARNING, SQL_TSI_DAY, SQL_TSI_FRAC_SECOND, SQL_TSI_HOUR, SQL_TSI_MICROSECOND, SQL_TSI_MINUTE, SQL_TSI_MONTH, SQL_TSI_QUARTER, SQL_TSI_SECOND, SQL_TSI_WEEK, SQL_TSI_YEAR, SQRT, START, STATE, STATEMENT, STATIC, STDDEV_POP, STDDEV_SAMP, STREAM, STRUCTURE, STYLE, SUBCLASS_ORIGIN, SUBMULTISET, SUBSTITUTE, SUBSTRING, SUM, SYMMETRIC, SYSTEM, SYSTEM_USER, TABLE, TABLESAMPLE, TABLE_NAME, TEMPORARY, THEN, TIES, TIME, TIMESTAMP, TIMESTAMPADD, TIMESTAMPDIFF, TIMEZONE_HOUR, TIMEZONE_MINUTE, TINYINT, TO, TOP_LEVEL_COUNT, TRAILING, TRANSACTION, TRANSACTIONS_ACTIVE, TRANSACTIONS_COMMITTED, TRANSACTIONS_ROLLED_BACK, TRANSFORM, TRANSFORMS, TRANSLATE, TRANSLATION, TREAT, TRIGGER, TRIGGER_CATALOG, TRIGGER_NAME, TRIGGER_SCHEMA, TRIM, TRUE, TYPE, UESCAPE, UNBOUNDED, UNCOMMITTED, UNDER, UNION, UNIQUE, UNKNOWN, UNNAMED, UNNEST, UPDATE, UPPER, UPSERT, USAGE, USER, USER_DEFINED_TYPE_CATALOG, USER_DEFINED_TYPE_CODE, USER_DEFINED_TYPE_NAME, USER_DEFINED_TYPE_SCHEMA, USING, VALUE, VALUES, VARBINARY, VARCHAR, VARYING, VAR_POP, VAR_SAMP, VERSION, VIEW, WEEK, WHEN, WHENEVER, WHERE, WIDTH_BUCKET, WINDOW, WITH, WITHIN, WITHOUT, WORK, WRAPPER, WRITE, XML, YEAR, ZONE
+
+{% endhighlight %}
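+
+For example, a sketch of escaping the reserved keyword `count` with backticks, assuming a hypothetical registered table "WordCounts" with a field named count:
+
+{% highlight java %}
+Table result = tableEnv.sql(
+  "SELECT word, `count` FROM WordCounts");
+{% endhighlight %}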
+
+{% top %}
+
+Data Types
+----------
+
+The Table API is built on top of Flink's DataSet and DataStream APIs. Internally, it also uses Flink's `TypeInformation` to distinguish between types. The Table API does not yet support all Flink types. All supported simple types are listed in `org.apache.flink.api.table.Types`. The following table summarizes the relation between Table API types, SQL types, and the resulting Java class.
+
+| Table API              | SQL                         | Java type              |
+| :--------------------- | :-------------------------- | :--------------------- |
+| `Types.STRING`         | `VARCHAR`                   | `java.lang.String`     |
+| `Types.BOOLEAN`        | `BOOLEAN`                   | `java.lang.Boolean`    |
+| `Types.BYTE`           | `TINYINT`                   | `java.lang.Byte`       |
+| `Types.SHORT`          | `SMALLINT`                  | `java.lang.Short`      |
+| `Types.INT`            | `INTEGER, INT`              | `java.lang.Integer`    |
+| `Types.LONG`           | `BIGINT`                    | `java.lang.Long`       |
+| `Types.FLOAT`          | `REAL, FLOAT`               | `java.lang.Float`      |
+| `Types.DOUBLE`         | `DOUBLE`                    | `java.lang.Double`     |
+| `Types.DECIMAL`        | `DECIMAL`                   | `java.math.BigDecimal` |
+| `Types.DATE`           | `DATE`                      | `java.sql.Date`        |
+| `Types.TIME`           | `TIME`                      | `java.sql.Time`        |
+| `Types.TIMESTAMP`      | `TIMESTAMP(3)`              | `java.sql.Timestamp`   |
+| `Types.INTERVAL_MONTHS`| `INTERVAL YEAR TO MONTH`    | `java.lang.Integer`    |
+| `Types.INTERVAL_MILLIS`| `INTERVAL DAY TO SECOND(3)` | `java.lang.Long`       |
+
+Advanced types such as generic types, composite types (e.g. POJOs or Tuples), and arrays can be fields of a row but can not be accessed yet. They are treated like a black box within Table API and SQL.
+
+{% top %}
+
+Scalar Functions
+----------------
+
+Both the Table API and SQL come with a set of built-in scalar functions for data transformations. This section gives a brief overview of the scalar functions that are currently available.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br/>
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 40%">Function</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td>
+        {% highlight java %}
+NUMERIC.exp()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates Euler's number raised to the given power.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+NUMERIC.log10()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the base 10 logarithm of the given value.</p>
+      </td>
+    </tr>
+
+
+    <tr>
+      <td>
+        {% highlight java %}
+NUMERIC.ln()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the natural logarithm of the given value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+NUMERIC.power(NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the given number raised to the power of the other value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+NUMERIC.abs()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the absolute value of the given value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+NUMERIC.floor()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the largest integer less than or equal to a given number.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+NUMERIC.ceil()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the smallest integer greater than or equal to a given number.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.substring(INT, INT)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Creates a substring of the given string at the given index for the given length. The index starts at 1 and is inclusive, i.e., the character at the index is included in the substring. The substring has the specified length or less.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.substring(INT)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Creates a substring of the given string beginning at the given index to the end. The start index starts at 1 and is inclusive.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.trim(LEADING, STRING)
+STRING.trim(TRAILING, STRING)
+STRING.trim(BOTH, STRING)
+STRING.trim(BOTH)
+STRING.trim()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Removes leading and/or trailing characters from the given string. By default, whitespaces at both sides are removed.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.charLength()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns the length of a String.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.upperCase()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns all of the characters in a string in upper case using the rules of the default locale.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.lowerCase()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns all of the characters in a string in lower case using the rules of the default locale.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.initCap()
+{% endhighlight %}
+      </td>
+
+      <td>
+        <p>Converts the initial letter of each word in a string to uppercase. Assumes a string containing only [A-Za-z0-9]; everything else is treated as whitespace.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.like(STRING)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns true, if a string matches the specified LIKE pattern. E.g. "Jo_n%" matches all strings that start with "Jo(arbitrary letter)n".</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.similar(STRING)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns true, if a string matches the specified SQL regex pattern. E.g. "A+" matches all strings that consist of at least one "A".</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.toDate()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a date string in the form "yy-mm-dd" to a SQL date.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.toTime()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a time string in the form "hh:mm:ss" to a SQL time.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+STRING.toTimestamp()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a timestamp string in the form "yy-mm-dd hh:mm:ss.fff" to a SQL timestamp.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight java %}
+TEMPORAL.extract(TIMEINTERVALUNIT)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Extracts parts of a time point or time interval. Returns the part as a long value. E.g., <code>"2006-06-05".toDate.extract(DAY)</code> yields 5.</p>
+      </td>
+    </tr>
+
+  </tbody>
+</table>
+
+</div>
+<div data-lang="scala" markdown="1">
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 40%">Function</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td>
+        {% highlight scala %}
+NUMERIC.exp()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates Euler's number raised to the given power.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+NUMERIC.log10()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the base 10 logarithm of the given value.</p>
+      </td>
+    </tr>
+
+
+    <tr>
+      <td>
+        {% highlight scala %}
+NUMERIC.ln()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the natural logarithm of the given value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+NUMERIC.power(NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the given number raised to the power of the other value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+NUMERIC.abs()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the absolute value of the given value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+NUMERIC.floor()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the largest integer less than or equal to a given number.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+NUMERIC.ceil()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the smallest integer greater than or equal to a given number.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.substring(INT, INT)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Creates a substring of the given string at the given index for the given length. The index starts at 1 and is inclusive, i.e., the character at the index is included in the substring. The substring has the specified length or less.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.substring(INT)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Creates a substring of the given string beginning at the given index to the end. The start index starts at 1 and is inclusive.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.trim(
+  leading = true,
+  trailing = true,
+  character = " ")
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Removes leading and/or trailing characters from the given string.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.charLength()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns the length of a String.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.upperCase()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns all of the characters in a string in upper case using the rules of the default locale.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.lowerCase()
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns all of the characters in a string in lower case using the rules of the default locale.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.initCap()
+{% endhighlight %}
+      </td>
+
+      <td>
+        <p>Converts the initial letter of each word in a string to uppercase. Assumes a string containing only [A-Za-z0-9]; everything else is treated as whitespace.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.like(STRING)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns true if a string matches the specified LIKE pattern. E.g., "Jo_n%" matches all strings that start with "Jo(arbitrary letter)n".</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.similar(STRING)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns true if a string matches the specified SQL regex pattern. E.g., "A+" matches all strings that consist of at least one "A".</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.toDate
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a date string in the form "yy-mm-dd" to a SQL date.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.toTime
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a time string in the form "hh:mm:ss" to a SQL time.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+STRING.toTimestamp
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a timestamp string in the form "yy-mm-dd hh:mm:ss.fff" to a SQL timestamp.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight scala %}
+TEMPORAL.extract(TimeIntervalUnit)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Extracts parts of a time point or time interval. Returns the part as a long value. E.g., <code>"2006-06-05".toDate.extract(TimeIntervalUnit.DAY)</code> yields 5.</p>
+      </td>
+    </tr>
+
+  </tbody>
+</table>
+</div>
+
+<div data-lang="SQL" markdown="1">
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 40%">Function</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td>
+        {% highlight sql %}
+EXP(NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates Euler's number raised to the given power.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+LOG10(NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the base 10 logarithm of the given value.</p>
+      </td>
+    </tr>
+
+
+    <tr>
+      <td>
+        {% highlight sql %}
+LN(NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the natural logarithm of the given value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+POWER(NUMERIC, NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the given number raised to the power of the other value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+ABS(NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the absolute value of the given value.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+FLOOR(NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the largest integer less than or equal to a given number.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+CEIL(NUMERIC)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Calculates the smallest integer greater than or equal to a given number.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+SUBSTRING(VARCHAR, INT, INT)
+SUBSTRING(VARCHAR FROM INT FOR INT)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Creates a substring of the given string at the given index for the given length. The index starts at 1 and is inclusive, i.e., the character at the index is included in the substring. The substring has the specified length or less.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+SUBSTRING(VARCHAR, INT)
+SUBSTRING(VARCHAR FROM INT)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Creates a substring of the given string beginning at the given index to the end. The start index starts at 1 and is inclusive.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+TRIM(LEADING VARCHAR FROM VARCHAR)
+TRIM(TRAILING VARCHAR FROM VARCHAR)
+TRIM(BOTH VARCHAR FROM VARCHAR)
+TRIM(VARCHAR)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Removes leading and/or trailing characters from the given string. By default, whitespace is removed at both ends.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+CHAR_LENGTH(VARCHAR)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns the length of a String.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+UPPER(VARCHAR)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns all of the characters in a string in upper case using the rules of the default locale.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+LOWER(VARCHAR)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns all of the characters in a string in lower case using the rules of the default locale.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+INITCAP(VARCHAR)
+{% endhighlight %}
+      </td>
+
+      <td>
+        <p>Converts the initial letter of each word in a string to uppercase. Assumes a string containing only [A-Za-z0-9]; everything else is treated as whitespace.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+VARCHAR LIKE VARCHAR
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns true if a string matches the specified LIKE pattern. E.g., "Jo_n%" matches all strings that start with "Jo(arbitrary letter)n".</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+VARCHAR SIMILAR TO VARCHAR
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Returns true if a string matches the specified SQL regex pattern. E.g., "A+" matches all strings that consist of at least one "A".</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+DATE VARCHAR
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a date string in the form "yy-mm-dd" to a SQL date.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+TIME VARCHAR
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a time string in the form "hh:mm:ss" to a SQL time.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+TIMESTAMP VARCHAR
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Parses a timestamp string in the form "yy-mm-dd hh:mm:ss.fff" to a SQL timestamp.</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td>
+        {% highlight sql %}
+EXTRACT(TIMEINTERVALUNIT FROM TEMPORAL)
+{% endhighlight %}
+      </td>
+      <td>
+        <p>Extracts parts of a time point or time interval. Returns the part as a long value. E.g., <code>EXTRACT(DAY FROM DATE '2006-06-05')</code> yields 5.</p>
+      </td>
+    </tr>
+
+  </tbody>
+</table>
+</div>
+</div>
+
+### User-defined Scalar Functions
+
+If a required scalar function is not contained in the built-in functions, it is possible to define custom, user-defined scalar functions for both the Table API and SQL. A user-defined scalar function maps zero, one, or multiple scalar values to a new scalar value.
+
+In order to define a scalar function, one has to extend the base class `ScalarFunction` in `org.apache.flink.api.table.functions` and implement (one or more) evaluation methods. The behavior of a scalar function is determined by its evaluation method. An evaluation method must be declared public and named `eval`. The parameter types and return type of the evaluation method also determine the parameter and return types of the scalar function. Evaluation methods can also be overloaded by implementing multiple methods named `eval`.
+
+The following example snippet shows how to define your own hash code function:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+public static class HashCode extends ScalarFunction {
+  public int eval(String s) {
+    return s.hashCode() * 12;
+  }
+}
+
+BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// register the function
+tableEnv.registerFunction("hashCode", new HashCode())
+
+// use the function in Java Table API
+myTable.select("string, string.hashCode(), hashCode(string)");
+
+// use the function in SQL API
+tableEnv.sql("SELECT string, HASHCODE(string) FROM MyTable");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// must be defined in static/object context
+object hashCode extends ScalarFunction {
+  def eval(s: String): Int = {
+    s.hashCode() * 12
+  }
+}
+
+val tableEnv = TableEnvironment.getTableEnvironment(env)
+
+// use the function in Scala Table API
+myTable.select('string, hashCode('string))
+
+// register and use the function in SQL
+tableEnv.registerFunction("hashCode", hashCode)
+tableEnv.sql("SELECT string, HASHCODE(string) FROM MyTable");
+{% endhighlight %}
+</div>
+</div>
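+
+As mentioned above, evaluation methods can be overloaded. The following is a minimal sketch of an overloaded scalar function (the class name and logic are invented for illustration):
+
+{% highlight java %}
+public static class SubstringFunction extends ScalarFunction {
+  // overloaded eval methods: same name, different parameter lists
+  public String eval(String s, int begin) {
+    return s.substring(begin);
+  }
+
+  public String eval(String s, int begin, int end) {
+    return s.substring(begin, end);
+  }
+}
+{% endhighlight %}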
+
+By default, the result type of an evaluation method is determined by Flink's type extraction facilities. This is sufficient for basic types or simple POJOs but might be wrong for more complex, custom, or composite types. In these cases, the `TypeInformation` of the result type can be manually defined by overriding `ScalarFunction#getResultType()`.
+
+Internally, the Table API and SQL code generation works with primitive values as much as possible. To avoid overhead through object creation and casting at runtime, it is recommended to declare parameters and result types as primitive types instead of their boxed classes. `Types.DATE` and `Types.TIME` can also be represented as `int`. `Types.TIMESTAMP` can be represented as `long`.
+
+The following advanced example takes the internal timestamp representation and also returns the internal timestamp representation as a long value. By overriding `ScalarFunction#getResultType()` we define that the returned long value should be interpreted as a `Types.TIMESTAMP` by the code generation.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+public static class TimestampModifier extends ScalarFunction {
+  public long eval(long t) {
+    return t % 1000;
+  }
+
+  @Override
+  public TypeInformation<?> getResultType(Class<?>[] signature) {
+    return Types.TIMESTAMP;
+  }
+}
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+object TimestampModifier extends ScalarFunction {
+  def eval(t: Long): Long = {
+    t % 1000
+  }
+
+  override def getResultType(signature: Array[Class[_]]): TypeInformation[_] = {
+    Types.TIMESTAMP
+  }
+}
+{% endhighlight %}
+</div>
+</div>
+
+
+
+{% top %}
+
+Writing Tables to External Sinks
+--------------------------------
+
+A `Table` can be written to a `TableSink`, which is a generic interface to support a wide variety of file formats (e.g., CSV, Apache Parquet, Apache Avro), storage systems (e.g., JDBC, Apache HBase, Apache Cassandra, Elasticsearch), or messaging systems (e.g., Apache Kafka, RabbitMQ). A batch `Table` can only be written to a `BatchTableSink`, while a streaming `Table` requires a `StreamTableSink`. A `TableSink` can implement both interfaces at the same time.
+
+Currently, Flink only provides a `CsvTableSink` that writes a batch or streaming `Table` to CSV-formatted files. A custom `TableSink` can be defined by implementing the `BatchTableSink` and/or `StreamTableSink` interface.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// compute the result Table using Table API operators and/or SQL queries
+Table result = ...
+
+// create a TableSink ("|" is the field delimiter)
+TableSink sink = new CsvTableSink("/path/to/file", "|");
+// write the result Table to the TableSink
+result.writeToSink(sink);
+
+// execute the program
+env.execute();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tableEnv = TableEnvironment.getTableEnvironment(env)
+
+// compute the result Table using Table API operators and/or SQL queries
+val result: Table = ...
+
+// create a TableSink
+val sink: TableSink = new CsvTableSink("/path/to/file", fieldDelim = "|")
+// write the result Table to the TableSink
+result.writeToSink(sink)
+
+// execute the program
+env.execute()
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+Runtime Configuration
+---------------------
+
+The Table API provides a configuration (the so-called `TableConfig`) to modify runtime behavior. It can be accessed through the `TableEnvironment`.
+
+### Null Handling
+By default, the Table API supports `null` values. Null handling can be disabled to improve performance by setting the `nullCheck` property in the `TableConfig` to `false`.
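+
+For example (a minimal sketch, assuming a `TableEnvironment` has been created as in the earlier examples):
+
+{% highlight java %}
+BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+
+// disable null checks to avoid the runtime overhead of null handling
+tableEnv.getConfig().setNullCheck(false);
+{% endhighlight %}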
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/types_serialization.md
----------------------------------------------------------------------
diff --git a/docs/dev/types_serialization.md b/docs/dev/types_serialization.md
new file mode 100644
index 0000000..364aeb8
--- /dev/null
+++ b/docs/dev/types_serialization.md
@@ -0,0 +1,253 @@
+---
+title: "Data Types"
+nav-id: types
+nav-parent_id: dev
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Flink handles types in a unique way, containing its own type descriptors,
+generic type extraction, and type serialization framework.
+This document describes the concepts and the rationale behind them.
+
+There are fundamental differences in the way that the Scala API and
+the Java API handle type information, so most of the issues described
+here relate to only one of the two APIs.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+## Type handling in Flink
+
+Flink tries to infer as much information as possible about the types that enter and leave user functions.
+This stands in contrast to approaches that assume nothing and let the
+programming language and serialization framework handle all types dynamically.
+
+* To allow using POJOs and grouping/joining them by referring to field names, Flink needs the type
+  information to make checks (for typos and type compatibility) before the job is executed.
+
+* The more we know, the better serialization and data layout schemes the compiler/optimizer can develop.
+  That is quite important for the memory usage paradigm in Flink (work on serialized data
+  inside/outside the heap and make serialization very cheap).
+
+* For the upcoming logical programs (see roadmap draft) we need this to know the "schema" of functions.
+
+* Finally, it also spares users having to worry about serialization frameworks and having to register
+  types at those frameworks.
+
+
+## Flink's TypeInformation class
+
+The class {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/typeinfo/TypeInformation.java "TypeInformation" %}
+is the base class for all type descriptors. It reveals some basic properties of the type and can generate serializers
+and, in specializations, comparators for the types.
+(*Note that comparators in Flink do much more than define an order - they are the basic utility for handling keys*)
+
+Internally, Flink makes the following distinctions between types:
+
+* Basic types: All Java primitives and their boxed form, plus `void`, `String`, `Date`, `BigDecimal`, and `BigInteger`.
+
+* Primitive arrays and Object arrays
+
+* Composite types
+
+  * Flink Java Tuples (part of the Flink Java API)
+
+  * Scala *case classes* (including Scala tuples)
+
+  * POJOs: classes that follow a certain bean-like pattern
+
+* Scala auxiliary types (Option, Either, Lists, Maps, ...)
+
+* Generic types: These will not be serialized by Flink itself, but by Kryo.
+
+POJOs are of particular interest, because they support the creation of complex types and the use of field
+names in the definition of keys: `dataSet.join(another).where("name").equalTo("personName")`.
+They are also transparent to the runtime and can be handled very efficiently by Flink.
+
+
+**Rules for POJO types**
+
+Flink recognizes a data type as a POJO type (and allows "by-name" field referencing) if the following
+conditions are fulfilled (an example follows the list):
+
+* The class is public and standalone (no non-static inner class)
+* The class has a public no-argument constructor
+* All fields in the class (and all superclasses) are either public
+  or have a public getter and setter method that follows the Java Beans
+  naming conventions for getters and setters.
+
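+For illustration, a minimal class that satisfies these rules might look like this (the class and field names are made up):
+
+{% highlight java %}
+public class WordWithCount {
+
+    public String word;
+    public int count;
+
+    // public no-argument constructor, required for POJO recognition
+    public WordWithCount() {}
+
+    public WordWithCount(String word, int count) {
+        this.word = word;
+        this.count = count;
+    }
+}
+{% endhighlight %}
+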
+
+## Type Information in the Scala API
+
+Scala has very elaborate concepts for runtime type information through *type manifests* and *class tags*. In
+general, types and methods have access to the types of their generic parameters - thus, Scala programs do
+not suffer from type erasure as Java programs do.
+
+In addition, Scala allows running custom code in the Scala compiler through Scala macros - that means that some Flink
+code gets executed whenever you compile a Scala program written against Flink's Scala API.
+
+We use the macros to look at the parameter types and return types of all user functions during compilation - the
+point in time when all type information is fully available. Within the macro, we create
+a *TypeInformation* for the function's return types (or parameter types) and make it part of the operation.
+
+
+#### No Implicit Value for Evidence Parameter Error
+
+In the case where TypeInformation could not be created, programs fail to compile with an error
+stating *"could not find implicit value for evidence parameter of type TypeInformation"*.
+
+A frequent reason is that the code that generates the TypeInformation has not been imported.
+Make sure to import the entire `flink.api.scala` package.
+{% highlight scala %}
+import org.apache.flink.api.scala._
+{% endhighlight %}
+
+Another common cause are generic methods, which can be fixed as described in the following section.
+
+
+#### Generic Methods
+
+Consider the following case:
+
+{% highlight scala %}
+def selectFirst[T](input: DataSet[(T, _)]) : DataSet[T] = {
+  input.map { v => v._1 }
+}
+
+val data : DataSet[(String, Long)] = ...
+
+val result = selectFirst(data)
+{% endhighlight %}
+
+For such generic methods, the data types of the function parameters and return type may not be the same
+for every call and are not known at the site where the method is defined. The code above will result
+in an error that not enough implicit evidence is available.
+
+In such cases, the type information has to be generated at the invocation site and passed to the
+method. Scala offers *implicit parameters* for that.
+
+The following code tells Scala to bring type information for *T* into the function. The type
+information will then be generated at the sites where the method is invoked, rather than where the
+method is defined.
+
+{% highlight scala %}
+def selectFirst[T : TypeInformation](input: DataSet[(T, _)]) : DataSet[T] = {
+  input.map { v => v._1 }
+}
+{% endhighlight %}
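+
+With this change, the earlier call site compiles, because the implicit TypeInformation is now generated where the method is invoked (a sketch reusing the snippet above):
+
+{% highlight scala %}
+val data : DataSet[(String, Long)] = ...
+
+val result: DataSet[String] = selectFirst(data)
+{% endhighlight %}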
+
+
+
+## Type Information in the Java API
+
+In general, Java erases generic type information. Only subclasses of generic classes
+store the types to which the generic type variables are bound.
+
+Flink uses reflection on the (anonymous) classes that implement the user functions to figure out the types of
+the generic parameters of the function. This logic also contains some simple type inference for cases where
+the return types of functions are dependent on input types, such as in the generic utility method below:
+
+{% highlight java %}
+public class AppendOne<T> implements MapFunction<T, Tuple2<T, Long>> {
+
+    @Override
+    public Tuple2<T, Long> map(T value) {
+        return new Tuple2<T, Long>(value, 1L);
+    }
+}
+{% endhighlight %}
+
+Flink cannot reliably figure out the data types of functions in all cases in Java.
+Some issues remain with generic lambdas (we are working with the Java community to solve this,
+see below) and with generic type variables that cannot be inferred.
+
+
+#### Type Hints in the Java API
+
+To help in cases where Flink cannot reconstruct the erased generic type information, the Java API
+offers so-called *type hints* from version 0.9 on. The type hints tell the system the type of
+the data set produced by a function. The following gives an example:
+
+{% highlight java %}
+DataSet<SomeType> result = dataSet
+    .map(new MyGenericNonInferrableFunction<Long, SomeType>())
+    .returns(SomeType.class);
+{% endhighlight %}
+
+The `returns` statement specifies the produced type, in this case via a class. The hints support
+type definition through the following (the remaining two variants are sketched after the list):
+
+* Classes, for non-parameterized types (no generics)
+* Strings in the form of `returns("Tuple2<Integer, my.SomeType>")`, which are parsed and converted
+  to a TypeInformation.
+* A TypeInformation directly
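+
+For illustration, the remaining two variants might look as follows (a sketch reusing the placeholder function and types from the example above):
+
+{% highlight java %}
+// type definition via a string that is parsed into a TypeInformation
+DataSet<Tuple2<Integer, SomeType>> result1 = dataSet
+    .map(new MyGenericNonInferrableFunction<Long, Tuple2<Integer, SomeType>>())
+    .returns("Tuple2<Integer, my.SomeType>");
+
+// type definition via a TypeInformation instance
+DataSet<SomeType> result2 = dataSet
+    .map(new MyGenericNonInferrableFunction<Long, SomeType>())
+    .returns(TypeInformation.of(SomeType.class));
+{% endhighlight %}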
+
+
+#### Type extraction for Java 8 lambdas
+
+Type extraction for Java 8 lambdas works differently than for non-lambdas, because lambdas are not associated
+with a class that implements the function interface.
+
+Currently, Flink tries to figure out which method implements the lambda and uses Java's generic signatures to
+determine the parameter types and the return type. However, these signatures are not generated for lambdas
+by all compilers (as of writing this document, only reliably by the Eclipse JDT compiler 4.5 from Milestone 2
+onwards).
+
+
+**Improving Type information for Java Lambdas**
+
+One of the Flink committers (Timo Walther) has become active in the Eclipse JDT compiler community and
+in the OpenJDK community and has submitted patches to the compiler to improve the availability of type
+information for Java 8 lambdas.
+
+The Eclipse JDT compiler has added support for this as of version 4.5 M4. Discussion about the feature in the
+OpenJDK compiler is pending.
+
+
+#### Serialization of POJO types
+
+The PojoTypeInfo creates serializers for all the fields inside the POJO. Standard types such as
+int, long, String etc. are handled by serializers shipped with Flink.
+For all other types, we fall back to Kryo.
+
+If Kryo is not able to handle the type, you can ask the PojoTypeInfo to serialize the POJO using Avro.
+To do so, you have to call
+
+{% highlight java %}
+final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+env.getConfig().enableForceAvro();
+{% endhighlight %}
+
+Note that Flink automatically serializes POJOs generated by Avro with the Avro serializer.
+
+If you want your **entire** POJO type to be treated by the Kryo serializer, set
+
+{% highlight java %}
+final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+env.getConfig().enableForceKryo();
+{% endhighlight %}
+
+If Kryo is not able to serialize your POJO, you can add a custom serializer to Kryo using
+{% highlight java %}
+env.getConfig().addDefaultKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass)
+{% endhighlight %}
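+
+For example, a registration might look like this (a sketch; `MyCustomType` and `MyCustomSerializer` are hypothetical placeholders, where `MyCustomSerializer` would extend `com.esotericsoftware.kryo.Serializer<MyCustomType>`):
+
+{% highlight java %}
+env.getConfig().addDefaultKryoSerializer(MyCustomType.class, MyCustomSerializer.class);
+{% endhighlight %}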
+
+There are different variants of these methods available.


[41/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/kinesis.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/kinesis.md b/docs/apis/streaming/connectors/kinesis.md
deleted file mode 100644
index 54a75db..0000000
--- a/docs/apis/streaming/connectors/kinesis.md
+++ /dev/null
@@ -1,322 +0,0 @@
----
-title: "Amazon AWS Kinesis Streams Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 5
-sub-nav-title: Amazon Kinesis Streams
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The Kinesis connector provides access to [Amazon AWS Kinesis Streams](http://aws.amazon.com/kinesis/streams/). 
-
-To use the connector, add the following Maven dependency to your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-kinesis{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-**The `flink-connector-kinesis{{ site.scala_version_suffix }}` has a dependency on code licensed under the [Amazon Software License](https://aws.amazon.com/asl/) (ASL).
-Linking to the flink-connector-kinesis will include ASL licensed code into your application.**
-
-The `flink-connector-kinesis{{ site.scala_version_suffix }}` artifact is not deployed to Maven central as part of
-Flink releases because of the licensing issue. Therefore, you need to build the connector yourself from the source.
-
-Download the Flink source or check it out from the git repository. Then, use the following Maven command to build the module:
-{% highlight bash %}
-mvn clean install -Pinclude-kinesis -DskipTests
-# In Maven 3.3 the shading of flink-dist doesn't work properly in one run, so we need to run mvn for flink-dist again. 
-cd flink-dist
-mvn clean install -Pinclude-kinesis -DskipTests
-{% endhighlight %}
-
-
-The streaming connectors are not part of the binary distribution. See how to link with them for cluster 
-execution [here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-### Using the Amazon Kinesis Streams Service
-Follow the instructions from the [Amazon Kinesis Streams Developer Guide](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html)
-to setup Kinesis streams. Make sure to create the appropriate IAM policy and user to read / write to the Kinesis streams.
-
-### Kinesis Consumer
-
-The `FlinkKinesisConsumer` is an exactly-once parallel streaming data source that subscribes to multiple AWS Kinesis
-streams within the same AWS service region, and can handle resharding of streams. Each subtask of the consumer is
-responsible for fetching data records from multiple Kinesis shards. The number of shards fetched by each subtask will
-change as shards are closed and created by Kinesis.
-
-Before consuming data from Kinesis streams, make sure that all streams are created with the status "ACTIVE" in the AWS dashboard.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Properties consumerConfig = new Properties();
-consumerConfig.put(ConsumerConfigConstants.AWS_REGION, "us-east-1");
-consumerConfig.put(ConsumerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
-consumerConfig.put(ConsumerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
-consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
-
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getEnvironment();
-
-DataStream<String> kinesis = env.addSource(new FlinkKinesisConsumer<>(
-    "kinesis_stream_name", new SimpleStringSchema(), consumerConfig));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val consumerConfig = new Properties();
-consumerConfig.put(ConsumerConfigConstants.AWS_REGION, "us-east-1");
-consumerConfig.put(ConsumerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
-consumerConfig.put(ConsumerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
-consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
-
-val env = StreamExecutionEnvironment.getEnvironment
-
-val kinesis = env.addSource(new FlinkKinesisConsumer[String](
-    "kinesis_stream_name", new SimpleStringSchema, consumerConfig))
-{% endhighlight %}
-</div>
-</div>
-
-The above is a simple example of using the consumer. Configuration for the consumer is supplied with a `java.util.Properties`
-instance, the configuration keys for which can be found in `ConsumerConfigConstants`. The example
-demonstrates consuming a single Kinesis stream in the AWS region "us-east-1". The AWS credentials are supplied using the basic method in which
-the AWS access key ID and secret access key are directly supplied in the configuration (other options are setting
-`ConsumerConfigConstants.AWS_CREDENTIALS_PROVIDER` to `ENV_VAR`, `SYS_PROP`, `PROFILE`, and `AUTO`). Also, data is being consumed
-from the newest position in the Kinesis stream (the other option will be setting `ConsumerConfigConstants.STREAM_INITIAL_POSITION`
-to `TRIM_HORIZON`, which lets the consumer start reading the Kinesis stream from the earliest record possible).
-
-Other optional configuration keys for the consumer can be found in `ConsumerConfigConstants`.
-
-#### Fault Tolerance for Exactly-Once User-Defined State Update Semantics
-
-With Flink's checkpointing enabled, the Flink Kinesis Consumer will consume records from shards in Kinesis streams and
-periodically checkpoint each shard's progress. In case of a job failure, Flink will restore the streaming program to the
-state of the latest complete checkpoint and re-consume the records from Kinesis shards, starting from the progress that
-was stored in the checkpoint.
-
-The interval of drawing checkpoints therefore defines how much the program may have to go back at most, in case of a failure.
-
-To use fault tolerant Kinesis Consumers, checkpointing of the topology needs to be enabled at the execution environment:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.enableCheckpointing(5000); // checkpoint every 5000 msecs
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment()
-env.enableCheckpointing(5000) // checkpoint every 5000 msecs
-{% endhighlight %}
-</div>
-</div>
-
-Also note that Flink can only restart the topology if enough processing slots are available to restart the topology.
-Therefore, if the topology fails due to loss of a TaskManager, there must still be enough slots available afterwards.
-Flink on YARN supports automatic restart of lost YARN containers.
-
-#### Event Time for Consumed Records
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment()
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
-{% endhighlight %}
-</div>
-</div>
-
-If streaming topologies choose to use the [event time notion]({{site.baseurl}}/apis/streaming/event_time.html) for record
-timestamps, an *approximate arrival timestamp* will be used by default. This timestamp is attached to records by Kinesis once they
-were successfully received and stored by streams. Note that this timestamp is typically referred to as a Kinesis server-side
-timestamp, and there are no guarantees about the accuracy or order correctness (i.e., the timestamps may not always be
-ascending).
-
-Users can choose to override this default with a custom timestamp, as described [here]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html),
-or use one from the [predefined ones]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html). After doing so,
-it can be passed to the consumer in the following way:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<String> kinesis = env.addSource(new FlinkKinesisConsumer<>(
-    "kinesis_stream_name", new SimpleStringSchema(), kinesisConsumerConfig));
-kinesis = kinesis.assignTimestampsAndWatermarks(new CustomTimestampAssigner());
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val kinesis = env.addSource(new FlinkKinesisConsumer[String](
-    "kinesis_stream_name", new SimpleStringSchema, kinesisConsumerConfig))
-kinesis = kinesis.assignTimestampsAndWatermarks(new CustomTimestampAssigner)
-{% endhighlight %}
-</div>
-</div>
-
-#### Threading Model
-
-The Flink Kinesis Consumer uses multiple threads for shard discovery and data consumption.
-
-For shard discovery, each parallel consumer subtask will have a single thread that constantly queries Kinesis for shard
-information even if the subtask initially did not have shards to read from when the consumer was started. In other words, if
-the consumer is run with a parallelism of 10, there will be a total of 10 threads constantly querying Kinesis regardless
-of the total amount of shards in the subscribed streams.
-
-For data consumption, a single thread will be created to consume each discovered shard. Threads will terminate when the
-shard it is responsible of consuming is closed as a result of stream resharding. In other words, there will always be
-one thread per open shard.
-
-#### Internally Used Kinesis APIs
-
-The Flink Kinesis Consumer uses the [AWS Java SDK](http://aws.amazon.com/sdk-for-java/) internally to call Kinesis APIs
-for shard discovery and data consumption. Due to Amazon's [service limits for Kinesis Streams](http://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html)
-on the APIs, the consumer will be competing with other non-Flink consuming applications that the user may be running.
-Below is a list of APIs called by the consumer with description of how the consumer uses the API, as well as information
-on how to deal with any errors or warnings that the Flink Kinesis Consumer may have due to these service limits.
-
-- *[DescribeStream](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_DescribeStream.html)*: this is constantly called
-by a single thread in each parallel consumer subtask to discover any new shards as a result of stream resharding. By default,
-the consumer performs the shard discovery at an interval of 10 seconds, and will retry indefinitely until it gets a result
-from Kinesis. If this interferes with other non-Flink consuming applications, users can slow down the consumer of
-calling this API by setting a value for `ConsumerConfigConstants.SHARD_DISCOVERY_INTERVAL_MILLIS` in the supplied
-configuration properties. This sets the discovery interval to a different value. Note that this setting directly impacts
-the maximum delay of discovering a new shard and starting to consume it, as shards will not be discovered during the interval.
-
-- *[GetShardIterator](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetShardIterator.html)*: this is called
-only once when per shard consuming threads are started, and will retry if Kinesis complains that the transaction limit for the
-API has exceeded, up to a default of 3 attempts. Note that since the rate limit for this API is per shard (not per stream),
-the consumer itself should not exceed the limit. Usually, if this happens, users can either try to slow down any other
-non-Flink consuming applications of calling this API, or modify the retry behaviour of this API call in the consumer by
-setting keys prefixed by `ConsumerConfigConstants.SHARD_GETITERATOR_*` in the supplied configuration properties.
-
-- *[GetRecords](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetRecords.html)*: this is constantly called
-by per shard consuming threads to fetch records from Kinesis. When a shard has multiple concurrent consumers (when there
-are any other non-Flink consuming applications running), the per shard rate limit may be exceeded. By default, on each call
-of this API, the consumer will retry if Kinesis complains that the data size / transaction limit for the API has exceeded,
-up to a default of 3 attempts. Users can either try to slow down other non-Flink consuming applications, or adjust the throughput
-of the consumer by setting the `ConsumerConfigConstants.SHARD_GETRECORDS_MAX` and
-`ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS` keys in the supplied configuration properties. Setting the former
-adjusts the maximum number of records each consuming thread tries to fetch from shards on each call (default is 100), while
-the latter modifies the sleep interval between each fetch (there will be no sleep by default). The retry behaviour of the
-consumer when calling this API can also be modified by using the other keys prefixed by `ConsumerConfigConstants.SHARD_GETRECORDS_*`.
-
-### Kinesis Producer
-
-The `FlinkKinesisProducer` is used for putting data from a Flink stream into a Kinesis stream. Note that the producer is not participating in
-Flink's checkpointing and doesn't provide exactly-once processing guarantees. 
-Also, the Kinesis producer does not guarantee that records are written in order to the shards (See [here](https://github.com/awslabs/amazon-kinesis-producer/issues/23) and [here](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html#API_PutRecord_RequestSyntax) for more details). 
-
-In case of a failure or a resharding, data will be written again to Kinesis, leading to duplicates. This behavior is usually called "at-least-once" semantics.
-
-To put data into a Kinesis stream, make sure the stream is marked as "ACTIVE" in the AWS dashboard.
-
-For the monitoring to work, the user accessing the stream needs access to the Cloud watch service.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Properties producerConfig = new Properties();
-producerConfig.put(ProducerConfigConstants.AWS_REGION, "us-east-1");
-producerConfig.put(ProducerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
-producerConfig.put(ProducerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
-
-FlinkKinesisProducer<String> kinesis = new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
-kinesis.setFailOnError(true);
-kinesis.setDefaultStream("kinesis_stream_name");
-kinesis.setDefaultPartition("0");
-
-DataStream<String> simpleStringStream = ...;
-simpleStringStream.addSink(kinesis);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val producerConfig = new Properties();
-producerConfig.put(ProducerConfigConstants.AWS_REGION, "us-east-1");
-producerConfig.put(ProducerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
-producerConfig.put(ProducerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
-
-val kinesis = new FlinkKinesisProducer[String](new SimpleStringSchema, producerConfig);
-kinesis.setFailOnError(true);
-kinesis.setDefaultStream("kinesis_stream_name");
-kinesis.setDefaultPartition("0");
-
-val simpleStringStream = ...;
-simpleStringStream.addSink(kinesis);
-{% endhighlight %}
-</div>
-</div>
-
-The above is a simple example of using the producer. Configuration for the producer with the mandatory configuration values is supplied with a `java.util.Properties`
-instance as described above for the consumer. The example demonstrates producing a single Kinesis stream in the AWS region "us-east-1".
-
-Instead of a `SerializationSchema`, it also supports a `KinesisSerializationSchema`. The `KinesisSerializationSchema` allows to send the data to multiple streams. This is 
-done using the `KinesisSerializationSchema.getTargetStream(T element)` method. Returning `null` there will instruct the producer to write the element to the default stream.
-Otherwise, the returned stream name is used.
-
-Other optional configuration keys for the producer can be found in `ProducerConfigConstants`.
-		
-		
-### Using Non-AWS Kinesis Endpoints for Testing
-
-It is sometimes desirable to have Flink operate as a consumer or producer against a non-AWS Kinesis endpoint such as
-[Kinesalite](https://github.com/mhart/kinesalite); this is especially useful when performing functional testing of a Flink
-application. The AWS endpoint that would normally be inferred by the AWS region set in the Flink configuration must be overridden via a configuration property.
-
-To override the AWS endpoint, taking the producer for example, set the `ProducerConfigConstants.AWS_ENDPOINT` property in the
-Flink configuration, in addition to the `ProducerConfigConstants.AWS_REGION` required by Flink. Although the region is
-required, it will not be used to determine the AWS endpoint URL.
-
-The following example shows how one might supply the `ProducerConfigConstants.AWS_ENDPOINT` configuration property:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Properties producerConfig = new Properties();
-producerConfig.put(ProducerConfigConstants.AWS_REGION, "us-east-1");
-producerConfig.put(ProducerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
-producerConfig.put(ProducerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
-producerConfig.put(ProducerConfigConstants.AWS_ENDPOINT, "http://localhost:4567");
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val producerConfig = new Properties();
-producerConfig.put(ProducerConfigConstants.AWS_REGION, "us-east-1");
-producerConfig.put(ProducerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
-producerConfig.put(ProducerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
-producerConfig.put(ProducerConfigConstants.AWS_ENDPOINT, "http://localhost:4567");
-{% endhighlight %}
-</div>
-</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/nifi.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/nifi.md b/docs/apis/streaming/connectors/nifi.md
deleted file mode 100644
index a47b8f0..0000000
--- a/docs/apis/streaming/connectors/nifi.md
+++ /dev/null
@@ -1,141 +0,0 @@
----
-title: "Apache NiFi Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 7
-sub-nav-title: Apache NiFi
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This connector provides a Source and Sink that can read from and write to 
-[Apache NiFi](https://nifi.apache.org/). To use this connector, add the
-following dependency to your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-nifi{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-Note that the streaming connectors are currently not part of the binary
-distribution. See
-[here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
-for information about how to package the program with the libraries for
-cluster execution.
-
-#### Installing Apache NiFi
-
-Instructions for setting up a Apache NiFi cluster can be found
-[here](https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#how-to-install-and-start-nifi).
-
-#### Apache NiFi Source
-
-The connector provides a Source for reading data from Apache NiFi to Apache Flink.
-
-The class `NiFiSource(…)` provides 2 constructors for reading data from NiFi.
-
- `NiFiSource(SiteToSiteConfig config)` - Constructs a `NiFiSource(…)` given the client's SiteToSiteConfig and a
-     default wait time of 1000 ms.
-      
- `NiFiSource(SiteToSiteConfig config, long waitTimeMs)` - Constructs a `NiFiSource(…)` given the client's
-     SiteToSiteConfig and the specified wait time (in milliseconds).
-     
-Example:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
-
-SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
-        .url("http://localhost:8080/nifi")
-        .portName("Data for Flink")
-        .requestBatchCount(5)
-        .buildConfig();
-
-SourceFunction<NiFiDataPacket> nifiSource = new NiFiSource(clientConfig);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment()
-
-val clientConfig: SiteToSiteClientConfig = new SiteToSiteClient.Builder()
-       .url("http://localhost:8080/nifi")
-       .portName("Data for Flink")
-       .requestBatchCount(5)
-       .buildConfig()
-       
-val nifiSource = new NiFiSource(clientConfig)       
-{% endhighlight %}       
-</div>
-</div>
-
-Here data is read from the Apache NiFi Output Port called "Data for Flink" which is part of Apache NiFi 
-Site-to-site protocol configuration.
- 
-#### Apache NiFi Sink
-
-The connector provides a Sink for writing data from Apache Flink to Apache NiFi.
-
-The class `NiFiSink(…)` provides a constructor for instantiating a `NiFiSink`.
-
- `NiFiSink(SiteToSiteClientConfig, NiFiDataPacketBuilder<T>)` constructs a `NiFiSink(…)` given the client's `SiteToSiteConfig` and a `NiFiDataPacketBuilder` that converts data from Flink to `NiFiDataPacket` to be ingested by NiFi.
-      
-Example:
-      
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
-
-SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
-        .url("http://localhost:8080/nifi")
-        .portName("Data from Flink")
-        .requestBatchCount(5)
-        .buildConfig();
-
-SinkFunction<NiFiDataPacket> nifiSink = new NiFiSink<>(clientConfig, new NiFiDataPacketBuilder<T>() {...});
-
-streamExecEnv.addSink(nifiSink);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment()
-
-val clientConfig: SiteToSiteClientConfig = new SiteToSiteClient.Builder()
-       .url("http://localhost:8080/nifi")
-       .portName("Data from Flink")
-       .requestBatchCount(5)
-       .buildConfig()
-       
-val nifiSink: NiFiSink[NiFiDataPacket] = new NiFiSink[NiFiDataPacket](clientConfig, new NiFiDataPacketBuilder<T>() {...})
-
-streamExecEnv.addSink(nifiSink)
-{% endhighlight %}       
-</div>
-</div>      
-
-More information about [Apache NiFi](https://nifi.apache.org) Site-to-Site Protocol can be found [here](https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#site-to-site)

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/rabbitmq.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/rabbitmq.md b/docs/apis/streaming/connectors/rabbitmq.md
deleted file mode 100644
index 0e186e0..0000000
--- a/docs/apis/streaming/connectors/rabbitmq.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-title: "RabbitMQ Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 4
-sub-nav-title: RabbitMQ
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This connector provides access to data streams from [RabbitMQ](http://www.rabbitmq.com/). To use this connector, add the following dependency to your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-rabbitmq{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-Note that the streaming connectors are currently not part of the binary distribution. See linking with them for cluster execution [here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-#### Installing RabbitMQ
-Follow the instructions from the [RabbitMQ download page](http://www.rabbitmq.com/download.html). After the installation the server automatically starts, and the application connecting to RabbitMQ can be launched.
-
-#### RabbitMQ Source
-
-A class which provides an interface for receiving data from RabbitMQ.
-
-The followings have to be provided for the `RMQSource(…)` constructor in order:
-
-- RMQConnectionConfig.
-- queueName: The RabbitMQ queue name.
-- usesCorrelationId: `true` when correlation ids should be used, `false` otherwise (default is `false`).
-- deserializationSchema: Deserialization schema to turn messages into Java objects.
-
-This source can be operated in three different modes:
-
-1. Exactly-once (when checkpointed) with RabbitMQ transactions and messages with
-    unique correlation IDs.
-2. At-least-once (when checkpointed) with RabbitMQ transactions but no deduplication mechanism
-    (correlation id is not set).
-3. No strong delivery guarantees (without checkpointing) with RabbitMQ auto-commit mode.
-
-Correlation ids are a RabbitMQ application feature. You have to set it in the message properties
-when injecting messages into RabbitMQ. If you set `usesCorrelationId` to true and do not supply
-unique correlation ids, the source will throw an exception (if the correlation id is null) or ignore
-messages with non-unique correlation ids. If you set `usesCorrelationId` to false, then you don't
-have to supply correlation ids.
-
-Example:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
-    .setHost("localhost").setPort(5000).setUserName(..)
-    .setPassword(..).setVirtualHost("/").build();
-
-DataStream<String> streamWithoutCorrelationIds = env
-    .addSource(new RMQSource<String>(connectionConfig, "hello", new SimpleStringSchema()));
-streamWithoutCorrelationIds.print();
-
-DataStream<String> streamWithCorrelationIds = env
-    .addSource(new RMQSource<String>(connectionConfig, "hello", true, new SimpleStringSchema()));
-streamWithCorrelationIds.print();
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val connectionConfig = new RMQConnectionConfig.Builder()
-    .setHost("localhost").setPort(5000).setUserName(..)
-    .setPassword(..).setVirtualHost("/").build()
-
-val streamWithoutCorrelationIds = env
-    .addSource(new RMQSource[String](connectionConfig, "hello", new SimpleStringSchema))
-streamWithoutCorrelationIds.print()
-
-val streamWithCorrelationIds = env
-    .addSource(new RMQSource[String](connectionConfig, "hello", true, new SimpleStringSchema))
-streamWithCorrelationIds.print()
-{% endhighlight %}
-</div>
-</div>
-
-#### RabbitMQ Sink
-The `RMQSink` class provides an interface for sending data to RabbitMQ.
-
-The following have to be provided to the `RMQSink(…)` constructor, in order:
-
-1. RMQConnectionConfig
-2. The queue name
-3. Serialization schema
-
-Example:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
-    .setHost("localhost").setPort(5000).setUserName(..)
-    .setPassword(..).setVirtualHost("/").build();
-stream.addSink(new RMQSink<String>(connectionConfig, "hello", new SimpleStringSchema()));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val connectionConfig = new RMQConnectionConfig.Builder()
-    .setHost("localhost").setPort(5000).setUserName(..)
-    .setPassword(..).setVirtualHost("/").build()
-stream.addSink(new RMQSink[String](connectionConfig, "hello", new SimpleStringSchema))
-{% endhighlight %}
-</div>
-</div>
-
-More about RabbitMQ can be found [here](http://www.rabbitmq.com/).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/redis.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/redis.md b/docs/apis/streaming/connectors/redis.md
deleted file mode 100644
index dfa5296..0000000
--- a/docs/apis/streaming/connectors/redis.md
+++ /dev/null
@@ -1,177 +0,0 @@
----
-title: "Redis Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 6
-sub-nav-title: Redis
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This connector provides a Sink that can write to
-[Redis](http://redis.io/) and also can publish data to [Redis PubSub](http://redis.io/topics/pubsub). To use this connector, add the
-following dependency to your project:
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-redis{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-Version Compatibility: This module is compatible with Redis 2.8.5.
-
-Note that the streaming connectors are currently not part of the binary distribution. You need to link them for cluster execution [explicitly]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-#### Installing Redis
-Follow the instructions from the [Redis download page](http://redis.io/download).
-
-#### Redis Sink
-The `RedisSink` class provides an interface for sending data to Redis.
-The sink can use three different methods for communicating with different types of Redis environments:
-
-1. Single Redis Server
-2. Redis Cluster
-3. Redis Sentinel
-
-This code shows how to create a sink that communicates with a single Redis server:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-public static class RedisExampleMapper implements RedisMapper<Tuple2<String, String>>{
-
-    @Override
-    public RedisCommandDescription getCommandDescription() {
-        return new RedisCommandDescription(RedisCommand.HSET, "HASH_NAME");
-    }
-
-    @Override
-    public String getKeyFromData(Tuple2<String, String> data) {
-        return data.f0;
-    }
-
-    @Override
-    public String getValueFromData(Tuple2<String, String> data) {
-        return data.f1;
-    }
-}
-FlinkJedisPoolConfig conf = new FlinkJedisPoolConfig.Builder().setHost("127.0.0.1").build();
-
-DataStream<Tuple2<String, String>> stream = ...;
-stream.addSink(new RedisSink<Tuple2<String, String>>(conf, new RedisExampleMapper()));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-class RedisExampleMapper extends RedisMapper[(String, String)]{
-  override def getCommandDescription: RedisCommandDescription = {
-    new RedisCommandDescription(RedisCommand.HSET, "HASH_NAME")
-  }
-
-  override def getKeyFromData(data: (String, String)): String = data._1
-
-  override def getValueFromData(data: (String, String)): String = data._2
-}
-val conf = new FlinkJedisPoolConfig.Builder().setHost("127.0.0.1").build()
-stream.addSink(new RedisSink[(String, String)](conf, new RedisExampleMapper))
-{% endhighlight %}
-</div>
-</div>
-
-This example code does the same, but for Redis Cluster:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-FlinkJedisClusterConfig conf = new FlinkJedisClusterConfig.Builder()
-    .setNodes(new HashSet<InetSocketAddress>(Arrays.asList(new InetSocketAddress(5601)))).build();
-
-DataStream<Tuple2<String, String>> stream = ...;
-stream.addSink(new RedisSink<Tuple2<String, String>>(conf, new RedisExampleMapper()));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val conf = new FlinkJedisClusterConfig.Builder().setNodes(...).build()
-stream.addSink(new RedisSink[(String, String)](conf, new RedisExampleMapper))
-{% endhighlight %}
-</div>
-</div>
-
-This example shows how to configure the sink when the Redis environment uses Sentinels:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-FlinkJedisSentinelConfig conf = new FlinkJedisSentinelConfig.Builder()
-    .setMasterName("master").setSentinels(...).build();
-
-DataStream<Tuple2<String, String>> stream = ...;
-stream.addSink(new RedisSink<Tuple2<String, String>>(conf, new RedisExampleMapper()));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val conf = new FlinkJedisSentinelConfig.Builder().setMasterName("master").setSentinels(...).build()
-stream.addSink(new RedisSink[(String, String)](conf, new RedisExampleMapper))
-{% endhighlight %}
-</div>
-</div>
-
-This section describes all the available data types and the Redis command used for each.
-
-<table class="table table-bordered" style="width: 75%">
-    <thead>
-        <tr>
-          <th class="text-center" style="width: 20%">Data Type</th>
-          <th class="text-center" style="width: 25%">Redis Command [Sink]</th>
-          <th class="text-center" style="width: 25%">Redis Command [Source]</th>
-        </tr>
-      </thead>
-      <tbody>
-        <tr>
-            <td>HASH</td><td><a href="http://redis.io/commands/hset">HSET</a></td><td>--NA--</td>
-        </tr>
-        <tr>
-            <td>LIST</td><td>
-                <a href="http://redis.io/commands/rpush">RPUSH</a>, 
-                <a href="http://redis.io/commands/lpush">LPUSH</a>
-            </td><td>--NA--</td>
-        </tr>
-        <tr>
-            <td>SET</td><td><a href="http://redis.io/commands/sadd">SADD</a></td><td>--NA--</td>
-        </tr>
-        <tr>
-            <td>PUBSUB</td><td><a href="http://redis.io/commands/publish">PUBLISH</a></td><td>--NA--</td>
-        </tr>
-        <tr>
-            <td>STRING</td><td><a href="http://redis.io/commands/set">SET</a></td><td>--NA--</td>
-        </tr>
-        <tr>
-            <td>HYPER_LOG_LOG</td><td><a href="http://redis.io/commands/pfadd">PFADD</a></td><td>--NA--</td>
-        </tr>
-        <tr>
-            <td>SORTED_SET</td><td><a href="http://redis.io/commands/zadd">ZADD</a></td><td>--NA--</td>
-        </tr>                
-      </tbody>
-</table>
-More about Redis can be found [here](http://redis.io/).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/twitter.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/twitter.md b/docs/apis/streaming/connectors/twitter.md
deleted file mode 100644
index 9e84481..0000000
--- a/docs/apis/streaming/connectors/twitter.md
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: "Twitter Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 5
-sub-nav-title: Twitter
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The Twitter Streaming API provides access to the stream of tweets made available by Twitter. 
-Flink Streaming comes with a built-in `TwitterSource` class for establishing a connection to this stream. 
-To use this connector, add the following dependency to your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-twitter{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-Note that the streaming connectors are currently not part of the binary distribution.
-See how to link with them for cluster execution [here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-#### Authentication
-In order to connect to the Twitter stream, the user has to register their program and acquire the necessary authentication information. The process is described below.
-
-#### Acquiring the authentication information
-First of all, a Twitter account is needed. Sign up for free at [twitter.com/signup](https://twitter.com/signup) 
-or sign in at Twitter's [Application Management](https://apps.twitter.com/) and register the application by 
-clicking on the "Create New App" button. Fill out a form about your program and accept the Terms and Conditions.
-After selecting the application, the API key and API secret (called `twitter-source.consumerKey` and `twitter-source.consumerSecret` in `TwitterSource` respectively) are located on the "API Keys" tab. 
-The necessary OAuth Access Token data (`twitter-source.token` and `twitter-source.tokenSecret` in `TwitterSource`) can be generated and acquired on the "Keys and Access Tokens" tab.
-Remember to keep these pieces of information secret and do not push them to public repositories.
-
-#### Usage
-In contrast to other connectors, the `TwitterSource` does not depend on any additional services. For example, the following code should run gracefully:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Properties props = new Properties();
-props.setProperty(TwitterSource.CONSUMER_KEY, "");
-props.setProperty(TwitterSource.CONSUMER_SECRET, "");
-props.setProperty(TwitterSource.TOKEN, "");
-props.setProperty(TwitterSource.TOKEN_SECRET, "");
-DataStream<String> streamSource = env.addSource(new TwitterSource(props));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val props = new Properties()
-props.setProperty(TwitterSource.CONSUMER_KEY, "")
-props.setProperty(TwitterSource.CONSUMER_SECRET, "")
-props.setProperty(TwitterSource.TOKEN, "")
-props.setProperty(TwitterSource.TOKEN_SECRET, "")
-val streamSource: DataStream[String] = env.addSource(new TwitterSource(props))
-{% endhighlight %}
-</div>
-</div>
-
-The `TwitterSource` emits strings containing a JSON object, representing a Tweet.
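-
-Since the emitted strings are raw JSON, a typical first step is to parse them. The following is a minimal sketch, assuming Jackson's `ObjectMapper` is on the classpath; it forwards only the tweet text (messages without a `text` field are skipped):
-
-{% highlight java %}
-DataStream<String> tweets = env.addSource(new TwitterSource(props));
-
-DataStream<String> tweetTexts = tweets.flatMap(new FlatMapFunction<String, String>() {
-    private transient ObjectMapper mapper;
-
-    @Override
-    public void flatMap(String value, Collector<String> out) throws Exception {
-        if (mapper == null) {
-            mapper = new ObjectMapper();
-        }
-        JsonNode node = mapper.readTree(value);
-        // not every message from the API is a tweet; only forward those with a "text" field
-        if (node.has("text")) {
-            out.collect(node.get("text").asText());
-        }
-    }
-});
-{% endhighlight %}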
-
-The `TwitterExample` class in the `flink-examples-streaming` package shows a full example of how to use the `TwitterSource`.
-
-By default, the `TwitterSource` uses the `StatusesSampleEndpoint`. This endpoint returns a random sample of Tweets.
-There is a `TwitterSource.EndpointInitializer` interface allowing users to provide a custom endpoint.
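-
-A minimal sketch of such a custom endpoint is shown below. It assumes the `setCustomEndpointInitializer(...)` setter on `TwitterSource` and the hosebird client's `StatusesFilterEndpoint`; the tracked term `flink` is an illustrative assumption:
-
-{% highlight java %}
-// Hypothetical initializer; it must be serializable so that Flink can ship it to the workers.
-public static class FilterEndpointInitializer implements TwitterSource.EndpointInitializer, Serializable {
-    @Override
-    public StreamingEndpoint createEndpoint() {
-        // track only tweets that contain the term "flink"
-        StatusesFilterEndpoint endpoint = new StatusesFilterEndpoint();
-        endpoint.trackTerms(Collections.singletonList("flink"));
-        return endpoint;
-    }
-}
-
-TwitterSource source = new TwitterSource(props);
-source.setCustomEndpointInitializer(new FilterEndpointInitializer());
-DataStream<String> streamSource = env.addSource(source);
-{% endhighlight %}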
-

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/event_time.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/event_time.md b/docs/apis/streaming/event_time.md
deleted file mode 100644
index 7f94d68..0000000
--- a/docs/apis/streaming/event_time.md
+++ /dev/null
@@ -1,208 +0,0 @@
----
-title: "Event Time"
-
-sub-nav-id: eventtime
-sub-nav-group: streaming
-sub-nav-pos: 3
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* toc
-{:toc}
-
-# Event Time / Processing Time / Ingestion Time
-
-Flink supports different notions of *time* in streaming programs.
-
-- **Processing time:** Processing time refers to the system time of the machine that is executing the
-    respective operation.
-
-    When a streaming program runs on processing time, all time-based operations (like time windows) will
-    use the system clock of the machines that run the respective operator. For example, an hourly
-    processing time window will include all records that arrived at a specific operator between the
-    times when the system clock indicated the full hour.
-
-    Processing time is the simplest notion of time and requires no coordination between streams and machines.
-    It provides the best performance and the lowest latency. However, in distributed and asynchronous
-    environments processing time does not provide determinism, because it is susceptible to the speed at which
-    records arrive in the system (for example from the message queue), and to the speed at which the
-    records flow between operators inside the system.
-
-- **Event time:** Event time is the time that each individual event occurred on its producing device.
-    This time is typically embedded within the records before they enter Flink and that *event timestamp*
-    can be extracted from the record. An hourly event time window will contain all records that carry an
-    event timestamp that falls into that hour, regardless of when the records arrive, and in what order
-    they arrive.
-
-    Event time gives correct results even on out-of-order events, late events, or on replays
-    of data from backups or persistent logs. In event time, the progress of time depends on the data,
-    not on any wall clocks. Event time programs must specify how to generate *Event Time Watermarks*,
-    which is the mechanism that signals time progress in event time. The mechanism is
-    described below.
-
-    Event time processing often incurs a certain latency, because it has to wait a certain time for
-    late and out-of-order events. Because of that, event time programs are often combined with
-    *processing time* operations.
-
-- **Ingestion time:** Ingestion time is the time that events enter Flink. At the source operator, each
-    record gets the source's current time as a timestamp, and time-based operations (like time windows)
-    refer to that timestamp.
-
-    *Ingestion Time* sits conceptually in between *Event Time* and *Processing Time*. Compared to
-    *Processing Time*, it is slightly more expensive, but gives more predictable results: Because
-    *Ingestion Time* uses stable timestamps (assigned once at the source), different window operations
-    over the records will refer to the same timestamp, whereas in *Processing Time* each window operator
-    may assign the record to a different window (based on the local system clock and any transport delay).
-
-    Compared to *Event Time*, *Ingestion Time* programs cannot handle any out-of-order events or late data,
-    but the programs don't have to specify how to generate *Watermarks*.
-
-    Internally, *Ingestion Time* is treated much like event time, with automatic timestamp assignment and
-    automatic Watermark generation.
-
-<img src="fig/times_clocks.svg" class="center" width="80%" />
-
-
-### Setting a Time Characteristic
-
-The first part of a Flink DataStream program is usually to set the base *time characteristic*. That setting
-defines how data stream sources behave (for example whether to assign timestamps), and what notion of
-time the window operations like `KeyedStream.timeWindow(Time.seconds(30))` refer to.
-
-The following example shows a Flink program that aggregates events in hourly time windows. The behavior of the
-windows adapts with the time characteristic.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
-
-// alternatively:
-// env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
-// env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
-
-DataStream<MyEvent> stream = env.addSource(new FlinkKafkaConsumer09<MyEvent>(topic, schema, props));
-
-stream
-    .keyBy( (event) -> event.getUser() )
-    .timeWindow(Time.hours(1))
-    .reduce( (a, b) -> a.add(b) )
-    .addSink(...);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-
-env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)
-
-// alternatively:
-// env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime)
-// env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
-
-val stream: DataStream[MyEvent] = env.addSource(new FlinkKafkaConsumer09[MyEvent](topic, schema, props))
-
-stream
-    .keyBy( _.getUser )
-    .timeWindow(Time.hours(1))
-    .reduce( (a, b) => a.add(b) )
-    .addSink(...)
-{% endhighlight %}
-</div>
-</div>
-
-
-Note that in order to run this example in *Event Time*, the program needs to either use an event time
-source, or inject a *Timestamp Assigner & Watermark Generator*. Those functions describe how to access
-the event timestamps, and what degree of out-of-orderness the event stream exhibits.
-
-The section below describes the general mechanism behind *Timestamps* and *Watermarks*. For a guide on how
-to use timestamp assignment and watermark generation in the Flink DataStream API, please refer to
-[Generating Timestamps / Watermarks]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html).
-
-
-# Event Time and Watermarks
-
-*Note: Flink implements many techniques from the Dataflow Model. For a good introduction to event time and watermarks, have a look at the articles below.*
-
-  - [Streaming 101](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101) by Tyler Akidau
-  - The [Dataflow Model paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43864.pdf)
-
-
-A stream processor that supports *event time* needs a way to measure the progress of event time. 
-For example, a window operator that builds hourly windows needs to be notified when event time has reached the
-next full hour, such that the operator can close the next window.
-
-*Event Time* can progress independently of *Processing Time* (measured by wall clocks).
-For example, in one program, the current *event time* of an operator can trail slightly behind the processing time
-(accounting for a delay in receiving the latest elements) and both proceed at the same speed. In another streaming
-program, which reads fast-forward through some data already buffered in a Kafka topic (or another message queue), event time
-can progress by weeks in seconds.
-
-------
-
-The mechanism in Flink to measure progress in event time is **Watermarks**.
-Watermarks flow as part of the data stream and carry a timestamp *t*. A *Watermark(t)* declares that event time has reached time
-*t* in that stream, meaning that all events with a timestamp *t' < t* have occurred.
-
-The figure below shows a stream of events with (logical) timestamps, and watermarks flowing inline. The events are in order
-(with respect to their timestamp), meaning that watermarks are simply periodic markers in the stream with an in-order timestamp.
-
-<img src="fig/stream_watermark_in_order.svg" alt="A data stream with events (in order) and watermarks" class="center" width="65%" />
-
-Watermarks are crucial for *out-of-order* streams, as shown in the figure below, where events do not occur ordered by their timestamps.
-Watermarks establish points in the stream where all events up to a certain timestamp have occurred. Once these watermarks reach an
-operator, the operator can advance its internal *event time clock* to the value of the watermark.
-
-<img src="fig/stream_watermark_out_of_order.svg" alt="A data stream with events (out of order) and watermarks" class="center" width="65%" />
-
-
-## Watermarks in Parallel Streams
-
-Watermarks are generated at source functions, or directly after source functions. Each parallel subtask of a source function usually
-generates its watermarks independently. These watermarks define the event time at that particular parallel source.
-
-As the watermarks flow through the streaming program, they advance the event time at the operators where they arrive. Whenever an
-operator advances its event time, it generates a new watermark downstream for its successor operators.
-
-Operators that consume multiple input streams (e.g., after a *keyBy(...)* or *partition(...)* function, or a union) track the event time
-on each of their input streams. The operator's current event time is the minimum of the input streams' event time. As the input streams
-update their event time, so does the operator.
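-
-As a conceptual sketch (this is not Flink API, just the rule stated above applied to a two-input operator):
-
-{% highlight java %}
-// Conceptual only: an operator's event-time clock is the minimum
-// of the watermarks it has received on its inputs.
-long operatorEventTime(long watermarkOfInput1, long watermarkOfInput2) {
-    return Math.min(watermarkOfInput1, watermarkOfInput2);
-}
-{% endhighlight %}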
-
-The figure below shows an example of events and watermarks flowing through parallel streams, and operators tracking event time.
-
-<img src="fig/parallel_streams_watermarks.svg" alt="Parallel data streams and operators with events and watermarks" class="center" width="80%" />
-
-
-## Late Elements
-
-It is possible that certain elements violate the watermark condition, meaning that even after the *Watermark(t)* has occurred,
-more elements with timestamp *t' < t* will occur. In fact, in many real-world setups, certain elements can be arbitrarily
-delayed, making it impossible to define a time by which all elements of a certain event timestamp will have occurred.
-Furthermore, even if the lateness can be bounded, delaying the watermarks by too much is often not desirable, because it delays
-the evaluation of the event time windows by too much.
-
-Due to that, some streaming programs will explicitly expect a number of *late* elements. Late elements are elements that
-arrive after the system's event time clock (as signaled by the watermarks) has already passed the time of the late element's
-timestamp.
-
-

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/event_timestamp_extractors.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/event_timestamp_extractors.md b/docs/apis/streaming/event_timestamp_extractors.md
deleted file mode 100644
index 83a90d2..0000000
--- a/docs/apis/streaming/event_timestamp_extractors.md
+++ /dev/null
@@ -1,108 +0,0 @@
----
-title: "Pre-defined Timestamp Extractors / Watermark Emitters"
-
-sub-nav-group: streaming
-sub-nav-pos: 2
-sub-nav-parent: eventtime
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* toc
-{:toc}
-
-As described in [timestamps and watermark handling]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html),
-Flink provides abstractions that allow the programmer to assign their own timestamps and emit their own watermarks. More specifically, 
-one can do so by implementing either the `AssignerWithPeriodicWatermarks` or the `AssignerWithPunctuatedWatermarks` interface, depending
-on the use case. In a nutshell, the first emits watermarks periodically, while the second does so based on some property of
-the incoming records, e.g. whenever a special element is encountered in the stream.
-
-In order to further ease the programming effort for such tasks, Flink comes with some pre-implemented timestamp assigners. 
-This section provides a list of them. Apart from their out-of-the-box functionality, their implementation can serve as an example 
-for custom assigner implementations.
-
-#### **Assigner with Ascending Timestamps**
-
-The simplest special case for *periodic* watermark generation is the case where timestamps seen by a given source task 
-occur in ascending order. In that case, the current timestamp can always act as a watermark, because no earlier timestamps will 
-arrive.
-
-Note that it is only necessary that timestamps are ascending *per parallel data source task*. For example, if
-in a specific setup one Kafka partition is read by one parallel data source instance, then it is only necessary that
-timestamps are ascending within each Kafka partition. Flink's Watermark merging mechanism will generate correct
-watermarks whenever parallel streams are shuffled, unioned, connected, or merged.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<MyEvent> stream = ...
-
-DataStream<MyEvent> withTimestampsAndWatermarks = 
-    stream.assignTimestampsAndWatermarks(new AscendingTimestampExtractor<MyEvent>() {
-
-        @Override
-        public long extractAscendingTimestamp(MyEvent element) {
-            return element.getCreationTime();
-        }
-});
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val stream: DataStream[MyEvent] = ...
-
-val withTimestampsAndWatermarks = stream.assignAscendingTimestamps( _.getCreationTime )
-{% endhighlight %}
-</div>
-</div>
-
-#### **Assigner which allows a fixed amount of record lateness**
-
-Another example of periodic watermark generation is when the watermark lags behind the maximum (event-time) timestamp
-seen in the stream by a fixed amount of time. This case covers scenarios where the maximum lateness that can be encountered in a 
-stream is known in advance, e.g. when creating a custom source containing elements with timestamps spread within a fixed period of 
-time for testing. For these cases, Flink provides the `BoundedOutOfOrdernessTimestampExtractor` which takes as an argument 
-the `maxOutOfOrderness`, i.e. the maximum amount of time an element is allowed to be late before being ignored when computing the 
-final result for the given window. Lateness corresponds to the result of `t_w - t`, where `t` is the (event-time) timestamp of an
-element, and `t_w` that of the previous watermark. If `lateness > 0`, the element is considered late and is ignored when computing
-the result of the job for its corresponding window. For example, with a watermark at `10:00:10`, an element carrying the timestamp `10:00:03` has a lateness of 7 seconds and is dropped.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<MyEvent> stream = ...
-
-DataStream<MyEvent> withTimestampsAndWatermarks = 
-    stream.assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
-
-        @Override
-        public long extractTimestamp(MyEvent element) {
-            return element.getCreationTime();
-        }
-});
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val stream: DataStream[MyEvent] = ...
-
-val withTimestampsAndWatermarks = stream.assignTimestampsAndWatermarks(
-    new BoundedOutOfOrdernessTimestampExtractor[MyEvent](Time.seconds(10)) {
-        override def extractTimestamp(element: MyEvent): Long = element.getCreationTime
-    })
-{% endhighlight %}
-</div>
-</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/event_timestamps_watermarks.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/event_timestamps_watermarks.md b/docs/apis/streaming/event_timestamps_watermarks.md
deleted file mode 100644
index 05c9f51..0000000
--- a/docs/apis/streaming/event_timestamps_watermarks.md
+++ /dev/null
@@ -1,332 +0,0 @@
----
-title: "Generating Timestamps / Watermarks"
-
-sub-nav-group: streaming
-sub-nav-pos: 1
-sub-nav-parent: eventtime
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* toc
-{:toc}
-
-
-This section is relevant for programs running on **Event Time**. For an introduction to *Event Time*,
-*Processing Time*, and *Ingestion Time*, please refer to the [event time introduction]({{ site.baseurl }}/apis/streaming/event_time.html).
-
-To work with *Event Time*, streaming programs need to set the *time characteristic* accordingly.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
-{% endhighlight %}
-</div>
-</div>
-
-## Assigning Timestamps
-
-In order to work with *Event Time*, Flink needs to know the events' *timestamps*, meaning each element in the
-stream needs to get its event timestamp *assigned*. That happens usually by accessing/extracting the
-timestamp from some field in the element.
-
-Timestamp assignment goes hand-in-hand with generating watermarks, which tell the system about
-the progress in event time.
-
-There are two ways to assign timestamps and generate Watermarks:
-
-  1. Directly in the data stream source
-  2. Via a timestamp assigner / watermark generator: in Flink timestamp assigners also define the watermarks to be emitted
-
-<span class="label label-danger">Attention</span> Both timestamps and watermarks are specified as
-milliseconds since the Java epoch of 1970-01-01T00:00:00Z.
-
-### Source Functions with Timestamps and Watermarks
-
-Stream sources can also directly assign timestamps to the elements they produce and emit Watermarks. In that case,
-no Timestamp Assigner is needed.
-
-To assign a timestamp to an element in the source directly, the source must use the `collectWithTimestamp(...)`
-method on the `SourceContext`. To generate Watermarks, the source must call the `emitWatermark(Watermark)` function.
-
-Below is a simple example of a source *(non-checkpointed)* that assigns timestamps and generates Watermarks
-depending on special events:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-@Override
-public void run(SourceContext<MyType> ctx) throws Exception {
-	while (/* condition */) {
-		MyType next = getNext();
-		ctx.collectWithTimestamp(next, next.getEventTimestamp());
-
-		if (next.hasWatermarkTime()) {
-			ctx.emitWatermark(new Watermark(next.getWatermarkTime()));
-		}
-	}
-}
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-override def run(ctx: SourceContext[MyType]): Unit = {
-	while (/* condition */) {
-		val next: MyType = getNext()
-		ctx.collectWithTimestamp(next, next.eventTimestamp)
-
-		if (next.hasWatermarkTime) {
-			ctx.emitWatermark(new Watermark(next.getWatermarkTime))
-		}
-	}
-}
-{% endhighlight %}
-</div>
-</div>
-
-*Note:* If the streaming program uses a TimestampAssigner on a stream where elements have a timestamp already,
-those timestamps will be overwritten by the TimestampAssigner. Similarly, Watermarks will be overwritten as well.
-
-
-### Timestamp Assigners / Watermark Generators
-
-Timestamp Assigners take a stream and produce a new stream with timestamped elements and watermarks. If the
-original stream had timestamps and/or watermarks already, the timestamp assigner overwrites them.
-
-Timestamp assigners are usually specified immediately after the data source, but this is not strictly required.
-A common pattern is, for example, to parse (*MapFunction*) and filter (*FilterFunction*) before the timestamp assigner.
-In any case, the timestamp assigner needs to be specified before the first operation on event time
-(such as the first window operation). As a special case, when using Kafka as the source of a streaming job,
-Flink allows the specification of a timestamp assigner / watermark emitter inside
-the source (or consumer) itself. More information on how to do so can be found in the
-[Kafka Connector documentation]({{ site.baseurl }}/apis/streaming/connectors/kafka.html).
-
-
-**NOTE:** The remainder of this section presents the main interfaces a programmer has
-to implement in order to create their own timestamp extractors/watermark emitters.
-To see the pre-implemented extractors that ship with Flink, please refer to the
-[Pre-defined Timestamp Extractors / Watermark Emitters]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html) page.
-The following example shows where a timestamp assigner / watermark generator typically fits into a program:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
-
-DataStream<MyEvent> stream = env.readFile(
-        myFormat, myFilePath, FileProcessingMode.PROCESS_CONTINUOUSLY, 100,
-        FilePathFilter.createDefaultFilter(), typeInfo);
-
-DataStream<MyEvent> withTimestampsAndWatermarks = stream
-        .filter( event -> event.severity() == WARNING )
-        .assignTimestampsAndWatermarks(new MyTimestampsAndWatermarks());
-
-withTimestampsAndWatermarks
-        .keyBy( (event) -> event.getGroup() )
-        .timeWindow(Time.seconds(10))
-        .reduce( (a, b) -> a.add(b) )
-        .addSink(...);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
-
-val stream: DataStream[MyEvent] = env.readFile(
-         myFormat, myFilePath, FileProcessingMode.PROCESS_CONTINUOUSLY, 100,
-         FilePathFilter.createDefaultFilter());
-
-val withTimestampsAndWatermarks: DataStream[MyEvent] = stream
-        .filter( _.severity == WARNING )
-        .assignTimestampsAndWatermarks(new MyTimestampsAndWatermarks())
-
-withTimestampsAndWatermarks
-        .keyBy( _.getGroup )
-        .timeWindow(Time.seconds(10))
-        .reduce( (a, b) => a.add(b) )
-        .addSink(...)
-{% endhighlight %}
-</div>
-</div>
-
-
-#### **With Periodic Watermarks**
-
-The `AssignerWithPeriodicWatermarks` assigns timestamps and generates watermarks periodically (possibly depending
-on the stream elements, or purely based on processing time).
-
-The interval (every *n* milliseconds) in which the watermark will be generated is defined via
-`ExecutionConfig.setAutoWatermarkInterval(...)`. Each time, the assigner's `getCurrentWatermark()` method is
-called, and a new watermark is emitted if the returned watermark is non-null and larger than the previous
-one.
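-
-For example, to have the assigners queried for a new watermark every second (the interval value here is purely illustrative):
-
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
-
-// generate watermarks every 1000 ms (the value is in milliseconds)
-env.getConfig().setAutoWatermarkInterval(1000);
-{% endhighlight %}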
-
-Two simple examples of timestamp assigners with periodic watermark generation are below.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-/**
- * This generator generates watermarks assuming that elements come out of order to a certain degree only.
- * The latest elements for a certain timestamp t will arrive at most n milliseconds after the earliest
- * elements for timestamp t.
- */
-public class BoundedOutOfOrdernessGenerator extends AssignerWithPeriodicWatermarks<MyEvent> {
-
-    private final long maxOutOfOrderness = 3500; // 3.5 seconds
-
-    private long currentMaxTimestamp;
-
-    @Override
-    public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
-        long timestamp = element.getCreationTime();
-        currentMaxTimestamp = Math.max(timestamp, currentMaxTimestamp);
-        return timestamp;
-    }
-
-    @Override
-    public Watermark getCurrentWatermark() {
-        // return the watermark as current highest timestamp minus the out-of-orderness bound
-        return new Watermark(currentMaxTimestamp - maxOutOfOrderness);
-    }
-}
-
-/**
- * This generator generates watermarks that are lagging behind processing time by a certain amount.
- * It assumes that elements arrive in Flink after at most a certain time.
- */
-public class TimeLagWatermarkGenerator extends AssignerWithPeriodicWatermarks<MyEvent> {
-
-	private final long maxTimeLag = 5000; // 5 seconds
-
-	@Override
-	public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
-		return element.getCreationTime();
-	}
-
-	@Override
-	public Watermark getCurrentWatermark() {
-		// return the watermark as current time minus the maximum time lag
-		return new Watermark(System.currentTimeMillis() - maxTimeLag);
-	}
-}
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-/**
- * This generator generates watermarks assuming that elements come out of order to a certain degree only.
- * The latest elements for a certain timestamp t will arrive at most n milliseconds after the earliest
- * elements for timestamp t.
- */
-class BoundedOutOfOrdernessGenerator extends AssignerWithPeriodicWatermarks[MyEvent] {
-
-    val maxOutOfOrderness = 3500L // 3.5 seconds
-
-    var currentMaxTimestamp: Long = _
-
-    override def extractTimestamp(element: MyEvent, previousElementTimestamp: Long): Long = {
-        val timestamp = element.getCreationTime()
-        currentMaxTimestamp = Math.max(timestamp, currentMaxTimestamp)
-        timestamp
-    }
-
-    override def getCurrentWatermark(): Watermark = {
-        // return the watermark as current highest timestamp minus the out-of-orderness bound
-        new Watermark(currentMaxTimestamp - maxOutOfOrderness)
-    }
-}
-
-/**
- * This generator generates watermarks that are lagging behind processing time by a certain amount.
- * It assumes that elements arrive in Flink after at most a certain time.
- */
-class TimeLagWatermarkGenerator extends AssignerWithPeriodicWatermarks[MyEvent] {
-
-    val maxTimeLag = 5000L // 5 seconds
-
-    override def extractTimestamp(element: MyEvent, previousElementTimestamp: Long): Long = {
-        element.getCreationTime
-    }
-
-    override def getCurrentWatermark(): Watermark = {
-        // return the watermark as current time minus the maximum time lag
-        new Watermark(System.currentTimeMillis() - maxTimeLag)
-    }
-}
-{% endhighlight %}
-</div>
-</div>
-
-#### **With Punctuated Watermarks**
-
-To generate Watermarks whenever a certain event indicates that a new watermark can be generated, use the
-`AssignerWithPunctuatedWatermarks`. For this class, Flink will first call the `extractTimestamp(...)` method
-to assign the element a timestamp, and then immediately call the
-`checkAndGetNextWatermark(...)` method for that element.
-
-The `checkAndGetNextWatermark(...)` method gets the timestamp that was assigned in the `extractTimestamp(...)`
-method, and can decide whether it wants to generate a Watermark. Whenever the `checkAndGetNextWatermark(...)`
-method returns a non-null Watermark, and that Watermark is larger than the latest previous Watermark, that
-new Watermark will be emitted.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-public class PunctuatedAssigner extends AssignerWithPunctuatedWatermarks<MyEvent> {
-
-	@Override
-	public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
-		return element.getCreationTime();
-	}
-
-	@Override
-	public Watermark checkAndGetNextWatermark(MyEvent lastElement, long extractedTimestamp) {
-		return lastElement.hasWatermarkMarker() ? new Watermark(extractedTimestamp) : null;
-	}
-}
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-class PunctuatedAssigner extends AssignerWithPunctuatedWatermarks[MyEvent] {
-
-	override def extractTimestamp(element: MyEvent, previousElementTimestamp: Long): Long = {
-		element.getCreationTime
-	}
-
-	override def checkAndGetNextWatermark(lastElement: MyEvent, extractedTimestamp: Long): Watermark = {
-		if (lastElement.hasWatermarkMarker()) new Watermark(extractedTimestamp) else null
-	}
-}
-{% endhighlight %}
-</div>
-</div>
-
-*Note:* It is possible to generate a watermark on every single event. However, because each watermark causes some
-computation downstream, an excessive number of watermarks slows down performance.
-

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fault_tolerance.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fault_tolerance.md b/docs/apis/streaming/fault_tolerance.md
deleted file mode 100644
index 99221e5..0000000
--- a/docs/apis/streaming/fault_tolerance.md
+++ /dev/null
@@ -1,462 +0,0 @@
----
-title: "Fault Tolerance"
-is_beta: false
-
-sub-nav-group: streaming
-sub-nav-id: fault_tolerance
-sub-nav-pos: 5
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Flink's fault tolerance mechanism recovers programs in the presence of failures and
-continues to execute them. Such failures include machine hardware failures, network failures,
-transient program failures, etc.
-
-* This will be replaced by the TOC
-{:toc}
-
-
-Streaming Fault Tolerance
--------------------------
-
-Flink has a checkpointing mechanism that recovers streaming jobs after failures. The checkpointing mechanism requires a *persistent* (or *durable*) source that
-can be asked for prior records again (Apache Kafka is a good example of such a source).
-
-The checkpointing mechanism stores the progress in the data sources and data sinks, the state of windows, as well as the user-defined state (see [Working with State](state.html)) consistently to provide *exactly once* processing semantics. Where the checkpoints are stored (e.g., JobManager memory, file system, database) depends on the configured [state backend](state_backends.html).
-
-The [docs on streaming fault tolerance]({{ site.baseurl }}/internals/stream_checkpointing.html) describe in detail the technique behind Flink's streaming fault tolerance mechanism.
-
-To enable checkpointing, call `enableCheckpointing(n)` on the `StreamExecutionEnvironment`, where *n* is the checkpoint interval in milliseconds.
-
-Other parameters for checkpointing include:
-
-- *Number of retries*: The `setNumberOfExecutionRetries()` method defines how many times the job is restarted after a failure.
-  When checkpointing is activated, but this value is not explicitly set, the job is restarted infinitely often.
-
-- *exactly-once vs. at-least-once*: You can optionally pass a mode to the `enableCheckpointing(n)` method to choose between the two guarantee levels.
-  Exactly-once is preferable for most applications. At-least-once may be relevant for certain super-low-latency (consistently few milliseconds) applications.
-
-- *number of concurrent checkpoints*: By default, the system will not trigger another checkpoint while one is still in progress. This ensures that the topology does not spend too much of its time on checkpoints at the expense of processing the streams. It is possible to allow for multiple overlapping checkpoints, which is interesting for pipelines that have a certain processing delay (for example because the functions call external services that need some time to respond) but that still want to do very frequent checkpoints (100s of milliseconds) to re-process very little upon failures.
-
-- *checkpoint timeout*: The time after which a checkpoint-in-progress is aborted, if it did not complete by then.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-// start a checkpoint every 1000 ms
-env.enableCheckpointing(1000);
-
-// advanced options:
-
-// set mode to exactly-once (this is the default)
-env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
-
-// checkpoints have to complete within one minute, or are discarded
-env.getCheckpointConfig().setCheckpointTimeout(60000);
-
-// allow only one checkpoint to be in progress at the same time
-env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment()
-
-// start a checkpoint every 1000 ms
-env.enableCheckpointing(1000)
-
-// advanced options:
-
-// set mode to exactly-once (this is the default)
-env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)
-
-// checkpoints have to complete within one minute, or are discarded
-env.getCheckpointConfig.setCheckpointTimeout(60000)
-
-// allow only one checkpoint to be in progress at the same time
-env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-### Fault Tolerance Guarantees of Data Sources and Sinks
-
-Flink can guarantee exactly-once state updates to user-defined state only when the source participates in the
-snapshotting mechanism. The following table lists the state update guarantees of Flink coupled with the bundled connectors.
-
-Please read the documentation of each connector to understand the details of the fault tolerance guarantees.
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 25%">Source</th>
-      <th class="text-left" style="width: 25%">Guarantees</th>
-      <th class="text-left">Notes</th>
-    </tr>
-   </thead>
-   <tbody>
-        <tr>
-            <td>Apache Kafka</td>
-            <td>exactly once</td>
-            <td>Use the appropriate Kafka connector for your version</td>
-        </tr>
-        <tr>
-            <td>AWS Kinesis Streams</td>
-            <td>exactly once</td>
-            <td></td>
-        </tr>
-        <tr>
-            <td>RabbitMQ</td>
-            <td>at most once (v 0.10) / exactly once (v 1.0) </td>
-            <td></td>
-        </tr>
-        <tr>
-            <td>Twitter Streaming API</td>
-            <td>at most once</td>
-            <td></td>
-        </tr>
-        <tr>
-            <td>Collections</td>
-            <td>exactly once</td>
-            <td></td>
-        </tr>
-        <tr>
-            <td>Files</td>
-            <td>exactly once</td>
-            <td></td>
-        </tr>
-        <tr>
-            <td>Sockets</td>
-            <td>at most once</td>
-            <td></td>
-        </tr>
-  </tbody>
-</table>
-
-To guarantee end-to-end exactly-once record delivery (in addition to exactly-once state semantics), the data sink needs
-to take part in the checkpointing mechanism. The following table lists the delivery guarantees (assuming exactly-once
-state updates) of Flink coupled with bundled sinks:
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 25%">Sink</th>
-      <th class="text-left" style="width: 25%">Guarantees</th>
-      <th class="text-left">Notes</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-        <td>HDFS rolling sink</td>
-        <td>exactly once</td>
-        <td>Implementation depends on Hadoop version</td>
-    </tr>
-    <tr>
-        <td>Elasticsearch</td>
-        <td>at least once</td>
-        <td></td>
-    </tr>
-    <tr>
-        <td>Kafka producer</td>
-        <td>at least once</td>
-        <td></td>
-    </tr>
-    <tr>
-        <td>Cassandra sink</td>
-        <td>at least once / exactly once</td>
-        <td>exactly once only for idempotent updates</td>
-    </tr>
-    <tr>
-        <td>AWS Kinesis Streams</td>
-        <td>at least once</td>
-        <td></td>
-    </tr>
-    <tr>
-        <td>File sinks</td>
-        <td>at least once</td>
-        <td></td>
-    </tr>
-    <tr>
-        <td>Socket sinks</td>
-        <td>at least once</td>
-        <td></td>
-    </tr>
-    <tr>
-        <td>Standard output</td>
-        <td>at least once</td>
-        <td></td>
-    </tr>
-    <tr>
-        <td>Redis sink</td>
-        <td>at least once</td>
-        <td></td>
-    </tr>
-  </tbody>
-</table>
-
-{% top %}
-
-## Restart Strategies
-
-Flink supports different restart strategies which control how the jobs are restarted in case of a failure.
-The cluster can be started with a default restart strategy which is always used when no job specific restart strategy has been defined.
-If the job is submitted with a restart strategy, this strategy overrides the cluster's default setting.
- 
-The default restart strategy is set via Flink's configuration file `flink-conf.yaml`.
-The configuration parameter *restart-strategy* defines which strategy is taken.
-By default, the no-restart strategy is used.
-See the following list of available restart strategies to learn what values are supported.
-
-Each restart strategy comes with its own set of parameters which control its behaviour.
-These values are also set in the configuration file.
-The description of each restart strategy contains more information about the respective configuration values.
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 50%">Restart Strategy</th>
-      <th class="text-left">Value for restart-strategy</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-        <td>Fixed delay</td>
-        <td>fixed-delay</td>
-    </tr>
-    <tr>
-        <td>Failure rate</td>
-        <td>failure-rate</td>
-    </tr>
-    <tr>
-        <td>No restart</td>
-        <td>none</td>
-    </tr>
-  </tbody>
-</table>
-
-Apart from defining a default restart strategy, it is possible to define for each Flink job a specific restart strategy.
-This restart strategy is set programmatically by calling the `setRestartStrategy` method on the `ExecutionEnvironment`.
-Note that this also works for the `StreamExecutionEnvironment`.
-
-The following example shows how we can set a fixed delay restart strategy for our job.
-In case of a failure the system tries to restart the job 3 times and waits 10 seconds in-between successive restart attempts.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
-  3, // number of restart attempts 
-  Time.of(10, TimeUnit.SECONDS) // delay
-));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment()
-env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
-  3, // number of restart attempts 
-  Time.of(10, TimeUnit.SECONDS) // delay
-))
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-### Fixed Delay Restart Strategy
-
-The fixed delay restart strategy attempts a given number of times to restart the job.
-If the maximum number of attempts is exceeded, the job eventually fails.
-In-between two consecutive restart attempts, the restart strategy waits a fixed amount of time.
-
-This strategy can be enabled as the default by setting the following configuration parameter in `flink-conf.yaml`.
-
-~~~
-restart-strategy: fixed-delay
-~~~
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 40%">Configuration Parameter</th>
-      <th class="text-left" style="width: 40%">Description</th>
-      <th class="text-left">Default Value</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-        <td><i>restart-strategy.fixed-delay.attempts</i></td>
-        <td>Number of restart attempts</td>
-        <td>1</td>
-    </tr>
-    <tr>
-        <td><i>restart-strategy.fixed-delay.delay</i></td>
-        <td>Delay between two consecutive restart attempts</td>
-        <td><i>akka.ask.timeout</i></td>
-    </tr>
-  </tbody>
-</table>
-
-For example:
-
-~~~
-restart-strategy.fixed-delay.attempts: 3
-restart-strategy.fixed-delay.delay: 10 s
-~~~
-
-The fixed delay restart strategy can also be set programmatically:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
-  3, // number of restart attempts 
-  Time.of(10, TimeUnit.SECONDS) // delay
-));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment()
-env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
-  3, // number of restart attempts 
-  Time.of(10, TimeUnit.SECONDS) // delay
-))
-{% endhighlight %}
-</div>
-</div>
-
-#### Restart Attempts
-
-The number of times that Flink retries the execution before the job is declared as failed is configurable via the *restart-strategy.fixed-delay.attempts* parameter.
-
-The default value is **1**.
-
-#### Retry Delays
-
-Execution retries can be configured to be delayed. Delaying the retry means that after a failed execution, the re-execution does not start immediately, but only after a certain delay.
-
-Delaying the retries can be helpful when the program interacts with external systems where for example connections or pending transactions should reach a timeout before re-execution is attempted.
-
-The default value is the value of *akka.ask.timeout*.
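-
-For example, to give pending connections or transactions in external systems time to reach their timeout before a retry is attempted (the 30 s value below is made up for illustration):
-
-~~~
-restart-strategy.fixed-delay.delay: 30 s
-~~~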
-
-{% top %}
-
-### Failure Rate Restart Strategy
-
-The failure rate restart strategy restarts the job after a failure, but when the failure rate (failures per time interval) is exceeded, the job eventually fails.
-In-between two consecutive restart attempts, the restart strategy waits a fixed amount of time.
-
-This strategy is enabled as the default by setting the following configuration parameter in `flink-conf.yaml`.
-
-~~~
-restart-strategy: failure-rate
-~~~
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 40%">Configuration Parameter</th>
-      <th class="text-left" style="width: 40%">Description</th>
-      <th class="text-left">Default Value</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-        <td><i>restart-strategy.failure-rate.max-failures-per-interval</i></td>
-        <td>Maximum number of restarts in the given time interval before failing a job</td>
-        <td>1</td>
-    </tr>
-    <tr>
-        <td><i>restart-strategy.failure-rate.failure-rate-interval</i></td>
-        <td>Time interval for measuring the failure rate</td>
-        <td>1 minute</td>
-    </tr>
-    <tr>
-        <td><i>restart-strategy.failure-rate.delay</i></td>
-        <td>Delay between two consecutive restart attempts</td>
-        <td><i>akka.ask.timeout</i></td>
-    </tr>
-  </tbody>
-</table>
-
-For example:
-
-~~~
-restart-strategy.failure-rate.max-failures-per-interval: 3
-restart-strategy.failure-rate.failure-rate-interval: 5 min
-restart-strategy.failure-rate.delay: 10 s
-~~~
-
-The failure rate restart strategy can also be set programmatically:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-env.setRestartStrategy(RestartStrategies.failureRateRestart(
-  3, // max failures per interval
-  Time.of(5, TimeUnit.MINUTES), //time interval for measuring failure rate
-  Time.of(10, TimeUnit.SECONDS) // delay
-));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment()
-env.setRestartStrategy(RestartStrategies.failureRateRestart(
-  3, // max failures per interval
-  Time.of(5, TimeUnit.MINUTES), //time interval for measuring failure rate
-  Time.of(10, TimeUnit.SECONDS) // delay
-))
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-### No Restart Strategy
-
-The job fails directly and no restart is attempted.
-
-This strategy is enabled as the default by setting the following configuration parameter in `flink-conf.yaml`.
-
-~~~
-restart-strategy: none
-~~~
-
-The no restart strategy can also be set programmatically:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-env.setRestartStrategy(RestartStrategies.noRestart());
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment()
-env.setRestartStrategy(RestartStrategies.noRestart())
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}


[38/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/index.md b/docs/apis/streaming/index.md
deleted file mode 100644
index f6fbbd5..0000000
--- a/docs/apis/streaming/index.md
+++ /dev/null
@@ -1,1787 +0,0 @@
----
-title: "Flink DataStream API Programming Guide"
-
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 2
-top-nav-title: <strong>Streaming Guide</strong> (DataStream API)
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-group-title: Streaming Guide
-sub-nav-pos: 1
-sub-nav-title: DataStream API
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-DataStream programs in Flink are regular programs that implement transformations on data streams
-(e.g., filtering, updating state, defining windows, aggregating). The data streams are initially created from various
-sources (e.g., message queues, socket streams, files). Results are returned via sinks, which may for
-example write the data to files, or to standard output (for example the command line
-terminal). Flink programs run in a variety of contexts: standalone, or embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
-
-Please see [basic concepts]({{ site.baseurl }}/apis/common/index.html) for an introduction
-to the basic concepts of the Flink API.
-
-In order to create your own Flink DataStream program, we encourage you to start with
-[anatomy of a Flink Program]({{ site.baseurl }}/apis/common/index.html#anatomy-of-a-flink-program)
-and gradually add your own
-[transformations](#datastream-transformations). The remaining sections act as references for additional
-operations and advanced features.
-
-
-* This will be replaced by the TOC
-{:toc}
-
-
-Example Program
----------------
-
-The following program is a complete, working example of a streaming windowed word count application that counts the
-words coming from a web socket in 5-second windows. You can copy &amp; paste the code to run it locally.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-{% highlight java %}
-import org.apache.flink.api.common.functions.FlatMapFunction;
-import org.apache.flink.api.java.tuple.Tuple2;
-import org.apache.flink.streaming.api.datastream.DataStream;
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-import org.apache.flink.streaming.api.windowing.time.Time;
-import org.apache.flink.util.Collector;
-
-public class WindowWordCount {
-
-    public static void main(String[] args) throws Exception {
-
-        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-        DataStream<Tuple2<String, Integer>> dataStream = env
-                .socketTextStream("localhost", 9999)
-                .flatMap(new Splitter())
-                .keyBy(0)
-                .timeWindow(Time.seconds(5))
-                .sum(1);
-
-        dataStream.print();
-
-        env.execute("Window WordCount");
-    }
-
-    public static class Splitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
-        @Override
-        public void flatMap(String sentence, Collector<Tuple2<String, Integer>> out) throws Exception {
-            for (String word: sentence.split(" ")) {
-                out.collect(new Tuple2<String, Integer>(word, 1));
-            }
-        }
-    }
-
-}
-{% endhighlight %}
-
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-
-import org.apache.flink.streaming.api.scala._
-import org.apache.flink.streaming.api.windowing.time.Time
-
-object WindowWordCount {
-  def main(args: Array[String]) {
-
-    val env = StreamExecutionEnvironment.getExecutionEnvironment
-    val text = env.socketTextStream("localhost", 9999)
-
-    val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
-      .map { (_, 1) }
-      .keyBy(0)
-      .timeWindow(Time.seconds(5))
-      .sum(1)
-
-    counts.print
-
-    env.execute("Window Stream WordCount")
-  }
-}
-{% endhighlight %}
-</div>
-
-</div>
-
-To run the example program, start the input stream with netcat first from a terminal:
-
-~~~bash
-nc -lk 9999
-~~~
-
-Just type some words, hitting return for a new word. These will be the input to the
-word count program. If you want to see counts greater than 1, type the same word repeatedly within
-5 seconds (increase the window size if you cannot type that fast &#9786;).
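-
-With the example running, the printed output might look roughly like the following (the exact counts, ordering, and any task prefixes depend on what you type and on the configured parallelism):
-
-~~~
-(hello,1)
-(world,1)
-(hello,2)
-~~~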
-
-{% top %}
-
-DataStream Transformations
---------------------------
-
-Data transformations transform one or more DataStreams into a new DataStream. Programs can combine
-multiple transformations into sophisticated topologies.
-
-This section gives a description of all the available transformations.
-
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 25%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-          <td><strong>Map</strong><br>DataStream &rarr; DataStream</td>
-          <td>
-            <p>Takes one element and produces one element. A map function that doubles the values of the input stream:</p>
-    {% highlight java %}
-DataStream<Integer> dataStream = //...
-dataStream.map(new MapFunction<Integer, Integer>() {
-    @Override
-    public Integer map(Integer value) throws Exception {
-        return 2 * value;
-    }
-});
-    {% endhighlight %}
-          </td>
-        </tr>
-
-        <tr>
-          <td><strong>FlatMap</strong><br>DataStream &rarr; DataStream</td>
-          <td>
-            <p>Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words:</p>
-    {% highlight java %}
-dataStream.flatMap(new FlatMapFunction<String, String>() {
-    @Override
-    public void flatMap(String value, Collector<String> out)
-        throws Exception {
-        for(String word: value.split(" ")){
-            out.collect(word);
-        }
-    }
-});
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Filter</strong><br>DataStream &rarr; DataStream</td>
-          <td>
-            <p>Evaluates a boolean function for each element and retains those for which the function returns true.
-            A filter that filters out zero values:
-            </p>
-    {% highlight java %}
-dataStream.filter(new FilterFunction<Integer>() {
-    @Override
-    public boolean filter(Integer value) throws Exception {
-        return value != 0;
-    }
-});
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>KeyBy</strong><br>DataStream &rarr; KeyedStream</td>
-          <td>
-            <p>Logically partitions a stream into disjoint partitions, each partition containing elements of the same key.
-            Internally, this is implemented with hash partitioning. See <a href="#specifying-keys">keys</a> on how to specify keys.
-            This transformation returns a KeyedStream.</p>
-    {% highlight java %}
-dataStream.keyBy("someKey") // Key by field "someKey"
-dataStream.keyBy(0) // Key by the first element of a Tuple
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Reduce</strong><br>KeyedStream &rarr; DataStream</td>
-          <td>
-            <p>A "rolling" reduce on a keyed data stream. Combines the current element with the last reduced value and
-            emits the new value.
-                    <br/>
-            	<br/>
-            A reduce function that creates a stream of partial sums:</p>
-            {% highlight java %}
-keyedStream.reduce(new ReduceFunction<Integer>() {
-    @Override
-    public Integer reduce(Integer value1, Integer value2)
-    throws Exception {
-        return value1 + value2;
-    }
-});
-            {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Fold</strong><br>KeyedStream &rarr; DataStream</td>
-          <td>
-          <p>A "rolling" fold on a keyed data stream with an initial value.
-          Combines the current element with the last folded value and
-          emits the new value.</p>
-          <p>A fold function that, when applied on the sequence (1,2,3,4,5),
-          emits the sequence "start-1", "start-1-2", "start-1-2-3", ...</p>
-          {% highlight java %}
-DataStream<String> result =
-  keyedStream.fold("start", new FoldFunction<Integer, String>() {
-    @Override
-    public String fold(String current, Integer value) {
-        return current + "-" + value;
-    }
-  });
-          {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Aggregations</strong><br>KeyedStream &rarr; DataStream</td>
-          <td>
-            <p>Rolling aggregations on a keyed data stream. The difference between min
-	    and minBy is that min returns the minimum value, whereas minBy returns
-	    the element that has the minimum value in this field (same for max and maxBy).</p>
-    {% highlight java %}
-keyedStream.sum(0);
-keyedStream.sum("key");
-keyedStream.min(0);
-keyedStream.min("key");
-keyedStream.max(0);
-keyedStream.max("key");
-keyedStream.minBy(0);
-keyedStream.minBy("key");
-keyedStream.maxBy(0);
-keyedStream.maxBy("key");
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window</strong><br>KeyedStream &rarr; WindowedStream</td>
-          <td>
-            <p>Windows can be defined on already partitioned KeyedStreams. Windows group the data in each
-            key according to some characteristic (e.g., the data that arrived within the last 5 seconds).
-            See <a href="windows.html">windows</a> for a complete description of windows.
-    {% highlight java %}
-dataStream.keyBy(0).window(TumblingEventTimeWindows.of(Time.seconds(5))); // Last 5 seconds of data
-    {% endhighlight %}
-        </p>
-          </td>
-        </tr>
-        <tr>
-          <td><strong>WindowAll</strong><br>DataStream &rarr; AllWindowedStream</td>
-          <td>
-              <p>Windows can be defined on regular DataStreams. Windows group all the stream events
-              according to some characteristic (e.g., the data that arrived within the last 5 seconds).
-              See <a href="windows.html">windows</a> for a complete description of windows.</p>
-              <p><strong>WARNING:</strong> This is in many cases a <strong>non-parallel</strong> transformation. All records will be
-               gathered in one task for the windowAll operator.</p>
-  {% highlight java %}
-dataStream.windowAll(TumblingEventTimeWindows.of(Time.seconds(5))); // Last 5 seconds of data
-  {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window Apply</strong><br>WindowedStream &rarr; DataStream<br>AllWindowedStream &rarr; DataStream</td>
-          <td>
-            <p>Applies a general function to the window as a whole. Below is a function that manually sums the elements of a window.</p>
-            <p><strong>Note:</strong> If you are using a windowAll transformation, you need to use an AllWindowFunction instead.</p>
-    {% highlight java %}
-windowedStream.apply(new WindowFunction<Tuple2<String,Integer>, Integer, Tuple, Window>() {
-    public void apply(Tuple tuple,
-            Window window,
-            Iterable<Tuple2<String, Integer>> values,
-            Collector<Integer> out) throws Exception {
-        int sum = 0;
-        for (Tuple2<String, Integer> t: values) {
-            sum += t.f1;
-        }
-        out.collect(sum);
-    }
-});
-
-// applying an AllWindowFunction on non-keyed window stream
-allWindowedStream.apply(new AllWindowFunction<Tuple2<String,Integer>, Integer, Window>() {
-    public void apply(Window window,
-            Iterable<Tuple2<String, Integer>> values,
-            Collector<Integer> out) throws Exception {
-        int sum = 0;
-        for (Tuple2<String, Integer> t: values) {
-            sum += t.f1;
-        }
-        out.collect(sum);
-    }
-});
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window Reduce</strong><br>WindowedStream &rarr; DataStream</td>
-          <td>
-            <p>Applies a functional reduce function to the window and returns the reduced value.</p>
-    {% highlight java %}
-windowedStream.reduce(new ReduceFunction<Tuple2<String,Integer>>() {
-    public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) throws Exception {
-        return new Tuple2<String,Integer>(value1.f0, value1.f1 + value2.f1);
-    }
-});
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window Fold</strong><br>WindowedStream &rarr; DataStream</td>
-          <td>
-            <p>Applies a functional fold function to the window and returns the folded value.
-               The example function, when applied on the sequence (1,2,3,4,5),
-               folds the sequence into the string "start-1-2-3-4-5":</p>
-    {% highlight java %}
-windowedStream.fold("start-", new FoldFunction<Integer, String>() {
-    public String fold(String current, Integer value) {
-        return current + "-" + value;
-    }
-};
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Aggregations on windows</strong><br>WindowedStream &rarr; DataStream</td>
-          <td>
-            <p>Aggregates the contents of a window. The difference between min
-	    and minBy is that min returns the minimum value, whereas minBy returns
-	    the element that has the minimum value in this field (same for max and maxBy).</p>
-    {% highlight java %}
-windowedStream.sum(0);
-windowedStream.sum("key");
-windowedStream.min(0);
-windowedStream.min("key");
-windowedStream.max(0);
-windowedStream.max("key");
-windowedStream.minBy(0);
-windowedStream.minBy("key");
-windowedStream.maxBy(0);
-windowedStream.maxBy("key");
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Union</strong><br>DataStream* &rarr; DataStream</td>
-          <td>
-            <p>Union of two or more data streams creating a new stream containing all the elements from all the streams. Note: If you union a data stream
-            with itself you will get each element twice in the resulting stream.</p>
-    {% highlight java %}
-dataStream.union(otherStream1, otherStream2, ...);
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window Join</strong><br>DataStream,DataStream &rarr; DataStream</td>
-          <td>
-            <p>Join two data streams on a given key and a common window.</p>
-    {% highlight java %}
-dataStream.join(otherStream)
-    .where(0).equalTo(1)
-    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
-    .apply (new JoinFunction () {...});
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window CoGroup</strong><br>DataStream,DataStream &rarr; DataStream</td>
-          <td>
-            <p>Cogroups two data streams on a given key and a common window.</p>
-    {% highlight java %}
-dataStream.coGroup(otherStream)
-    .where(0).equalTo(1)
-    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
-    .apply (new CoGroupFunction () {...});
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Connect</strong><br>DataStream,DataStream &rarr; ConnectedStreams</td>
-          <td>
-            <p>"Connects" two data streams retaining their types. Connect allowing for shared state between
-            the two streams.</p>
-    {% highlight java %}
-DataStream<Integer> someStream = //...
-DataStream<String> otherStream = //...
-
-ConnectedStreams<Integer, String> connectedStreams = someStream.connect(otherStream);
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>CoMap, CoFlatMap</strong><br>ConnectedStreams &rarr; DataStream</td>
-          <td>
-            <p>Similar to map and flatMap on a connected data stream</p>
-    {% highlight java %}
-connectedStreams.map(new CoMapFunction<Integer, String, Boolean>() {
-    @Override
-    public Boolean map1(Integer value) {
-        return true;
-    }
-
-    @Override
-    public Boolean map2(String value) {
-        return false;
-    }
-});
-connectedStreams.flatMap(new CoFlatMapFunction<Integer, String, String>() {
-
-   @Override
-   public void flatMap1(Integer value, Collector<String> out) {
-       out.collect(value.toString());
-   }
-
-   @Override
-   public void flatMap2(String value, Collector<String> out) {
-       for (String word: value.split(" ")) {
-         out.collect(word);
-       }
-   }
-});
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Split</strong><br>DataStream &rarr; SplitStream</td>
-          <td>
-            <p>
-                Split the stream into two or more streams according to some criterion.
-                {% highlight java %}
-SplitStream<Integer> split = someDataStream.split(new OutputSelector<Integer>() {
-    @Override
-    public Iterable<String> select(Integer value) {
-        List<String> output = new ArrayList<String>();
-        if (value % 2 == 0) {
-            output.add("even");
-        }
-        else {
-            output.add("odd");
-        }
-        return output;
-    }
-});
-                {% endhighlight %}
-            </p>
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Select</strong><br>SplitStream &rarr; DataStream</td>
-          <td>
-            <p>
-                Select one or more streams from a split stream.
-                {% highlight java %}
-SplitStream<Integer> split;
-DataStream<Integer> even = split.select("even");
-DataStream<Integer> odd = split.select("odd");
-DataStream<Integer> all = split.select("even","odd");
-                {% endhighlight %}
-            </p>
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Iterate</strong><br>DataStream &rarr; IterativeStream &rarr; DataStream</td>
-          <td>
-            <p>
-                Creates a "feedback" loop in the flow, by redirecting the output of one operator
-                to some previous operator. This is especially useful for defining algorithms that
-                continuously update a model. The following code starts with a stream and applies
-		the iteration body continuously. Elements that are greater than 0 are sent back
-		to the feedback channel, and the rest of the elements are forwarded downstream.
-		See <a href="#iterations">iterations</a> for a complete description.
-                {% highlight java %}
-IterativeStream<Long> iteration = initialStream.iterate();
-DataStream<Long> iterationBody = iteration.map (/*do something*/);
-DataStream<Long> feedback = iterationBody.filter(new FilterFunction<Long>(){
-    @Override
-    public boolean filter(Long value) throws Exception {
-        return value > 0;
-    }
-});
-iteration.closeWith(feedback);
-DataStream<Long> output = iterationBody.filter(new FilterFunction<Long>(){
-    @Override
-    public boolean filter(Long value) throws Exception {
-        return value <= 0;
-    }
-});
-                {% endhighlight %}
-            </p>
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Extract Timestamps</strong><br>DataStream &rarr; DataStream</td>
-          <td>
-            <p>
-                Extracts timestamps from records in order to work with windows
-                that use event time semantics. See <a href="{{ site.baseurl }}/apis/streaming/event_time.html">Event Time</a>.
-                {% highlight java %}
-stream.assignTimestamps (new TimeStampExtractor() {...});
-                {% endhighlight %}
-            </p>
-          </td>
-        </tr>
-  </tbody>
-</table>
-
-</div>
-
-<div data-lang="scala" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 25%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-          <td><strong>Map</strong><br>DataStream &rarr; DataStream</td>
-          <td>
-            <p>Takes one element and produces one element. A map function that doubles the values of the input stream:</p>
-    {% highlight scala %}
-dataStream.map { x => x * 2 }
-    {% endhighlight %}
-          </td>
-        </tr>
-
-        <tr>
-          <td><strong>FlatMap</strong><br>DataStream &rarr; DataStream</td>
-          <td>
-            <p>Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words:</p>
-    {% highlight scala %}
-dataStream.flatMap { str => str.split(" ") }
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Filter</strong><br>DataStream &rarr; DataStream</td>
-          <td>
-            <p>Evaluates a boolean function for each element and retains those for which the function returns true.
-            A filter that filters out zero values:
-            </p>
-    {% highlight scala %}
-dataStream.filter { _ != 0 }
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>KeyBy</strong><br>DataStream &rarr; KeyedStream</td>
-          <td>
-            <p>Logically partitions a stream into disjoint partitions, each partition containing elements of the same key.
-            Internally, this is implemented with hash partitioning. See <a href="#specifying-keys">keys</a> on how to specify keys.
-            This transformation returns a KeyedStream.</p>
-    {% highlight scala %}
-dataStream.keyBy("someKey") // Key by field "someKey"
-dataStream.keyBy(0) // Key by the first element of a Tuple
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Reduce</strong><br>KeyedStream &rarr; DataStream</td>
-          <td>
-            <p>A "rolling" reduce on a keyed data stream. Combines the current element with the last reduced value and
-            emits the new value.
-                    <br/>
-            	<br/>
-            A reduce function that creates a stream of partial sums:</p>
-            {% highlight scala %}
-keyedStream.reduce { _ + _ }
-            {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Fold</strong><br>KeyedStream &rarr; DataStream</td>
-          <td>
-          <p>A "rolling" fold on a keyed data stream with an initial value.
-          Combines the current element with the last folded value and
-          emits the new value.</p>
-          <p>A fold function that, when applied on the sequence (1,2,3,4,5),
-          emits the sequence "start-1", "start-1-2", "start-1-2-3", ...</p>
-          {% highlight scala %}
-val result: DataStream[String] =
-    keyedStream.fold("start")((str, i) => { str + "-" + i })
-          {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Aggregations</strong><br>KeyedStream &rarr; DataStream</td>
-          <td>
-            <p>Rolling aggregations on a keyed data stream. The difference between min
-	    and minBy is that min returns the minimum value, whereas minBy returns
-	    the element that has the minimum value in this field (same for max and maxBy).</p>
-    {% highlight scala %}
-keyedStream.sum(0)
-keyedStream.sum("key")
-keyedStream.min(0)
-keyedStream.min("key")
-keyedStream.max(0)
-keyedStream.max("key")
-keyedStream.minBy(0)
-keyedStream.minBy("key")
-keyedStream.maxBy(0)
-keyedStream.maxBy("key")
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window</strong><br>KeyedStream &rarr; WindowedStream</td>
-          <td>
-            <p>Windows can be defined on already partitioned KeyedStreams. Windows group the data in each
-            key according to some characteristic (e.g., the data that arrived within the last 5 seconds).
-            See <a href="windows.html">windows</a> for a description of windows.
-    {% highlight scala %}
-dataStream.keyBy(0).window(TumblingEventTimeWindows.of(Time.seconds(5))) // Last 5 seconds of data
-    {% endhighlight %}
-        </p>
-          </td>
-        </tr>
-        <tr>
-          <td><strong>WindowAll</strong><br>DataStream &rarr; AllWindowedStream</td>
-          <td>
-              <p>Windows can be defined on regular DataStreams. Windows group all the stream events
-              according to some characteristic (e.g., the data that arrived within the last 5 seconds).
-              See <a href="windows.html">windows</a> for a complete description of windows.</p>
-              <p><strong>WARNING:</strong> This is in many cases a <strong>non-parallel</strong> transformation. All records will be
-               gathered in one task for the windowAll operator.</p>
-  {% highlight scala %}
-dataStream.windowAll(TumblingEventTimeWindows.of(Time.seconds(5))) // Last 5 seconds of data
-  {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window Apply</strong><br>WindowedStream &rarr; DataStream<br>AllWindowedStream &rarr; DataStream</td>
-          <td>
-            <p>Applies a general function to the window as a whole. Below is a function that manually sums the elements of a window.</p>
-            <p><strong>Note:</strong> If you are using a windowAll transformation, you need to use an AllWindowFunction instead.</p>
-    {% highlight scala %}
-windowedStream.apply { WindowFunction }
-
-// applying an AllWindowFunction on non-keyed window stream
-allWindowedStream.apply { AllWindowFunction }
-
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window Reduce</strong><br>WindowedStream &rarr; DataStream</td>
-          <td>
-            <p>Applies a functional reduce function to the window and returns the reduced value.</p>
-    {% highlight scala %}
-windowedStream.reduce { _ + _ }
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window Fold</strong><br>WindowedStream &rarr; DataStream</td>
-          <td>
-            <p>Applies a functional fold function to the window and returns the folded value.
-               The example function, when applied on the sequence (1,2,3,4,5),
-               folds the sequence into the string "start-1-2-3-4-5":</p>
-          {% highlight scala %}
-val result: DataStream[String] =
-    windowedStream.fold("start", (str, i) => { str + "-" + i })
-          {% endhighlight %}
-          </td>
-	</tr>
-        <tr>
-          <td><strong>Aggregations on windows</strong><br>WindowedStream &rarr; DataStream</td>
-          <td>
-            <p>Aggregates the contents of a window. The difference between min
-	    and minBy is that min returns the minimum value, whereas minBy returns
-	    the element that has the minimum value in this field (same for max and maxBy).</p>
-    {% highlight scala %}
-windowedStream.sum(0)
-windowedStream.sum("key")
-windowedStream.min(0)
-windowedStream.min("key")
-windowedStream.max(0)
-windowedStream.max("key")
-windowedStream.minBy(0)
-windowedStream.minBy("key")
-windowedStream.maxBy(0)
-windowedStream.maxBy("key")
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Union</strong><br>DataStream* &rarr; DataStream</td>
-          <td>
-            <p>Union of two or more data streams creating a new stream containing all the elements from all the streams. Node: If you union a data stream
-            with itself you will get each element twice in the resulting stream.</p>
-    {% highlight scala %}
-dataStream.union(otherStream1, otherStream2, ...)
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window Join</strong><br>DataStream,DataStream &rarr; DataStream</td>
-          <td>
-            <p>Join two data streams on a given key and a common window.</p>
-    {% highlight scala %}
-dataStream.join(otherStream)
-    .where(0).equalTo(1)
-    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
-    .apply { ... }
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Window CoGroup</strong><br>DataStream,DataStream &rarr; DataStream</td>
-          <td>
-            <p>Cogroups two data streams on a given key and a common window.</p>
-    {% highlight scala %}
-dataStream.coGroup(otherStream)
-    .where(0).equalTo(1)
-    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
-    .apply {}
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Connect</strong><br>DataStream,DataStream &rarr; ConnectedStreams</td>
-          <td>
-            <p>"Connects" two data streams retaining their types, allowing for shared state between
-            the two streams.</p>
-    {% highlight scala %}
-val someStream: DataStream[Int] = ...
-val otherStream: DataStream[String] = ...
-
-val connectedStreams = someStream.connect(otherStream)
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>CoMap, CoFlatMap</strong><br>ConnectedStreams &rarr; DataStream</td>
-          <td>
-            <p>Similar to map and flatMap on a connected data stream</p>
-    {% highlight scala %}
-connectedStreams.map(
-    (_ : Int) => true,
-    (_ : String) => false
-)
-connectedStreams.flatMap(
-    (num: Int) => Seq(num.toString),
-    (str: String) => str.split(" ").toSeq
-)
-    {% endhighlight %}
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Split</strong><br>DataStream &rarr; SplitStream</td>
-          <td>
-            <p>
-                Split the stream into two or more streams according to some criterion.
-                {% highlight scala %}
-val split = someDataStream.split(
-  (num: Int) =>
-    (num % 2) match {
-      case 0 => List("even")
-      case 1 => List("odd")
-    }
-)
-                {% endhighlight %}
-            </p>
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Select</strong><br>SplitStream &rarr; DataStream</td>
-          <td>
-            <p>
-                Select one or more streams from a split stream.
-                {% highlight scala %}
-
-val even = split select "even"
-val odd = split select "odd"
-val all = split.select("even","odd")
-                {% endhighlight %}
-            </p>
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Iterate</strong><br>DataStream &rarr; IterativeStream  &rarr; DataStream</td>
-          <td>
-            <p>
-                Creates a "feedback" loop in the flow, by redirecting the output of one operator
-                to some previous operator. This is especially useful for defining algorithms that
-                continuously update a model. The following code starts with a stream and applies
-		the iteration body continuously. Elements that are greater than 0 are sent back
-		to the feedback channel, and the rest of the elements are forwarded downstream.
-		See <a href="#iterations">iterations</a> for a complete description.
-                {% highlight scala %}
-initialStream.iterate {
-  iteration => {
-    val iterationBody = iteration.map {/*do something*/}
-    (iterationBody.filter(_ > 0), iterationBody.filter(_ <= 0))
-  }
-}
-                {% endhighlight %}
-            </p>
-          </td>
-        </tr>
-        <tr>
-          <td><strong>Extract Timestamps</strong><br>DataStream &rarr; DataStream</td>
-          <td>
-            <p>
-                Extracts timestamps from records in order to work with windows
-                that use event time semantics.
-                See <a href="{{ site.baseurl }}/apis/streaming/event_time.html">Event Time</a>.
-                {% highlight scala %}
-stream.assignTimestamps { timestampExtractor }
-                {% endhighlight %}
-            </p>
-          </td>
-        </tr>
-  </tbody>
-</table>
-
-Extraction from tuples, case classes and collections via anonymous pattern matching, like the following:
-{% highlight scala %}
-val data: DataStream[(Int, String, Double)] = // [...]
-data.map {
-  case (id, name, temperature) => // [...]
-}
-{% endhighlight %}
-is not supported by the API out-of-the-box. To use this feature, you should use a <a href="../scala_api_extensions.html">Scala API extension</a>.
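-
-As a sketch of what this looks like with the extensions imported (assuming the `mapWith` method provided by the extension package):
-
-{% highlight scala %}
-import org.apache.flink.streaming.api.scala.extensions._
-
-val data: DataStream[(Int, String, Double)] = // [...]
-// decompose the tuple via pattern matching instead of accessing _1, _2, _3
-data.mapWith {
-  case (id, name, temperature) => (id, temperature)
-}
-{% endhighlight %}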
-
-
-</div>
-</div>
-
-The following transformations are available on data streams of Tuples:
-
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td><strong>Project</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>Selects a subset of fields from the tuples
-{% highlight java %}
-DataStream<Tuple3<Integer, Double, String>> in = // [...]
-DataStream<Tuple2<String, Integer>> out = in.project(2,0);
-{% endhighlight %}
-        </p>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-</div>
-
-<div data-lang="scala" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td><strong>Project</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>Selects a subset of fields from the tuples
-{% highlight scala %}
-val in : DataStream[(Int,Double,String)] = // [...]
-val out = in.project(2,0)
-{% endhighlight %}
-        </p>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-</div>
-</div>
-
-
-### Physical partitioning
-
-Flink also gives low-level control (if desired) on the exact stream partitioning after a transformation,
-via the following functions.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td><strong>Custom partitioning</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>
-            Uses a user-defined Partitioner to select the target task for each element.
-            {% highlight java %}
-dataStream.partitionCustom(partitioner, "someKey");
-dataStream.partitionCustom(partitioner, 0);
-            {% endhighlight %}
-        </p>
-      </td>
-    </tr>
-   <tr>
-     <td><strong>Random partitioning</strong><br>DataStream &rarr; DataStream</td>
-     <td>
-       <p>
-            Partitions elements randomly according to a uniform distribution.
-            {% highlight java %}
-dataStream.shuffle();
-            {% endhighlight %}
-       </p>
-     </td>
-   </tr>
-   <tr>
-      <td><strong>Rebalancing (Round-robin partitioning)</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>
-            Partitions elements round-robin, creating equal load per partition. Useful for performance
-            optimization in the presence of data skew.
-            {% highlight java %}
-dataStream.rebalance();
-            {% endhighlight %}
-        </p>
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Rescaling</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>
-            Partitions elements, round-robin, to a subset of downstream operations. This is
-            useful if you want to have pipelines where you, for example, fan out from
-            each parallel instance of a source to a subset of several mappers to distribute load
-            but don't want the full rebalance that rebalance() would incur. This would require only
-            local data transfers instead of transferring data over the network, depending on
-            other configuration values such as the number of slots of TaskManagers.
-        </p>
-        <p>
-            The subset of downstream operations to which the upstream operation sends
-            elements depends on the degree of parallelism of both the upstream and downstream operation.
-            For example, if the upstream operation has parallelism 2 and the downstream operation
-            has parallelism 4, then one upstream operation would distribute elements to two
-            downstream operations while the other upstream operation would distribute to the other
-            two downstream operations. If, on the other hand, the downstream operation has parallelism
-            2 while the upstream operation has parallelism 4 then two upstream operations would
-            distribute to one downstream operation while the other two upstream operations would
-            distribute to the other downstream operations.
-        </p>
-        <p>
-            In cases where the different parallelisms are not multiples of each other, one or several
-            downstream operations will have a differing number of inputs from upstream operations.
-        </p>
-        <p>
-            Please see this figure for a visualization of the connection pattern in the above
-            example:
-        </p>
-
-        <div style="text-align: center">
-            <img src="{{ site.baseurl }}/apis/streaming/fig/rescale.svg" alt="Connection pattern of the rescale transformation" />
-            </div>
-
-
-        <p>
-                    {% highlight java %}
-dataStream.rescale();
-            {% endhighlight %}
-
-        </p>
-      </td>
-    </tr>
-   <tr>
-      <td><strong>Broadcasting</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>
-            Broadcasts elements to every partition.
-            {% highlight java %}
-dataStream.broadcast();
-            {% endhighlight %}
-        </p>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-</div>
-
-<div data-lang="scala" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td><strong>Custom partitioning</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>
-            Uses a user-defined Partitioner to select the target task for each element.
-            {% highlight scala %}
-dataStream.partitionCustom(partitioner, "someKey")
-dataStream.partitionCustom(partitioner, 0)
-            {% endhighlight %}
-        </p>
-      </td>
-    </tr>
-   <tr>
-     <td><strong>Random partitioning</strong><br>DataStream &rarr; DataStream</td>
-     <td>
-       <p>
-            Partitions elements randomly according to a uniform distribution.
-            {% highlight scala %}
-dataStream.shuffle()
-            {% endhighlight %}
-       </p>
-     </td>
-   </tr>
-   <tr>
-      <td><strong>Rebalancing (Round-robin partitioning)</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>
-            Partitions elements round-robin, creating equal load per partition. Useful for performance
-            optimization in the presence of data skew.
-            {% highlight scala %}
-dataStream.rebalance()
-            {% endhighlight %}
-        </p>
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Rescaling</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>
-            Partitions elements, round-robin, to a subset of downstream operations. This is
-            useful if you want to have pipelines where you, for example, fan out from
-            each parallel instance of a source to a subset of several mappers to distribute load
-            but don't want the full rebalance that rebalance() would incur. This would require only
-            local data transfers instead of transferring data over the network, depending on
-            other configuration values such as the number of slots of TaskManagers.
-        </p>
-        <p>
-            The subset of downstream operations to which the upstream operation sends
-            elements depends on the degree of parallelism of both the upstream and downstream operation.
-            For example, if the upstream operation has parallelism 2 and the downstream operation
-            has parallelism 4, then one upstream operation would distribute elements to two
-            downstream operations while the other upstream operation would distribute to the other
-            two downstream operations. If, on the other hand, the downstream operation has parallelism
-            2 while the upstream operation has parallelism 4 then two upstream operations would
-            distribute to one downstream operation while the other two upstream operations would
-            distribute to the other downstream operations.
-        </p>
-        <p>
-            In cases where the different parallelisms are not multiples of each other, one or several
-            downstream operations will have a differing number of inputs from upstream operations.
-        </p>
-        <p>
-            Please see this figure for a visualization of the connection pattern in the above
-            example:
-        </p>
-
-        <div style="text-align: center">
-            <img src="{{ site.baseurl }}/apis/streaming/fig/rescale.svg" alt="Connection pattern of the rescale transformation" />
-            </div>
-
-
-        <p>
-                    {% highlight scala %}
-dataStream.rescale()
-            {% endhighlight %}
-
-        </p>
-      </td>
-    </tr>
-   <tr>
-      <td><strong>Broadcasting</strong><br>DataStream &rarr; DataStream</td>
-      <td>
-        <p>
-            Broadcasts elements to every partition.
-            {% highlight scala %}
-dataStream.broadcast()
-            {% endhighlight %}
-        </p>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-</div>
-</div>
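-
-As an illustration of the custom partitioning hook above, here is a minimal sketch of a user-defined partitioner (the class name and routing logic are made up for the example):
-
-{% highlight java %}
-import org.apache.flink.api.common.functions.Partitioner;
-
-// Sends negative keys to partition 0; spreads all other keys by modulo.
-public class MyPartitioner implements Partitioner<Integer> {
-    @Override
-    public int partition(Integer key, int numPartitions) {
-        return key < 0 ? 0 : key % numPartitions;
-    }
-}
-{% endhighlight %}
-
-It would then be used as `dataStream.partitionCustom(new MyPartitioner(), 0)`.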
-
-### Task chaining and resource groups
-
-Chaining two subsequent transformations means co-locating them within the same thread for better
-performance. Flink by default chains operators if this is possible (e.g., two subsequent map
-transformations). The API gives fine-grained control over chaining if desired:
-
-Use `StreamExecutionEnvironment.disableOperatorChaining()` if you want to disable chaining in
-the whole job. For more fine grained control, the following functions are available. Note that
-these functions can only be used right after a DataStream transformation as they refer to the
-previous transformation. For example, you can use `someStream.map(...).startNewChain()`, but
-you cannot use `someStream.startNewChain()`.
-
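-A minimal sketch of disabling chaining for a whole job (the surrounding program is made up for illustration):
-
-{% highlight java %}
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.disableOperatorChaining(); // no operators in this job will be chained
-
-env.socketTextStream("localhost", 9999)
-   .map(new MapFunction<String, String>() {
-       @Override
-       public String map(String value) {
-           return value.toUpperCase(); // now runs in its own thread
-       }
-   })
-   .print();
-{% endhighlight %}
-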
-A resource group is a slot in Flink, see
-[slots]({{site.baseurl}}/setup/config.html#configuring-taskmanager-processing-slots). You can
-manually isolate operators in separate slots if desired.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td>Start new chain</td>
-      <td>
-        <p>Begin a new chain, starting with this operator. The two
-	mappers will be chained, and filter will not be chained to
-	the first mapper.
-{% highlight java %}
-someStream.filter(...).map(...).startNewChain().map(...);
-{% endhighlight %}
-        </p>
-      </td>
-    </tr>
-   <tr>
-      <td>Disable chaining</td>
-      <td>
-        <p>Do not chain the map operator
-{% highlight java %}
-someStream.map(...).disableChaining();
-{% endhighlight %}
-        </p>
-      </td>
-    </tr>
-    <tr>
-      <td>Set slot sharing group</td>
-      <td>
-        <p>Set the slot sharing group of an operation. Flink will put operations with the same
-        slot sharing group into the same slot while keeping operations that don't have the
-        slot sharing group in other slots. This can be used to isolate slots. The slot sharing
-        group is inherited from input operations if all input operations are in the same slot
-        sharing group.
-        The name of the default slot sharing group is "default", operations can explicitly
-        be put into this group by calling slotSharingGroup("default").
-{% highlight java %}
-someStream.filter(...).slotSharingGroup("name");
-{% endhighlight %}
-        </p>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-</div>
-
-<div data-lang="scala" markdown="1">
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-  <tbody>
-   <tr>
-      <td>Start new chain</td>
-      <td>
-        <p>Begin a new chain, starting with this operator. The two
-	mappers will be chained, and filter will not be chained to
-	the first mapper.
-{% highlight scala %}
-someStream.filter(...).map(...).startNewChain().map(...)
-{% endhighlight %}
-        </p>
-      </td>
-    </tr>
-   <tr>
-      <td>Disable chaining</td>
-      <td>
-        <p>Do not chain the map operator
-{% highlight scala %}
-someStream.map(...).disableChaining()
-{% endhighlight %}
-        </p>
-      </td>
-    </tr>
-  <tr>
-      <td>Set slot sharing group</td>
-      <td>
-        <p>Set the slot sharing group of an operation. Flink will put operations with the same
-        slot sharing group into the same slot while keeping operations that don't have the
-        slot sharing group in other slots. This can be used to isolate slots. The slot sharing
-        group is inherited from input operations if all input operations are in the same slot
-        sharing group.
-        The name of the default slot sharing group is "default", operations can explicitly
-        be put into this group by calling slotSharingGroup("default").
-{% highlight scala %}
-someStream.filter(...).slotSharingGroup("name")
-{% endhighlight %}
-        </p>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-</div>
-</div>
-
-
-{% top %}
-
-Data Sources
-------------
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br />
-
-Sources are where your program reads its input from. You can attach a source to your program by 
-using `StreamExecutionEnvironment.addSource(sourceFunction)`. Flink comes with a number of pre-implemented 
-source functions, but you can always write your own custom sources by implementing the `SourceFunction` 
-for non-parallel sources, or by implementing the `ParallelSourceFunction` interface or extending the 
-`RichParallelSourceFunction` for parallel sources.
-
-There are several predefined stream sources accessible from the `StreamExecutionEnvironment`:
-
-File-based:
-
-- `readTextFile(path)` - Reads text files, i.e. files that respect the `TextInputFormat` specification, line-by-line and returns them as Strings.
-
-- `readFile(fileInputFormat, path)` - Reads (once) files as dictated by the specified file input format.
-
-- `readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo)` -  This is the method called internally by the two previous ones. It reads files in the `path` based on the given `fileInputFormat`. Depending on the provided `watchType`, this source may periodically monitor (every `interval` ms) the path for new data (`FileProcessingMode.PROCESS_CONTINUOUSLY`), or process once the data currently in the path and exit (`FileProcessingMode.PROCESS_ONCE`). Using the `pathFilter`, the user can further exclude files from being processed. A sketch of a typical call is shown below.
-           
-    *IMPLEMENTATION:*
-    
-    Under the hood, Flink splits the file reading process into two sub-tasks, namely *directory monitoring* and *data reading*. Each of these sub-tasks is implemented by a separate entity. Monitoring is implemented by a single, **non-parallel** (parallelism = 1) task, while reading is performed by multiple tasks running in parallel. The parallelism of the latter is equal to the job parallelism. The role of the single monitoring task is to scan the directory (periodically or only once depending on the `watchType`), find the files to be processed, divide them into *splits*, and assign these splits to the downstream readers. The readers are the ones who will read the actual data. Each split is read by only one reader, while a reader can read multiple splits, one-by-one.
-
-    *IMPORTANT NOTES:* 
-               
-    1. If the `watchType` is set to `FileProcessingMode.PROCESS_CONTINUOUSLY`, when a file is modified, its contents are re-processed entirely. This can break the "exactly-once" semantics, as appending data at the end of a file will lead to **all** its contents being re-processed.
-               
-    2. If the `watchType` is set to `FileProcessingMode.PROCESS_ONCE`, the source scans the path **once** and exits, without waiting for the readers to finish reading the file contents. Of course the readers will continue reading until all file contents are read. Closing the source leads to no more checkpoints after that point. This may lead to slower recovery after a node failure, as the job will resume reading from the last checkpoint.
-                                                                             
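-For example, a sketch of continuously monitoring a directory for new text data (assuming the variant that takes a format, path, watch type, and scan interval; the paths are made up for illustration):
-
-{% highlight java %}
-TextInputFormat format = new TextInputFormat(new Path("file:///tmp/input"));
-
-DataStream<String> stream = env.readFile(
-    format,
-    "file:///tmp/input",
-    FileProcessingMode.PROCESS_CONTINUOUSLY,
-    100L); // re-scan the directory every 100 ms
-{% endhighlight %}
-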
-Socket-based:
-
-- `socketTextStream` - Reads from a socket. Elements can be separated by a delimiter.
-
-Collection-based:
-
-- `fromCollection(Collection)` - Creates a data stream from a Java `java.util.Collection`. All elements
-  in the collection must be of the same type.
-
-- `fromCollection(Iterator, Class)` - Creates a data stream from an iterator. The class specifies the
-  data type of the elements returned by the iterator.
-
-- `fromElements(T ...)` - Creates a data stream from the given sequence of objects. All objects must be
-  of the same type.
-
-- `fromParallelCollection(SplittableIterator, Class)` - Creates a data stream from an iterator, in
-  parallel. The class specifies the data type of the elements returned by the iterator.
-
-- `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in
-  parallel.
-
-Custom:
-
-- `addSource` - Attach a new source function. For example, to read from Apache Kafka you can use
-    `addSource(new FlinkKafkaConsumer08<>(...))`. See [connectors]({{ site.baseurl }}/apis/streaming/connectors/) for more details. A sketch of a custom source follows below.
-
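-A minimal sketch of a custom non-parallel source (the class name and the emitted values are made up for the example):
-
-{% highlight java %}
-public class CountSource implements SourceFunction<Long> {
-    private volatile boolean running = true;
-    private long counter = 0;
-
-    @Override
-    public void run(SourceContext<Long> ctx) throws Exception {
-        while (running) {
-            ctx.collect(counter++); // emit the next element
-            Thread.sleep(10);
-        }
-    }
-
-    @Override
-    public void cancel() {
-        running = false;
-    }
-}
-{% endhighlight %}
-
-It can then be attached with `env.addSource(new CountSource())`.
-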
-</div>
-
-<div data-lang="scala" markdown="1">
-
-<br />
-
-Sources are where your program reads its input from. You can attach a source to your program by 
-using `StreamExecutionEnvironment.addSource(sourceFunction)`. Flink comes with a number of pre-implemented 
-source functions, but you can always write your own custom sources by implementing the `SourceFunction` 
-for non-parallel sources, or by implementing the `ParallelSourceFunction` interface or extending the 
-`RichParallelSourceFunction` for parallel sources.
-
-There are several predefined stream sources accessible from the `StreamExecutionEnvironment`:
-
-File-based:
-
-- `readTextFile(path)` - Reads text files, i.e. files that respect the `TextInputFormat` specification, line-by-line and returns them as Strings.
-
-- `readFile(fileInputFormat, path)` - Reads (once) files as dictated by the specified file input format.
-
-- `readFile(fileInputFormat, path, watchType, interval, pathFilter)` -  This is the method called internally by the two previous ones. It reads files in the `path` based on the given `fileInputFormat`. Depending on the provided `watchType`, this source may periodically monitor (every `interval` ms) the path for new data (`FileProcessingMode.PROCESS_CONTINUOUSLY`), or process once the data currently in the path and exit (`FileProcessingMode.PROCESS_ONCE`). Using the `pathFilter`, the user can further exclude files from being processed.
-
-    *IMPLEMENTATION:*
-
-    Under the hood, Flink splits the file reading process into two sub-tasks, namely *directory monitoring* and *data reading*. Each of these sub-tasks is implemented by a separate entity. Monitoring is implemented by a single, **non-parallel** (parallelism = 1) task, while reading is performed by multiple tasks running in parallel. The parallelism of the latter is equal to the job parallelism. The role of the single monitoring task is to scan the directory (periodically or only once, depending on the `watchType`), find the files to be processed, divide them into *splits*, and assign these splits to the downstream readers. The readers are the ones that read the actual data. Each split is read by exactly one reader, while a reader can read multiple splits, one by one.
-
-    *IMPORTANT NOTES:* 
-
-    1. If the `watchType` is set to `FileProcessingMode.PROCESS_CONTINUOUSLY`, when a file is modified, its contents are re-processed entirely. This can break the "exactly-once" semantics, as appending data at the end of a file will lead to **all** of its contents being re-processed.
-
-    2. If the `watchType` is set to `FileProcessingMode.PROCESS_ONCE`, the source scans the path **once** and exits, without waiting for the readers to finish reading the file contents. The readers, of course, will continue reading until all file contents are read. Closing the source means that there will be no more checkpoints after that point. This may lead to slower recovery after a node failure, as the job will resume reading from the last checkpoint.
-
-Socket-based:
-
-- `socketTextStream` - Reads from a socket. Elements can be separated by a delimiter.
-
-Collection-based:
-
-- `fromCollection(Seq)` - Creates a data stream from a Scala `Seq`. All elements
-  in the collection must be of the same type.
-
-- `fromCollection(Iterator)` - Creates a data stream from an iterator. The data type of the
-  elements is inferred from the iterator's type.
-
-- `fromElements(elements: _*)` - Creates a data stream from the given sequence of objects. All objects must be
-  of the same type.
-
-- `fromParallelCollection(SplittableIterator)` - Creates a data stream from an iterator, in
-  parallel. The data type of the elements is inferred from the iterator's type.
-
-- `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in
-  parallel.
-
-Custom:
-
-- `addSource` - Attach a new source function. For example, to read from Apache Kafka you can use
-    `addSource(new FlinkKafkaConsumer08<>(...))`. See [connectors]({{ site.baseurl }}/apis/streaming/connectors/) for more details.
-
-</div>
-</div>
-
-{% top %}
-
-Data Sinks
-----------
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br />
-
-Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them.
-Flink comes with a variety of built-in output formats that are encapsulated behind operations on the
-DataStreams:
-
-- `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are
-  obtained by calling the *toString()* method of each element.
-
-- `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field
-  delimiters are configurable. The value for each field comes from the *toString()* method of the objects.
-
-- `print()` / `printToErr()`  - Prints the *toString()* value
-of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is
-prepended to the output. This can help to distinguish between different calls to *print*. If the parallelism is
-greater than 1, the output will also be prepended with the identifier of the task which produced the output.
-
-- `writeUsingOutputFormat()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports
-  custom object-to-bytes conversion.
-
-- `writeToSocket` - Writes elements to a socket according to a `SerializationSchema`.
-
-- `addSink` - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as
-    Apache Kafka) that are implemented as sink functions.
-
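-For example, a short sketch of the debugging-oriented output methods (the output path is a placeholder):
-
-{% highlight java %}
-DataStream<String> stream = ...
-
-stream.print();                    // print each element to standard out
-stream.writeAsText("/tmp/output"); // write elements line-wise as Strings
-{% endhighlight %}
-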
-</div>
-<div data-lang="scala" markdown="1">
-
-<br />
-
-Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them.
-Flink comes with a variety of built-in output formats that are encapsulated behind operations on the
-DataStreams:
-
-- `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are
-  obtained by calling the *toString()* method of each element.
-
-- `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field
-  delimiters are configurable. The value for each field comes from the *toString()* method of the objects.
-
-- `print()` / `printToErr()`  - Prints the *toString()* value
-of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is
-prepended to the output. This can help to distinguish between different calls to *print*. If the parallelism is
-greater than 1, the output will also be prepended with the identifier of the task which produced the output.
-
-- `writeUsingOutputFormat()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports
-  custom object-to-bytes conversion.
-
-- `writeToSocket` - Writes elements to a socket according to a `SerializationSchema`.
-
-- `addSink` - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as
-    Apache Kafka) that are implemented as sink functions.
-
-</div>
-</div>
-
-Note that the `write*()` methods on `DataStream` are mainly intended for debugging purposes.
-They do not participate in Flink's checkpointing, which means these functions usually have
-at-least-once semantics. How data is flushed to the target system depends on the implementation of the
-OutputFormat. This means that not all elements sent to the OutputFormat immediately show up
-in the target system. Also, in failure cases, those records might be lost.
-
-For reliable, exactly-once delivery of a stream into a file system, use the `flink-connector-filesystem`.
-Also, custom implementations through the `.addSink(...)` method can participate in Flink's checkpointing
-for exactly-once semantics.
-
-{% top %}
-
-Iterations
-----------
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br />
-
-Iterative streaming programs implement a step function and embed it into an `IterativeStream`. As a DataStream
-program may never finish, there is no maximum number of iterations. Instead, you need to specify which part
-of the stream is fed back to the iteration and which part is forwarded downstream using a `split` transformation
-or a `filter`. Here, we show an example using filters. First, we define an `IterativeStream`
-
-{% highlight java %}
-IterativeStream<Integer> iteration = input.iterate();
-{% endhighlight %}
-
-Then, we specify the logic that will be executed inside the loop using a series of transformations (here
-a simple `map` transformation)
-
-{% highlight java %}
-DataStream<Integer> iterationBody = iteration.map(/* this is executed many times */);
-{% endhighlight %}
-
-To close an iteration and define the iteration tail, call the `closeWith(feedbackStream)` method of the `IterativeStream`.
-The DataStream given to the `closeWith` function will be fed back to the iteration head.
-A common pattern is to use a filter to separate the part of the stream that is fed back,
-and the part of the stream which is propagated forward. These filters can, e.g., define
-the "termination" logic, where an element is allowed to propagate downstream rather
-than being fed back.
-
-{% highlight java %}
-iteration.closeWith(iterationBody.filter(/* one part of the stream */));
-DataStream<Integer> output = iterationBody.filter(/* some other part of the stream */);
-{% endhighlight %}
-
-By default the partitioning of the feedback stream will be automatically set to be the same as the input of the
-iteration head. To override this, the user can set an optional boolean flag in the `closeWith` method, as sketched below.
-
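-A minimal sketch of the override, assuming the boolean flag controls whether the feedback stream keeps its own partitioning instead of inheriting the iteration head's:
-
-{% highlight java %}
-// 'true' is assumed here to keep the feedback partitioning (the optional flag from the text above)
-iteration.closeWith(feedbackStream, true);
-{% endhighlight %}
-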
-For example, here is a program that continuously subtracts 1 from a series of integers until they reach zero:
-
-{% highlight java %}
-DataStream<Long> someIntegers = env.generateSequence(0, 1000);
-
-IterativeStream<Long> iteration = someIntegers.iterate();
-
-DataStream<Long> minusOne = iteration.map(new MapFunction<Long, Long>() {
-  @Override
-  public Long map(Long value) throws Exception {
-    return value - 1;
-  }
-});
-
-DataStream<Long> stillGreaterThanZero = minusOne.filter(new FilterFunction<Long>() {
-  @Override
-  public boolean filter(Long value) throws Exception {
-    return (value > 0);
-  }
-});
-
-iteration.closeWith(stillGreaterThanZero);
-
-DataStream<Long> lessThanZero = minusOne.filter(new FilterFunction<Long>() {
-  @Override
-  public boolean filter(Long value) throws Exception {
-    return (value <= 0);
-  }
-});
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-
-<br />
-
-Iterative streaming programs implement a step function and embed it into an `IterativeStream`. As a DataStream
-program may never finish, there is no maximum number of iterations. Instead, you need to specify which part
-of the stream is fed back to the iteration and which part is forwarded downstream using a `split` transformation
-or a `filter`. Here, we show an example iteration where the body (the part of the computation that is repeated)
-is a simple map transformation, and the elements that are fed back are distinguished from the elements that
-are forwarded downstream using filters.
-
-{% highlight scala %}
-val iteratedStream = someDataStream.iterate(
-  iteration => {
-    val iterationBody = iteration.map(/* this is executed many times */)
-    (iterationBody.filter(/* one part of the stream */), iterationBody.filter(/* some other part of the stream */))
-})
-{% endhighlight %}
-
-
-By default the partitioning of the feedback stream will be automatically set to be the same as the input of the
-iteration head. To override this the user can set an optional boolean flag in the `closeWith` method.
-
-For example, here is a program that continuously subtracts 1 from a series of integers until they reach zero:
-
-{% highlight scala %}
-val someIntegers: DataStream[Long] = env.generateSequence(0, 1000)
-
-val iteratedStream = someIntegers.iterate(
-  iteration => {
-    val minusOne = iteration.map( v => v - 1)
-    val stillGreaterThanZero = minusOne.filter (_ > 0)
-    val lessThanZero = minusOne.filter(_ <= 0)
-    (stillGreaterThanZero, lessThanZero)
-  }
-)
-{% endhighlight %}
-
-</div>
-</div>
-
-{% top %}
-
-Execution Parameters
---------------------
-
-The `StreamExecutionEnvironment` contains the `ExecutionConfig`, which allows setting job-specific configuration values for the runtime.
-
-Please refer to [execution configuration]({{ site.baseurl }}/apis/common/index.html#execution-configuration)
-for an explanation of most parameters. These parameters pertain specifically to the DataStream API:
-
-- `enableTimestamps()` / **`disableTimestamps()`**: Attach a timestamp to each event emitted from a source.
-    `areTimestampsEnabled()` returns the current value.
-
-- `setAutoWatermarkInterval(long milliseconds)`: Set the interval for automatic watermark emission. You can
-    get the current value with `long getAutoWatermarkInterval()` (see the sketch below).
-
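-For example, a sketch of setting a 1-second watermark interval via the `ExecutionConfig` (the value is arbitrary):
-
-{% highlight java %}
-env.getConfig().setAutoWatermarkInterval(1000);
-{% endhighlight %}
-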
-{% top %}
-
-### Fault Tolerance
-
-The [Fault Tolerance Documentation](fault_tolerance.html) describes the options and parameters to enable and configure Flink's checkpointing mechanism.
-
-### Controlling Latency
-
-By default, elements are not transferred on the network one-by-one (which would cause unnecessary network traffic)
-but are buffered. The size of the buffers (which are actually transferred between machines) can be set in the Flink config files.
-While this method is good for optimizing throughput, it can cause latency issues when the incoming stream is not fast enough.
-To control throughput and latency, you can use `env.setBufferTimeout(timeoutMillis)` on the execution environment
-(or on individual operators) to set a maximum wait time for the buffers to fill up. After this time, the
-buffers are sent automatically even if they are not full. The default value for this timeout is 100 ms.
-
-Usage:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
-env.setBufferTimeout(timeoutMillis);
-
-env.generateSequence(1,10).map(new MyMapper()).setBufferTimeout(timeoutMillis);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.createLocalEnvironment()
-env.setBufferTimeout(timeoutMillis)
-
-env.generateSequence(1,10).map(myMap).setBufferTimeout(timeoutMillis)
-{% endhighlight %}
-</div>
-</div>
-
-To maximize throughput, set `setBufferTimeout(-1)` which will remove the timeout and buffers will only be
-flushed when they are full. To minimize latency, set the timeout to a value close to 0 (for example 5 or 10 ms).
-A buffer timeout of 0 should be avoided, because it can cause severe performance degradation.
-
-{% top %}
-
-Debugging
----------
-
-Before running a streaming program in a distributed cluster, it is a good
-idea to make sure that the implemented algorithm works as desired. Hence, implementing data analysis
-programs is usually an incremental process of checking results, debugging, and improving.
-
-Flink provides features to significantly ease the development process of data analysis
-programs by supporting local debugging from within an IDE, injection of test data, and collection of
-result data. This section gives some hints on how to ease the development of Flink programs.
-
-### Local Execution Environment
-
-A `LocalStreamEnvironment` starts a Flink system within the same JVM process it was created in. If you
-start the LocalStreamEnvironment from an IDE, you can set breakpoints in your code and easily debug your
-program.
-
-A LocalStreamEnvironment is created and used as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
-
-DataStream<String> lines = env.addSource(/* some source */);
-// build your program
-
-env.execute();
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-
-{% highlight scala %}
-val env = StreamExecutionEnvironment.createLocalEnvironment()
-
-val lines = env.addSource(/* some source */)
-// build your program
-
-env.execute()
-{% endhighlight %}
-</div>
-</div>
-
-### Collection Data Sources
-
-Flink provides special data sources which are backed
-by Java collections to ease testing. Once a program has been tested, the sources and sinks can be
-easily replaced by sources and sinks that read from / write to external systems.
-
-Collection data sources can be used as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
-
-// Create a DataStream from a list of elements
-DataStream<Integer> myInts = env.fromElements(1, 2, 3, 4, 5);
-
-// Create a DataStream from any Java collection
-List<Tuple2<String, Integer>> data = ...
-DataStream<Tuple2<String, Integer>> myTuples = env.fromCollection(data);
-
-// Create a DataStream from an Iterator
-Iterator<Long> longIt = ...
-DataStream<Long> myLongs = env.fromCollection(longIt, Long.class);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.createLocalEnvironment()
-
-// Create a DataStream from a list of elements
-val myInts = env.fromElements(1, 2, 3, 4, 5)
-
-// Create a DataStream from any Collection
-val data: Seq[(String, Int)] = ...
-val myTuples = env.fromCollection(data)
-
-// Create a DataStream from an Iterator
-val longIt: Iterator[Long] = ...
-val myLongs = env.fromCollection(longIt)
-{% endhighlight %}
-</div>
-</div>
-
-**Note:** Currently, the collection data source requires that data types and iterators implement
-`Serializable`. Furthermore, collection data sources cannot be executed in parallel
-(parallelism = 1).
-
-### Iterator Data Sink
-
-Flink also provides a sink to collect DataStream results for testing and debugging purposes. It can be used as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-import org.apache.flink.contrib.streaming.DataStreamUtils;
-
-DataStream<Tuple2<String, Integer>> myResult = ...
-Iterator<Tuple2<String, Integer>> myOutput = DataStreamUtils.collect(myResult);
-{% endhighlight %}
-
-</div>
-<div data-lang="scala" markdown="1">
-
-{% highlight scala %}
-import org.apache.flink.contrib.streaming.DataStreamUtils
-import scala.collection.JavaConverters.asScalaIteratorConverter
-
-val myResult: DataStream[(String, Int)] = ...
-val myOutput: Iterator[(String, Int)] = DataStreamUtils.collect(myResult.getJavaStream).asScala
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}


[42/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/metrics.md
----------------------------------------------------------------------
diff --git a/docs/apis/metrics.md b/docs/apis/metrics.md
deleted file mode 100644
index 1cc7a29..0000000
--- a/docs/apis/metrics.md
+++ /dev/null
@@ -1,470 +0,0 @@
----
-title: "Metrics"
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 13
-top-nav-title: "Metrics"
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Flink exposes a metric system that allows gathering and exposing metrics to external systems.
-
-* This will be replaced by the TOC
-{:toc}
-
-## Registering metrics
-
-You can access the metric system from any user function that extends [RichFunction]({{ site.baseurl }}/apis/common/index.html#rich-functions) by calling `getRuntimeContext().getMetricGroup()`.
-This method returns a `MetricGroup` object on which you can create and register new metrics.
-
-### Metric types
-
-Flink supports `Counters`, `Gauges` and `Histograms`.
-
-#### Counter
-
-A `Counter` is used to count something. The current value can be in- or decremented using `inc()/inc(long n)` or `dec()/dec(long n)`.
-You can create and register a `Counter` by calling `counter(String name)` on a `MetricGroup`.
-
-{% highlight java %}
-
-public class MyMapper extends RichMapFunction<String, Integer> {
-  private Counter counter;
-
-  @Override
-  public void open(Configuration config) {
-    this.counter = getRuntimeContext()
-      .getMetricGroup()
-      .counter("myCounter");
-  }
-
-  @Override
-  public Integer map(String value) throws Exception {
-    this.counter.inc();
-    return value.length(); // any Integer result; the metric is the point here
-  }
-}
-
-{% endhighlight %}
-
-Alternatively you can also use your own `Counter` implementation:
-
-{% highlight java %}
-
-public class MyMapper extends RichMapFunction<String, Integer> {
-  private Counter counter;
-
-  @Override
-  public void open(Configuration config) {
-    this.counter = getRuntimeContext()
-      .getMetricGroup()
-      .counter("myCustomCounter", new CustomCounter());
-  }
-}
-
-{% endhighlight %}
-
-#### Gauge
-
-A `Gauge` provides a value of any type on demand. In order to use a `Gauge` you must first create a class that implements the `org.apache.flink.metrics.Gauge` interface.
-There is no restriction for the type of the returned value.
-You can register a gauge by calling `gauge(String name, Gauge gauge)` on a `MetricGroup`.
-
-{% highlight java %}
-
-public class MyMapper extends RichMapFunction<String, Integer> {
-  private int valueToExpose;
-
-  @Override
-  public void open(Configuration config) {
-    getRuntimeContext()
-      .getMetricGroup()
-      .gauge("MyGauge", new Gauge<Integer>() {
-        @Override
-        public Integer getValue() {
-          return valueToExpose;
-        }
-      });
-  }
-}
-
-{% endhighlight %}
-
-Note that reporters will turn the exposed object into a `String`, which means that a meaningful `toString()` implementation is required.
-
-#### Histogram
-
-A `Histogram` measures the distribution of long values.
-You can register one by calling `histogram(String name, Histogram histogram)` on a `MetricGroup`.
-
-{% highlight java %}
-public class MyMapper extends RichMapFunction<Long, Integer> {
-  private Histogram histogram;
-
-  @Override
-  public void open(Configuration config) {
-    this.histogram = getRuntimeContext()
-      .getMetricGroup()
-      .histogram("myHistogram", new MyHistogram());
-  }
-
-  @Override
-  public Integer map(Long value) throws Exception {
-    this.histogram.update(value);
-    return value.intValue(); // any Integer result; the metric is the point here
-  }
-}
-{% endhighlight %}
-
-Flink does not provide a default implementation for `Histogram`, but offers a {% gh_link flink-metrics/flink-metrics-dropwizard/src/main/java/org/apache/flink/dropwizard/metrics/DropwizardHistogramWrapper.java "Wrapper" %} that allows usage of Codahale/DropWizard histograms.
-To use this wrapper add the following dependency in your `pom.xml`:
-{% highlight xml %}
-<dependency>
-      <groupId>org.apache.flink</groupId>
-      <artifactId>flink-metrics-dropwizard</artifactId>
-      <version>{{site.version}}</version>
-</dependency>
-{% endhighlight %}
-
-You can then register a Codahale/DropWizard histogram like this:
-
-{% highlight java %}
-public class MyMapper extends RichMapFunction<Long, Integer> {
-  private Histogram histogram;
-
-  @Override
-  public void open(Configuration config) {
-    com.codahale.metrics.Histogram histogram =
-      new com.codahale.metrics.Histogram(new SlidingWindowReservoir(500));
-
-    this.histogram = getRuntimeContext()
-      .getMetricGroup()
-      .histogram("myHistogram", new DropWizardHistogramWrapper(histogram));
-  }
-}
-{% endhighlight %}
-
-## Scope
-
-Every metric is assigned an identifier under which it will be reported that is based on 3 components: the user-provided name when registering the metric, an optional user-defined scope and a system-provided scope.
-For example, if `A.B` is the system scope, `C.D` the user scope and `E` the name, then the identifier for the metric will be `A.B.C.D.E`.
-
-You can configure which delimiter to use for the identifier (default: `.`) by setting the `metrics.scope.delimiter` key in `conf/flink-conf.yaml`.
-
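-For example, to separate scope components with an underscore instead, a sketch of the `conf/flink-conf.yaml` entry:
-
-```
-metrics.scope.delimiter: _
-```
-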
-### User Scope
-
-You can define a user scope by calling either `MetricGroup#addGroup(String name)` or `MetricGroup#addGroup(int name)`.
-
-{% highlight java %}
-
-counter = getRuntimeContext()
-  .getMetricGroup()
-  .addGroup("MyMetrics")
-  .counter("myCounter");
-
-{% endhighlight %}
-
-### System Scope
-
-The system scope contains context information about the metric, for example in which task it was registered or what job that task belongs to.
-
-Which context information should be included can be configured by setting the following keys in `conf/flink-conf.yaml`.
-Each of these keys expects a format string that may contain constants (e.g. "taskmanager") and variables (e.g. "&lt;task_id&gt;") which will be replaced at runtime.
-
-- `metrics.scope.jm`
-  - Default: &lt;host&gt;.jobmanager
-  - Applied to all metrics that were scoped to a job manager.
-- `metrics.scope.jm.job`
-  - Default: &lt;host&gt;.jobmanager.&lt;job_name&gt;
-  - Applied to all metrics that were scoped to a job manager and job.
-- `metrics.scope.tm`
-  - Default: &lt;host&gt;.taskmanager.&lt;tm_id&gt;
-  - Applied to all metrics that were scoped to a task manager.
-- `metrics.scope.tm.job`
-  - Default: &lt;host&gt;.taskmanager.&lt;tm_id&gt;.&lt;job_name&gt;
-  - Applied to all metrics that were scoped to a task manager and job.
-- `metrics.scope.task`
-  - Default: &lt;host&gt;.taskmanager.&lt;tm_id&gt;.&lt;job_name&gt;.&lt;task_name&gt;.&lt;subtask_index&gt;
-  - Applied to all metrics that were scoped to a task.
-- `metrics.scope.operator`
-  - Default: &lt;host&gt;.taskmanager.&lt;tm_id&gt;.&lt;job_name&gt;.&lt;operator_name&gt;.&lt;subtask_index&gt;
-  - Applied to all metrics that were scoped to an operator.
-
-There are no restrictions on the number or order of variables. Variables are case sensitive.
-
-The default scope for operator metrics will result in an identifier akin to `localhost.taskmanager.1234.MyJob.MyOperator.0.MyMetric`.
-
-If you also want to include the task name but omit the task manager information you can specify the following format:
-
-`metrics.scope.operator: <host>.<job_name>.<task_name>.<operator_name>.<subtask_index>`
-
-This could create the identifier `localhost.MyJob.MySource_->_MyOperator.MyOperator.0.MyMetric`.
-
-Note that for this format string an identifier clash can occur should the same job be run multiple times concurrently, which can lead to inconsistent metric data.
-As such it is advised to either use format strings that provide a certain degree of uniqueness by including IDs (e.g. &lt;job_id&gt;)
-or to assign unique names to jobs and operators.
-
-### List of all Variables
-
-- JobManager: &lt;host&gt;
-- TaskManager: &lt;host&gt;, &lt;tm_id&gt;
-- Job: &lt;job_id&gt;, &lt;job_name&gt;
-- Task: &lt;task_id&gt;, &lt;task_name&gt;, &lt;task_attempt_id&gt;, &lt;task_attempt_num&gt;, &lt;subtask_index&gt;
-- Operator: &lt;operator_name&gt;, &lt;subtask_index&gt;
-
-## Reporter
-
-Metrics can be exposed to an external system by configuring one or several reporters in `conf/flink-conf.yaml`.
-
-- `metrics.reporters`: The list of named reporters.
-- `metrics.reporter.<name>.<config>`: Generic setting `<config>` for the reporter named `<name>`.
-- `metrics.reporter.<name>.class`: The reporter class to use for the reporter named `<name>`.
-- `metrics.reporter.<name>.interval`: The reporter interval to use for the reporter named `<name>`.
-
-All reporters must at least have the `class` property; some allow specifying a reporting `interval`. Below,
-we will list more settings specific to each reporter.
-
-Example reporter configuration that specifies multiple reporters:
-
-```
-metrics.reporters: my_jmx_reporter,my_other_reporter
-
-metrics.reporter.my_jmx_reporter.class: org.apache.flink.metrics.jmx.JMXReporter
-metrics.reporter.my_jmx_reporter.port: 9020-9040
-
-metrics.reporter.my_other_reporter.class: org.apache.flink.metrics.graphite.GraphiteReporter
-metrics.reporter.my_other_reporter.host: 192.168.1.1
-metrics.reporter.my_other_reporter.port: 10000
-
-```
-
-You can write your own `Reporter` by implementing the `org.apache.flink.metrics.reporter.MetricReporter` interface.
-If the Reporter should send out reports regularly you have to implement the `Scheduled` interface as well.
-
-The following sections list the supported reporters.
-
-### JMX (org.apache.flink.metrics.jmx.JMXReporter)
-
-You don't have to include an additional dependency since the JMX reporter is available by default
-but not activated.
-
-Parameters:
-
-- `port` - the port on which JMX listens for connections. This can also be a port range. When a
-range is specified the actual port is shown in the relevant job or task manager log. If you don't
-specify a port no extra JMX server will be started. Metrics are still available on the default
-local JMX interface.
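-
-Example configuration (the reporter name `jmx` and the port are placeholders):
-
-```
-metrics.reporters: jmx
-
-metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
-metrics.reporter.jmx.port: 8789
-```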
-
-### Ganglia (org.apache.flink.metrics.ganglia.GangliaReporter)
-Dependency:
-{% highlight xml %}
-<dependency>
-      <groupId>org.apache.flink</groupId>
-      <artifactId>flink-metrics-ganglia</artifactId>
-      <version>{{site.version}}</version>
-</dependency>
-{% endhighlight %}
-
-Parameters:
-
-- `host` - the gmond host address configured under `udp_recv_channel.bind` in `gmond.conf`
-- `port` - the gmond port configured under `udp_recv_channel.port` in `gmond.conf`
-- `tmax` - soft limit for how long an old metric should be retained
-- `dmax` - hard limit for how long an old metric should be retained
-- `ttl` - time-to-live for transmitted UDP packets
-- `addressingMode` - UDP addressing mode to use (UNICAST/MULTICAST)
-
-### Graphite (org.apache.flink.metrics.graphite.GraphiteReporter)
-Dependency:
-{% highlight xml %}
-<dependency>
-      <groupId>org.apache.flink</groupId>
-      <artifactId>flink-metrics-graphite</artifactId>
-      <version>{{site.version}}</version>
-</dependency>
-{% endhighlight %}
-
-Parameters:
-
-- `host` - the Graphite server host
-- `port` - the Graphite server port
-
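-Example configuration (the reporter name and endpoint are placeholders; 2003 is Graphite's usual plaintext port):
-
-```
-metrics.reporters: grph
-
-metrics.reporter.grph.class: org.apache.flink.metrics.graphite.GraphiteReporter
-metrics.reporter.grph.host: localhost
-metrics.reporter.grph.port: 2003
-```
-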
-### StatsD (org.apache.flink.metrics.statsd.StatsDReporter)
-Dependency:
-{% highlight xml %}
-<dependency>
-      <groupId>org.apache.flink</groupId>
-      <artifactId>flink-metrics-statsd</artifactId>
-      <version>{{site.version}}</version>
-</dependency>
-{% endhighlight %}
-
-Parameters:
-
-- `host` - the StatsD server host
-- `port` - the StatsD server port
-
-## System metrics
-
-Flink exposes the following system metrics:
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Scope</th>
-      <th class="text-left">Metrics</th>
-      <th class="text-left">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <th rowspan="1"><strong>JobManager</strong></th>
-      <td></td>
-      <td></td>
-    </tr>
-    <tr>
-      <th rowspan="19"><strong>TaskManager.Status.JVM</strong></th>
-      <td>ClassLoader.ClassesLoaded</td>
-      <td>The total number of classes loaded since the start of the JVM.</td>
-    </tr>
-    <tr>
-      <td>ClassLoader.ClassesUnloaded</td>
-      <td>The total number of classes unloaded since the start of the JVM.</td>
-    </tr>
-    <tr>
-      <td>GarbageCollector.&lt;garbageCollector&gt;.Count</td>
-      <td>The total number of collections that have occurred.</td>
-    </tr>
-    <tr>
-      <td>GarbageCollector.&lt;garbageCollector&gt;.Time</td>
-      <td>The total time spent performing garbage collection.</td>
-    </tr>
-    <tr>
-      <td>Memory.Heap.Used</td>
-      <td>The amount of heap memory currently used.</td>
-    </tr>
-    <tr>
-      <td>Memory.Heap.Committed</td>
-      <td>The amount of heap memory guaranteed to be available to the JVM.</td>
-    </tr>
-    <tr>
-      <td>Memory.Heap.Max</td>
-      <td>The maximum amount of heap memory that can be used for memory management.</td>
-    </tr>
-    <tr>
-      <td>Memory.NonHeap.Used</td>
-      <td>The amount of non-heap memory currently used.</td>
-    </tr>
-    <tr>
-      <td>Memory.NonHeap.Committed</td>
-      <td>The amount of non-heap memory guaranteed to be available to the JVM.</td>
-    </tr>
-    <tr>
-      <td>Memory.NonHeap.Max</td>
-      <td>The maximum amount of non-heap memory that can be used for memory management.</td>
-    </tr>
-    <tr>
-      <td>Memory.Direct.Count</td>
-      <td>The number of buffers in the direct buffer pool.</td>
-    </tr>
-    <tr>
-      <td>Memory.Direct.MemoryUsed</td>
-      <td>The amount of memory used by the JVM for the direct buffer pool.</td>
-    </tr>
-    <tr>
-      <td>Memory.Direct.TotalCapacity</td>
-      <td>The total capacity of all buffers in the direct buffer pool.</td>
-    </tr>
-    <tr>
-      <td>Memory.Mapped.Count</td>
-      <td>The number of buffers in the mapped buffer pool.</td>
-    </tr>
-    <tr>
-      <td>Memory.Mapped.MemoryUsed</td>
-      <td>The amount of memory used by the JVM for the mapped buffer pool.</td>
-    </tr>
-    <tr>
-      <td>Memory.Mapped.TotalCapacity</td>
-      <td>The total capacity of all buffers in the mapped buffer pool.</td>
-    </tr>
-    <tr>
-      <td>Threads.Count</td>
-      <td>The total number of live threads.</td>
-    </tr>
-    <tr>
-      <td>CPU.Load</td>
-      <td>The recent CPU usage of the JVM.</td>
-    </tr>
-    <tr>
-      <td>CPU.Time</td>
-      <td>The CPU time used by the JVM.</td>
-    </tr>
-    <tr>
-      <th rowspan="1"><strong>Job</strong></th>
-      <td></td>
-      <td></td>
-    </tr>
-    <tr>
-      <th rowspan="7"><strong>Task</strong></th>
-        <td>currentLowWatermark</td>
-        <td>The lowest watermark a task has received.</td>
-      </tr>
-      <tr>
-        <td>lastCheckpointDuration</td>
-        <td>The time it took to complete the last checkpoint.</td>
-      </tr>
-      <tr>
-        <td>lastCheckpointSize</td>
-        <td>The total size of the last checkpoint.</td>
-      </tr>
-      <tr>
-        <td>restartingTime</td>
-        <td>The time it took to restart the job.</td>
-      </tr>
-      <tr>
-        <td>numBytesInLocal</td>
-        <td>The total number of bytes this task has read from a local source.</td>
-      </tr>
-      <tr>
-        <td>numBytesInRemote</td>
-        <td>The total number of bytes this task has read from a remote source.</td>
-      </tr>
-      <tr>
-        <td>numBytesOut</td>
-        <td>The total number of bytes this task has emitted.</td>
-      </tr>
-    <tr>
-        <th rowspan="3"><strong>Operator</strong></th>
-        <td>numRecordsIn</td>
-        <td>The total number of records this operator has received.</td>
-      </tr>
-      <tr>
-        <td>numRecordsOut</td>
-        <td>The total number of records this operator has emitted.</td>
-      </tr>
-      <tr>
-        <td>numSplitsProcessed</td>
-        <td>The total number of InputSplits this data source has processed.</td>
-      </tr>
-  </tbody>
-</table>
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/programming_guide.md
----------------------------------------------------------------------
diff --git a/docs/apis/programming_guide.md b/docs/apis/programming_guide.md
deleted file mode 100644
index 0d865fe..0000000
--- a/docs/apis/programming_guide.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: DataSet API
----
-
-<meta http-equiv="refresh" content="1; url={{ site.baseurl }}/apis/batch/index.html" />
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The *DataSet API guide* has been moved. Redirecting to [{{ site.baseurl }}/apis/batch/index.html]({{ site.baseurl }}/apis/batch/index.html) in 1 second.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/scala_api_extensions.md
----------------------------------------------------------------------
diff --git a/docs/apis/scala_api_extensions.md b/docs/apis/scala_api_extensions.md
deleted file mode 100644
index e3268bf..0000000
--- a/docs/apis/scala_api_extensions.md
+++ /dev/null
@@ -1,409 +0,0 @@
----
-title: "Scala API Extensions"
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 11
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-In order to keep a fair amount of consistency between the Scala and Java APIs, some
-of the features that allow a high level of expressiveness in Scala have been left
-out of the standard APIs for both batch and streaming.
-
-If you want to _enjoy the full Scala experience_ you can choose to opt-in to 
-extensions that enhance the Scala API via implicit conversions.
-
-To use all the available extensions, you can just add a simple `import` for the
-DataSet API
-
-{% highlight scala %}
-import org.apache.flink.api.scala.extensions._
-{% endhighlight %}
-
-or the DataStream API
-
-{% highlight scala %}
-import org.apache.flink.streaming.api.scala.extensions._
-{% endhighlight %}
-
-Alternatively, you can import individual extensions _à-la-carte_ to only use those
-you prefer.
-
-## Accept partial functions
-
-Normally, both the DataSet and DataStream APIs don't accept anonymous pattern
-matching functions to deconstruct tuples, case classes or collections, like the
-following:
-
-{% highlight scala %}
-val data: DataSet[(Int, String, Double)] = // [...]
-data.map {
-  case (id, name, temperature) => // [...]
-  // The previous line causes the following compilation error:
-  // "The argument types of an anonymous function must be fully known. (SLS 8.5)"
-}
-{% endhighlight %}
-
-This extension introduces new methods in both the DataSet and DataStream Scala API
-that have a one-to-one correspondence in the extended API. These delegating methods
-do support anonymous pattern matching functions.
-
-#### DataSet API
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Method</th>
-      <th class="text-left" style="width: 20%">Original</th>
-      <th class="text-center">Example</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>mapWith</strong></td>
-      <td><strong>map (DataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data.mapWith {
-  case (_, value) => value.toString
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>mapPartitionWith</strong></td>
-      <td><strong>mapPartition (DataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data.mapPartitionWith {
-  case head #:: _ => head
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>flatMapWith</strong></td>
-      <td><strong>flatMap (DataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data.flatMapWith {
-  case (_, name, visitTimes) => visitTimes.map(name -> _)
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>filterWith</strong></td>
-      <td><strong>filter (DataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data.filterWith {
-  case Train(_, isOnTime) => isOnTime
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>reduceWith</strong></td>
-      <td><strong>reduce (DataSet, GroupedDataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data.reduceWith {
-  case ((_, amount1), (_, amount2)) => amount1 + amount2
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>reduceGroupWith</strong></td>
-      <td><strong>reduceGroup (GroupedDataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data.reduceGroupWith {
-  case id #:: value #:: _ => id -> value
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>groupingBy</strong></td>
-      <td><strong>groupBy (DataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data.groupingBy {
-  case (id, _, _) => id
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>sortGroupWith</strong></td>
-      <td><strong>sortGroup (GroupedDataSet)</strong></td>
-      <td>
-{% highlight scala %}
-grouped.sortGroupWith(Order.ASCENDING) {
-  case House(_, value) => value
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>combineGroupWith</strong></td>
-      <td><strong>combineGroup (GroupedDataSet)</strong></td>
-      <td>
-{% highlight scala %}
-grouped.combineGroupWith {
-  case header #:: amounts => amounts.sum
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>projecting</strong></td>
-      <td><strong>apply (JoinDataSet, CrossDataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data1.join(data2).
-  whereClause { case (pk, _) => pk }.
-  isEqualTo { case (_, fk) => fk }.
-  projecting {
-    case ((pk, tx), (products, fk)) => tx -> products
-  }
-
-data1.cross(data2).projecting {
-  case ((a, _), (_, b)) => a -> b
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>projecting</strong></td>
-      <td><strong>apply (CoGroupDataSet)</strong></td>
-      <td>
-{% highlight scala %}
-data1.coGroup(data2).
-  whereClause { case (pk, _) => pk }.
-  isEqualTo { case (_, fk) => fk }.
-  projecting {
-    case (head1 #:: _, head2 #:: _) => head1 -> head2
-  }
-{% endhighlight %}
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-#### DataStream API
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Method</th>
-      <th class="text-left" style="width: 20%">Original</th>
-      <th class="text-center">Example</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>mapWith</strong></td>
-      <td><strong>map (DataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.mapWith {
-  case (_, value) => value.toString
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>mapPartitionWith</strong></td>
-      <td><strong>mapPartition (DataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.mapPartitionWith {
-  case head #:: _ => head
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>flatMapWith</strong></td>
-      <td><strong>flatMap (DataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.flatMapWith {
-  case (_, name, visits) => visits.map(name -> _)
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>filterWith</strong></td>
-      <td><strong>filter (DataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.filterWith {
-  case Train(_, isOnTime) => isOnTime
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>keyingBy</strong></td>
-      <td><strong>keyBy (DataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.keyingBy {
-  case (id, _, _) => id
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>mapWith</strong></td>
-      <td><strong>map (ConnectedDataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.mapWith(
-  map1 = { case (_, value) => value.toString },
-  map2 = { case (_, _, value, _) => value + 1 }
-)
-      </td>
-    </tr>
-    <tr>
-      <td><strong>flatMapWith</strong></td>
-      <td><strong>flatMap (ConnectedDataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.flatMapWith(
-  flatMap1 = { case (_, json) => parse(json) },
-  flatMap2 = { case (_, _, json, _) => parse(json) }
-)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>keyingBy</strong></td>
-      <td><strong>keyBy (ConnectedDataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.keyingBy(
-  key1 = { case (_, timestamp) => timestamp },
-  key2 = { case (id, _, _) => id }
-)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>reduceWith</strong></td>
-      <td><strong>reduce (KeyedDataStream, WindowedDataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.reduceWith {
-  case ((_, sum1), (_, sum2)) => sum1 + sum2
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>foldWith</strong></td>
-      <td><strong>fold (KeyedDataStream, WindowedDataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.foldWith(User(bought = 0)) {
-  case (User(b), (_, items)) => User(b + items.size)
-}
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>applyWith</strong></td>
-      <td><strong>apply (WindowedDataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data.applyWith(0)(
-  foldFunction = { case (sum, amount) => sum + amount },
-  windowFunction = {
-    case (k, w, sum) => // [...]
-  }
-)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>projecting</strong></td>
-      <td><strong>apply (JoinedDataStream)</strong></td>
-      <td>
-{% highlight scala %}
-data1.join(data2).
-  whereClause(case (pk, _) => pk).
-  isEqualTo(case (_, fk) => fk).
-  projecting {
-    case ((pk, tx), (products, fk)) => tx -> products
-  }
-{% endhighlight %}
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-
-
-For more information on the semantics of each method, please refer to the
-[DataSet](batch/index.html) and [DataStream](streaming/index.html) API documentation.
-
-To use this extension exclusively, you can add the following `import`:
-
-{% highlight scala %}
-import org.apache.flink.api.scala.extensions.acceptPartialFunctions
-{% endhighlight %}
-
-for the DataSet extensions and
-
-{% highlight scala %}
-import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions
-{% endhighlight %}
-
-for the DataStream extensions.
-
-The following snippet shows a minimal example of how to use these extension
-methods together (with the DataSet API):
-
-{% highlight scala %}
-object Main {
-  import org.apache.flink.api.scala.extensions._
-  case class Point(x: Double, y: Double)
-  def main(args: Array[String]): Unit = {
-    val env = ExecutionEnvironment.getExecutionEnvironment
-    val ds = env.fromElements(Point(1, 2), Point(3, 4), Point(5, 6))
-    ds.filterWith {
-      case Point(x, _) => x > 1
-    }.reduceWith {
-      case (Point(x1, y1), Point(x2, y2)) => Point(x1 + x2, y1 + y2)
-    }.mapWith {
-      case Point(x, y) => (x, y)
-    }.flatMapWith {
-      case (x, y) => Seq("x" -> x, "y" -> y)
-    }.groupingBy {
-      case (id, value) => id
-    }
-  }
-}
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/scala_shell.md
----------------------------------------------------------------------
diff --git a/docs/apis/scala_shell.md b/docs/apis/scala_shell.md
deleted file mode 100644
index ad36ca0..0000000
--- a/docs/apis/scala_shell.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: "Scala Shell"
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 10
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-
-Flink comes with an integrated interactive Scala Shell.
-It can be used in a local setup as well as in a cluster setup.
-
-
-To use the shell with an integrated Flink cluster just execute:
-
-~~~bash
-bin/start-scala-shell.sh local
-~~~
-
-from the root directory of your Flink binary distribution. To run the shell on a
-cluster, please see the Setup section below.
-
-
-## Usage
-
-The shell supports batch and streaming.
-Two different ExecutionEnvironments are automatically prebound after startup.
-Use `benv` and `senv` to access the batch and streaming environments, respectively.
-
-### DataSet API
-
-The following example will execute the wordcount program in the Scala shell:
-
-~~~scala
-Scala-Flink> val text = benv.fromElements(
-  "To be, or not to be,--that is the question:--",
-  "Whether 'tis nobler in the mind to suffer",
-  "The slings and arrows of outrageous fortune",
-  "Or to take arms against a sea of troubles,")
-Scala-Flink> val counts = text
-    .flatMap { _.toLowerCase.split("\\W+") }
-    .map { (_, 1) }.groupBy(0).sum(1)
-Scala-Flink> counts.print()
-~~~
-
-The print() command will automatically send the specified tasks to the JobManager for execution and will show the result of the computation in the terminal.
-
-It is possible to write results to a file. However, in this case you need to call `execute` to run your program:
-
-~~~scala
-Scala-Flink> benv.execute("MyProgram")
-~~~
-
-### DataStream API
-
-Similar to the batch program above, we can execute a streaming program through the DataStream API:
-
-~~~scala
-Scala-Flink> val textStreaming = senv.fromElements(
-  "To be, or not to be,--that is the question:--",
-  "Whether 'tis nobler in the mind to suffer",
-  "The slings and arrows of outrageous fortune",
-  "Or to take arms against a sea of troubles,")
-Scala-Flink> val countsStreaming = textStreaming
-    .flatMap { _.toLowerCase.split("\\W+") }
-    .map { (_, 1) }.keyBy(0).sum(1)
-Scala-Flink> countsStreaming.print()
-Scala-Flink> senv.execute("Streaming Wordcount")
-~~~
-
-Note that in the streaming case, the print operation does not trigger execution directly.
-
-The Flink Shell comes with command history and auto-completion.
-
-
-## Adding external dependencies
-
-It is possible to add external classpaths to the Scala shell. These will be sent to the JobManager automatically alongside your shell program when calling `execute`.
-
-Use the parameter `-a <path/to/jar.jar>` or `--addclasspath <path/to/jar.jar>` to load additional classes.
-
-~~~bash
-bin/start-scala-shell.sh [local | remote <host> <port> | yarn] --addclasspath <path/to/jar.jar>
-~~~
-
-
-## Setup
-
-To get an overview of what options the Scala Shell provides, please use
-
-~~~bash
-bin/start-scala-shell.sh --help
-~~~
-
-### Local
-
-To use the shell with an integrated Flink cluster just execute:
-
-~~~bash
-bin/start-scala-shell.sh local
-~~~
-
-
-### Remote
-
-To use it with a running cluster start the scala shell with the keyword `remote`
-and supply the host and port of the JobManager with:
-
-~~~bash
-bin/start-scala-shell.sh remote <hostname> <portnumber>
-~~~
-
-### Yarn Scala Shell cluster
-
-The shell can deploy a Flink cluster to YARN, which is used exclusively by the
-shell. The number of YARN containers can be controlled by the parameter `-n <arg>`.
-The shell deploys a new Flink cluster on YARN and connects to the
-cluster. You can also specify options for the YARN cluster such as memory for the
-JobManager, name of the YARN application, etc.
- 
-For example, to start a YARN cluster for the Scala Shell with two TaskManagers
-use the following:
- 
-~~~bash
- bin/start-scala-shell.sh yarn -n 2
-~~~
-
-For all other options, see the full reference at the bottom.
-
-
-### Yarn Session
-
-If you have previously deployed a Flink cluster using the Flink YARN session,
-the Scala shell can connect to it with the following command:
-
-~~~bash
- bin/start-scala-shell.sh yarn
-~~~
-
-
-## Full Reference
-
-~~~bash
-Flink Scala Shell
-Usage: start-scala-shell.sh [local|remote|yarn] [options] <args>...
-
-Command: local [options]
-Starts Flink scala shell with a local Flink cluster
-  -a <path/to/jar> | --addclasspath <path/to/jar>
-        Specifies additional jars to be used in Flink
-Command: remote [options] <host> <port>
-Starts Flink scala shell connecting to a remote cluster
-  <host>
-        Remote host name as string
-  <port>
-        Remote port as integer
-
-  -a <path/to/jar> | --addclasspath <path/to/jar>
-        Specifies additional jars to be used in Flink
-Command: yarn [options]
-Starts Flink scala shell connecting to a yarn cluster
-  -n arg | --container arg
-        Number of YARN container to allocate (= Number of TaskManagers)
-  -jm arg | --jobManagerMemory arg
-        Memory for JobManager container [in MB]
-  -nm <value> | --name <value>
-        Set a custom name for the application on YARN
-  -qu <arg> | --queue <arg>
-        Specifies YARN queue
-  -s <arg> | --slots <arg>
-        Number of slots per TaskManager
-  -tm <arg> | --taskManagerMemory <arg>
-        Memory per TaskManager container [in MB]
-  -a <path/to/jar> | --addclasspath <path/to/jar>
-        Specifies additional jars to be used in Flink
-  --configDir <value>
-        The configuration directory.
-  -h | --help
-        Prints this usage text
-~~~

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/cassandra.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/cassandra.md b/docs/apis/streaming/connectors/cassandra.md
deleted file mode 100644
index 28ad244..0000000
--- a/docs/apis/streaming/connectors/cassandra.md
+++ /dev/null
@@ -1,158 +0,0 @@
----
-title: "Apache Cassandra Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 1
-sub-nav-title: Cassandra
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This connector provides sinks that write data into a [Cassandra](https://cassandra.apache.org/) database.
-
-To use this connector, add the following dependency to your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-cassandra{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-Note that the streaming connectors are currently not part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-#### Installing Apache Cassandra
-Follow the instructions from the [Cassandra Getting Started page](http://wiki.apache.org/cassandra/GettingStarted).
-
-#### Cassandra Sink
-
-Flink's Cassandra sink is created by using the static `CassandraSink.addSink(DataStream<IN> input)` method.
-This method returns a `CassandraSinkBuilder`, which offers methods to further configure the sink.
-
-The following configuration methods can be used:
-
-1. `setQuery(String query)`
-2. `setHost(String host[, int port])`
-3. `setClusterBuilder(ClusterBuilder builder)`
-4. `enableWriteAheadLog([CheckpointCommitter committer])`
-5. `build()`
-
-*setQuery()* sets the query that is executed for every value the sink receives.
-*setHost()* sets the Cassandra host/port to connect to. This method is intended for simple use cases.
-*setClusterBuilder()* sets the cluster builder that is used to configure the connection to Cassandra. The *setHost()* functionality can be subsumed with this method.
-*enableWriteAheadLog()* is an optional method that allows exactly-once processing for non-deterministic algorithms.
-
-A checkpoint committer stores additional information about completed checkpoints
-in some resource. This information is used to prevent a full replay of the last
-completed checkpoint in case of a failure.
-You can use a `CassandraCommitter` to store these in a separate table in Cassandra.
-Note that this table will NOT be cleaned up by Flink.
-
-*build()* finalizes the configuration and returns the CassandraSink.
-
-Flink can provide exactly-once guarantees if the query is idempotent (meaning it can be applied multiple
-times without changing the result) and checkpointing is enabled. In case of a failure the failed
-checkpoint will be replayed completely.
-
-Furthermore, for non-deterministic programs the write-ahead log has to be enabled. For such a program
-the replayed checkpoint may be completely different from the previous attempt, which may leave the
-database in an inconsistent state since part of the first attempt may already be written.
-The write-ahead log guarantees that the replayed checkpoint is identical to the first attempt.
-Note that enabling this feature will have an adverse impact on latency.
-
-<p style="border-radius: 5px; padding: 5px" class="bg-danger"><b>Note</b>: The write-ahead log functionality is currently experimental. In many cases it is sufficent to use the connector without enabling it. Please report problems to the development mailing list.</p>
-
-
-#### Example
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-CassandraSink.addSink(input)
-  .setQuery("INSERT INTO example.values (id, counter) values (?, ?);")
-  .setClusterBuilder(new ClusterBuilder() {
-    @Override
-    public Cluster buildCluster(Cluster.Builder builder) {
-      return builder.addContactPoint("127.0.0.1").build();
-    }
-  })
-  .build();
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-CassandraSink.addSink(input)
-  .setQuery("INSERT INTO example.values (id, counter) values (?, ?);")
-  .setClusterBuilder(new ClusterBuilder() {
-    override def buildCluster(builder: Cluster.Builder): Cluster = {
-      builder.addContactPoint("127.0.0.1").build()
-    }
-  })
-  .build()
-{% endhighlight %}
-</div>
-</div>
-
-The Cassandra sinks support both tuples and POJOs that use DataStax annotations.
-Flink automatically detects which type of input is used.
-
-An example of such a POJO:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-@Table(keyspace= "test", name = "mappersink")
-public class Pojo implements Serializable {
-
-	private static final long serialVersionUID = 1038054554690916991L;
-
-	@Column(name = "id")
-	private long id;
-	@Column(name = "value")
-	private String value;
-
-	public Pojo(long id, String value){
-		this.id = id;
-		this.value = value;
-	}
-
-	public long getId() {
-		return id;
-	}
-
-	public void setId(long id) {
-		this.id = id;
-	}
-
-	public String getValue() {
-		return value;
-	}
-
-	public void setValue(String value) {
-		this.value = value;
-	}
-}
-{% endhighlight %}
-</div>
-</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/elasticsearch.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/elasticsearch.md b/docs/apis/streaming/connectors/elasticsearch.md
deleted file mode 100644
index 93b2bf6..0000000
--- a/docs/apis/streaming/connectors/elasticsearch.md
+++ /dev/null
@@ -1,183 +0,0 @@
----
-title: "Elasticsearch Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 2
-sub-nav-title: Elasticsearch
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This connector provides a Sink that can write to an
-[Elasticsearch](https://elastic.co/) Index. To use this connector, add the
-following dependency to your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-elasticsearch{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-Note that the streaming connectors are currently not part of the binary
-distribution. See
-[here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
-for information about how to package the program with the libraries for
-cluster execution.
-
-#### Installing Elasticsearch
-
-Instructions for setting up an Elasticsearch cluster can be found
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
-Make sure to set and remember a cluster name. This must be set when
-creating a Sink for writing to your cluster.
-
-#### Elasticsearch Sink
-The connector provides a Sink that can send data to an Elasticsearch Index.
-
-The sink can use two different methods for communicating with Elasticsearch:
-
-1. An embedded Node
-2. The TransportClient
-
-See [here](https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html)
-for information about the differences between the two modes.
-
-This code shows how to create a sink that uses an embedded Node for
-communication:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<String> input = ...;
-
-Map<String, String> config = Maps.newHashMap();
-// This instructs the sink to emit after every element, otherwise they would be buffered
-config.put("bulk.flush.max.actions", "1");
-config.put("cluster.name", "my-cluster-name");
-
-input.addSink(new ElasticsearchSink<>(config, new IndexRequestBuilder<String>() {
-    @Override
-    public IndexRequest createIndexRequest(String element, RuntimeContext ctx) {
-        Map<String, Object> json = new HashMap<>();
-        json.put("data", element);
-
-        return Requests.indexRequest()
-                .index("my-index")
-                .type("my-type")
-                .source(json);
-    }
-}));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[String] = ...
-
-val config = new util.HashMap[String, String]
-config.put("bulk.flush.max.actions", "1")
-config.put("cluster.name", "my-cluster-name")
-
-input.addSink(new ElasticsearchSink(config, new IndexRequestBuilder[String] {
-  override def createIndexRequest(element: String, ctx: RuntimeContext): IndexRequest = {
-    val json = new util.HashMap[String, AnyRef]
-    json.put("data", element)
-    println("SENDING: " + element)
-    Requests.indexRequest.index("my-index").`type`("my-type").source(json)
-  }
-}))
-{% endhighlight %}
-</div>
-</div>
-
-Note how a Map of Strings is used to configure the Sink. The configuration keys
-are documented in the Elasticsearch documentation
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
-Especially important is the `cluster.name` parameter that must correspond to
-the name of your cluster.
-
-Internally, the sink uses a `BulkProcessor` to send index requests to the cluster.
-This will buffer elements before sending a request to the cluster. The behaviour of the
-`BulkProcessor` can be configured using these config keys (a combined example follows the list):
- * **bulk.flush.max.actions**: Maximum number of elements to buffer
- * **bulk.flush.max.size.mb**: Maximum amount of data (in megabytes) to buffer
- * **bulk.flush.interval.ms**: Interval in milliseconds at which to flush data regardless of the other two
-  settings
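-
-For illustration, a buffered configuration could look like this (a sketch; the values are
-examples, not recommendations):
-
-{% highlight java %}
-Map<String, String> config = new HashMap<>();
-config.put("cluster.name", "my-cluster-name");
-// buffer up to 1000 actions or 5 MB of data, and flush at least once per minute
-config.put("bulk.flush.max.actions", "1000");
-config.put("bulk.flush.max.size.mb", "5");
-config.put("bulk.flush.interval.ms", "60000");
-{% endhighlight %}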
-
-This example code does the same, but with a `TransportClient`:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<String> input = ...;
-
-Map<String, String> config = Maps.newHashMap();
-// This instructs the sink to emit after every element, otherwise they would be buffered
-config.put("bulk.flush.max.actions", "1");
-config.put("cluster.name", "my-cluster-name");
-
-List<TransportAddress> transports = new ArrayList<>();
-transports.add(new InetSocketTransportAddress("node-1", 9300));
-transports.add(new InetSocketTransportAddress("node-2", 9300));
-
-input.addSink(new ElasticsearchSink<>(config, transports, new IndexRequestBuilder<String>() {
-    @Override
-    public IndexRequest createIndexRequest(String element, RuntimeContext ctx) {
-        Map<String, Object> json = new HashMap<>();
-        json.put("data", element);
-
-        return Requests.indexRequest()
-                .index("my-index")
-                .type("my-type")
-                .source(json);
-    }
-}));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[String] = ...
-
-val config = new util.HashMap[String, String]
-config.put("bulk.flush.max.actions", "1")
-config.put("cluster.name", "my-cluster-name")
-
-val transports = new util.ArrayList[TransportAddress]
-transports.add(new InetSocketTransportAddress("node-1", 9300))
-transports.add(new InetSocketTransportAddress("node-2", 9300))
-
-input.addSink(new ElasticsearchSink(config, transports, new IndexRequestBuilder[String] {
-  override def createIndexRequest(element: String, ctx: RuntimeContext): IndexRequest = {
-    val json = new util.HashMap[String, AnyRef]
-    json.put("data", element)
-    println("SENDING: " + element)
-    Requests.indexRequest.index("my-index").`type`("my-type").source(json)
-  }
-}))
-{% endhighlight %}
-</div>
-</div>
-
-The difference is that we now need to provide a list of Elasticsearch Nodes
-to which the sink should connect using a `TransportClient`.
-
-More information about Elasticsearch can be found [here](https://elastic.co).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/elasticsearch2.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/elasticsearch2.md b/docs/apis/streaming/connectors/elasticsearch2.md
deleted file mode 100644
index 36d0920..0000000
--- a/docs/apis/streaming/connectors/elasticsearch2.md
+++ /dev/null
@@ -1,144 +0,0 @@
----
-title: "Elasticsearch 2.x Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 2
-sub-nav-title: Elasticsearch 2.x
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This connector provides a Sink that can write to an
-[Elasticsearch 2.x](https://elastic.co/) Index. To use this connector, add the
-following dependency to your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-elasticsearch2{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-Note that the streaming connectors are currently not part of the binary
-distribution. See
-[here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
-for information about how to package the program with the libraries for
-cluster execution.
-
-#### Installing Elasticsearch 2.x
-
-Instructions for setting up an Elasticsearch cluster can be found
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
-Make sure to set and remember a cluster name. This must be set when
-creating a Sink for writing to your cluster.
-
-#### Elasticsearch 2.x Sink
-The connector provides a Sink that can send data to an Elasticsearch 2.x Index.
-
-The sink communicates with Elasticsearch via the Transport Client.
-
-See [here](https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html)
-for information about the Transport Client.
-
-The code below shows how to create a sink that uses a `TransportClient` for communication:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-File dataDir = ...;
-
-DataStream<String> input = ...;
-
-Map<String, String> config = new HashMap<>();
-// This instructs the sink to emit after every element, otherwise they would be buffered
-config.put("bulk.flush.max.actions", "1");
-config.put("cluster.name", "my-cluster-name");
-
-List<InetSocketAddress> transports = new ArrayList<>();
-transports.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300));
-transports.add(new InetSocketAddress(InetAddress.getByName("10.2.3.1"), 9300));
-
-input.addSink(new ElasticsearchSink(config, transports, new ElasticsearchSinkFunction<String>() {
-  public IndexRequest createIndexRequest(String element) {
-    Map<String, String> json = new HashMap<>();
-    json.put("data", element);
-
-    return Requests.indexRequest()
-            .index("my-index")
-            .type("my-type")
-            .source(json);
-  }
-
-  @Override
-  public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
-    indexer.add(createIndexRequest(element));
-  }
-}));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val dataDir = ...
-
-val input: DataStream[String] = ...
-
-val config = new util.HashMap[String, String]
-config.put("bulk.flush.max.actions", "1")
-config.put("cluster.name", "my-cluster-name")
-
-val transports = new util.ArrayList[InetSocketAddress]
-transports.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300))
-transports.add(new InetSocketAddress(InetAddress.getByName("10.2.3.1"), 9300))
-
-input.addSink(new ElasticsearchSink(config, transports, new ElasticsearchSinkFunction[String] {
-  def createIndexRequest(element: String): IndexRequest = {
-    val json = new util.HashMap[String, AnyRef]
-    json.put("data", element)
-    Requests.indexRequest.index("my-index").`type`("my-type").source(json)
-  }
-
-  override def process(element: String, ctx: RuntimeContext, indexer: RequestIndexer) {
-    indexer.add(createIndexRequest(element))
-  }
-}))
-{% endhighlight %}
-</div>
-</div>
-
-A Map of Strings is used to configure the Sink. The configuration keys
-are documented in the Elasticsearch documentation
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
-Especially important is the `cluster.name` parameter, which must correspond to
-the name of your cluster; with Elasticsearch 2.x you also need to specify `path.home`.
-
-Internally, the sink uses a `BulkProcessor` to send action requests to the cluster.
-This will buffer elements and action requests before sending them to the cluster. The behaviour of the
-`BulkProcessor` can be configured using these config keys (a combined example follows the list):
- * **bulk.flush.max.actions**: Maximum number of elements to buffer
- * **bulk.flush.max.size.mb**: Maximum amount of data (in megabytes) to buffer
- * **bulk.flush.interval.ms**: Interval in milliseconds at which to flush data regardless of the other two
-  settings
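-
-For illustration, a configuration that sets the required keys and tunes the flush behaviour
-could look like this (a sketch; the values and the `path.home` location are examples):
-
-{% highlight java %}
-Map<String, String> config = new HashMap<>();
-config.put("cluster.name", "my-cluster-name");
-config.put("path.home", "/path/to/elasticsearch-home");
-// flush after 1000 buffered actions or at least once per minute
-config.put("bulk.flush.max.actions", "1000");
-config.put("bulk.flush.interval.ms", "60000");
-{% endhighlight %}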
-
-Note that we now provide a list of Elasticsearch nodes to which the sink
-connects via a `TransportClient`.
-
-More information about Elasticsearch can be found [here](https://elastic.co).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/filesystem_sink.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/filesystem_sink.md b/docs/apis/streaming/connectors/filesystem_sink.md
deleted file mode 100644
index f2dc012..0000000
--- a/docs/apis/streaming/connectors/filesystem_sink.md
+++ /dev/null
@@ -1,133 +0,0 @@
----
-title: "HDFS Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 3
-sub-nav-title: Filesystem Sink
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This connector provides a Sink that writes rolling files to any filesystem supported by
-Hadoop FileSystem. To use this connector, add the
-following dependency to your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-filesystem{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version}}</version>
-</dependency>
-{% endhighlight %}
-
-Note that the streaming connectors are currently not part of the binary
-distribution. See
-[here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
-for information about how to package the program with the libraries for
-cluster execution.
-
-#### Rolling File Sink
-
-Both the rolling behaviour and the writing can be configured; we will get to that later.
-This is how you can create a default rolling sink:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<String> input = ...;
-
-input.addSink(new RollingSink<String>("/base/path"));
-
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[String] = ...
-
-input.addSink(new RollingSink("/base/path"))
-
-{% endhighlight %}
-</div>
-</div>
-
-The only required parameter is the base path where the rolling files (buckets) will be
-stored. The sink can be configured by specifying a custom bucketer, writer and batch size.
-
-By default the rolling sink will use the pattern `"yyyy-MM-dd--HH"` to name the rolling buckets.
-This pattern is passed to `SimpleDateFormat` with the current system time to form a bucket path. A
-new bucket will be created whenever the bucket path changes. For example, if you have a pattern
-that contains minutes as the finest granularity you will get a new bucket every minute.
-Each bucket is itself a directory that contains several part files: Each parallel instance
-of the sink will create its own part file and when part files get too big the sink will also
-create a new part file next to the others. To specify a custom bucketer use `setBucketer()`
-on a `RollingSink`.
-
-The default writer is `StringWriter`. This will call `toString()` on the incoming elements
-and write them to part files, separated by newline. To specify a custom writer use `setWriter()`
-on a `RollingSink`. If you want to write Hadoop SequenceFiles you can use the provided
-`SequenceFileWriter` which can also be configured to use compression.
-
-The last configuration option is the batch size. This specifies when a part file should be closed
-and a new one started. (The default part file size is 384 MB).
-
-Example:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Tuple2<IntWritable,Text>> input = ...;
-
-RollingSink<Tuple2<IntWritable, Text>> sink = new RollingSink<>("/base/path");
-sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HHmm"));
-sink.setWriter(new SequenceFileWriter<IntWritable, Text>());
-sink.setBatchSize(1024 * 1024 * 400); // this is 400 MB
-
-input.addSink(sink);
-
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[Tuple2[IntWritable, Text]] = ...
-
-val sink = new RollingSink[Tuple2[IntWritable, Text]]("/base/path")
-sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HHmm"))
-sink.setWriter(new SequenceFileWriter[IntWritable, Text]())
-sink.setBatchSize(1024 * 1024 * 400) // this is 400 MB
-
-input.addSink(sink)
-
-{% endhighlight %}
-</div>
-</div>
-
-This will create a sink that writes to bucket files that follow this schema:
-
-```
-/base/path/{date-time}/part-{parallel-task}-{count}
-```
-
-Where `date-time` is the string that we get from the date/time format, `parallel-task` is the index
-of the parallel sink instance and `count` is the running number of part files that were created
-because of the batch size.
-
-For in-depth information, please refer to the JavaDoc for
-[RollingSink](http://flink.apache.org/docs/latest/api/java/org/apache/flink/streaming/connectors/fs/RollingSink.html).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/index.md b/docs/apis/streaming/connectors/index.md
deleted file mode 100644
index 83ca514..0000000
--- a/docs/apis/streaming/connectors/index.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-title: "Streaming Connectors"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-id: connectors
-sub-nav-pos: 6
-sub-nav-title: Connectors
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Connectors provide code for interfacing with various third-party systems.
-
-Currently these systems are supported:
-
- * [Apache Kafka](https://kafka.apache.org/) (sink/source)
- * [Elasticsearch](https://elastic.co/) (sink)
- * [Elasticsearch 2.x](https://elastic.com) (sink)
- * [Hadoop FileSystem](http://hadoop.apache.org) (sink)
- * [RabbitMQ](http://www.rabbitmq.com/) (sink/source)
- * [Amazon Kinesis Streams](http://aws.amazon.com/kinesis/streams/) (sink/source)
- * [Twitter Streaming API](https://dev.twitter.com/docs/streaming-apis) (source)
- * [Apache NiFi](https://nifi.apache.org) (sink/source)
- * [Apache Cassandra](https://cassandra.apache.org/) (sink)
- * [Redis](http://redis.io/) (sink)
-
-To run an application using one of these connectors, additional third-party
-components usually need to be installed and launched, e.g. the servers
-for the message queues. Further instructions for these can be found in the
-corresponding subsections.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/connectors/kafka.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/connectors/kafka.md b/docs/apis/streaming/connectors/kafka.md
deleted file mode 100644
index e7cd05b..0000000
--- a/docs/apis/streaming/connectors/kafka.md
+++ /dev/null
@@ -1,293 +0,0 @@
----
-title: "Apache Kafka Connector"
-
-# Sub-level navigation
-sub-nav-group: streaming
-sub-nav-parent: connectors
-sub-nav-pos: 1
-sub-nav-title: Kafka
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This connector provides access to event streams served by [Apache Kafka](https://kafka.apache.org/).
-
-Flink provides special Kafka Connectors for reading and writing data from/to Kafka topics.
-The Flink Kafka Consumer integrates with Flink's checkpointing mechanism to provide
-exactly-once processing semantics. To achieve that, Flink does not purely rely on Kafka's consumer group
-offset tracking, but tracks and checkpoints these offsets internally as well.
-
-Please pick a package (maven artifact id) and class name for your use-case and environment.
-For most users, the `FlinkKafkaConsumer08` (part of `flink-connector-kafka-0.8`) is appropriate.
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left">Maven Dependency</th>
-      <th class="text-left">Supported since</th>
-      <th class="text-left">Consumer and <br>
-      Producer Class name</th>
-      <th class="text-left">Kafka version</th>
-      <th class="text-left">Notes</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-        <td>flink-connector-kafka</td>
-        <td>0.9.1, 0.10</td>
-        <td>FlinkKafkaConsumer082<br>
-        FlinkKafkaProducer</td>
-        <td>0.8.x</td>
-        <td>Uses the <a href="https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example">SimpleConsumer</a> API of Kafka internally. Offsets are committed to ZK by Flink.</td>
-    </tr>
-     <tr>
-        <td>flink-connector-kafka-0.8{{ site.scala_version_suffix }}</td>
-        <td>1.0.0</td>
-        <td>FlinkKafkaConsumer08<br>
-        FlinkKafkaProducer08</td>
-        <td>0.8.x</td>
-        <td>Uses the <a href="https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example">SimpleConsumer</a> API of Kafka internally. Offsets are committed to ZK by Flink.</td>
-    </tr>
-     <tr>
-        <td>flink-connector-kafka-0.9{{ site.scala_version_suffix }}</td>
-        <td>1.0.0</td>
-        <td>FlinkKafkaConsumer09<br>
-        FlinkKafkaProducer09</td>
-        <td>0.9.x</td>
-        <td>Uses the new <a href="http://kafka.apache.org/documentation.html#newconsumerapi">Consumer API</a> of Kafka.</td>
-    </tr>
-  </tbody>
-</table>
-
-Then, import the connector in your maven project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-kafka-0.8{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-Note that the streaming connectors are currently not part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-### Installing Apache Kafka
-
-* Follow the instructions from [Kafka's quickstart](https://kafka.apache.org/documentation.html#quickstart) to download the code and launch a server (launching a Zookeeper and a Kafka server is required every time before starting the application).
-* On 32 bit computers [this](http://stackoverflow.com/questions/22325364/unrecognized-vm-option-usecompressedoops-when-running-kafka-from-my-ubuntu-in) problem may occur.
-* If the Kafka and Zookeeper servers are running on a remote machine, then the `advertised.host.name` setting in the `config/server.properties` file must be set to the machine's IP address, for example:
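-
-{% highlight bash %}
-# config/server.properties -- the IP address below is an example
-advertised.host.name=192.168.1.10
-{% endhighlight %}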
-
-### Kafka Consumer
-
-Flink's Kafka consumer is called `FlinkKafkaConsumer08` (or `09` for Kafka 0.9.0.x versions). It provides access to one or more Kafka topics.
-
-The constructor accepts the following arguments:
-
-1. The topic name / list of topic names
-2. A DeserializationSchema / KeyedDeserializationSchema for deserializing the data from Kafka
-3. Properties for the Kafka consumer.
-  The following properties are required:
-  - "bootstrap.servers" (comma separated list of Kafka brokers)
-  - "zookeeper.connect" (comma separated list of Zookeeper servers) (**only required for Kafka 0.8**)
-  - "group.id" the id of the consumer group
-
-Example:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Properties properties = new Properties();
-properties.setProperty("bootstrap.servers", "localhost:9092");
-// only required for Kafka 0.8
-properties.setProperty("zookeeper.connect", "localhost:2181");
-properties.setProperty("group.id", "test");
-DataStream<String> stream = env
-	.addSource(new FlinkKafkaConsumer08<>("topic", new SimpleStringSchema(), properties))
-	.print();
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val properties = new Properties();
-properties.setProperty("bootstrap.servers", "localhost:9092");
-// only required for Kafka 0.8
-properties.setProperty("zookeeper.connect", "localhost:2181");
-properties.setProperty("group.id", "test");
-stream = env
-    .addSource(new FlinkKafkaConsumer08[String]("topic", new SimpleStringSchema(), properties))
-    .print
-{% endhighlight %}
-</div>
-</div>
-
-The current FlinkKafkaConsumer implementation will establish a connection from the client (when calling the constructor)
-for querying the list of topics and partitions.
-
-For this to work, the consumer needs to be able to reach the Kafka brokers from the machine submitting the job to the Flink cluster.
-If you experience any issues with the Kafka consumer on the client side, the client log might contain information about failed requests, etc.
-
-#### The `DeserializationSchema`
-
-The Flink Kafka Consumer needs to know how to turn the binary data in Kafka into Java/Scala objects. The 
-`DeserializationSchema` allows users to specify such a schema. The `T deserialize(byte[] message)`
-method gets called for each Kafka message, passing the value from Kafka.
-
-It is usually helpful to start from the `AbstractDeserializationSchema`, which takes care of describing the
-produced Java/Scala type to Flink's type system. Users that implement a vanilla `DeserializationSchema` need
-to implement the `getProducedType(...)` method themselves.
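-
-For example, a minimal schema for UTF-8 encoded strings could look like the sketch below
-(Flink already ships a `SimpleStringSchema` for this case):
-
-{% highlight java %}
-public class UTF8StringSchema extends AbstractDeserializationSchema<String> {
-
-  @Override
-  public String deserialize(byte[] message) throws IOException {
-    // interpret the raw Kafka message bytes as a UTF-8 string
-    return new String(message, StandardCharsets.UTF_8);
-  }
-}
-{% endhighlight %}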
-
-For accessing both the key and value of the Kafka message, the `KeyedDeserializationSchema` has
-the following deserialize method: `T deserialize(byte[] messageKey, byte[] message, String topic, int partition, long offset)`.
-
-For convenience, Flink provides the following schemas:
-
-1. `TypeInformationSerializationSchema` (and `TypeInformationKeyValueSerializationSchema`) which creates
-    a schema based on Flink's `TypeInformation`. This is useful if the data is both written and read by Flink.
-    This schema is a performant Flink-specific alternative to other generic serialization approaches.
-    A usage sketch follows this list.
-
-2. `JsonDeserializationSchema` (and `JSONKeyValueDeserializationSchema`) which turns the serialized JSON
-    into an ObjectNode object, from which fields can be accessed using `objectNode.get("field").as(Int/String/...)()`.
-    The KeyValue objectNode contains a "key" and "value" field which contain all fields, as well as
-    an optional "metadata" field that exposes the offset/partition/topic for this message.
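-
-As a usage sketch, a `TypeInformationSerializationSchema` for `String` values could be created
-like this (assuming `env` is the `StreamExecutionEnvironment`) and passed to the consumer
-constructor in place of `SimpleStringSchema`:
-
-{% highlight java %}
-TypeInformationSerializationSchema<String> schema =
-    new TypeInformationSerializationSchema<>(BasicTypeInfo.STRING_TYPE_INFO, env.getConfig());
-{% endhighlight %}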
-
-#### Kafka Consumers and Fault Tolerance
-
-With Flink's checkpointing enabled, the Flink Kafka Consumer will consume records from a topic and periodically checkpoint all
-its Kafka offsets, together with the state of other operations, in a consistent manner. In case of a job failure, Flink will restore
-the streaming program to the state of the latest checkpoint and re-consume the records from Kafka, starting from the offsets that were
-stored in the checkpoint.
-
-The interval of drawing checkpoints therefore defines how far the program may have to go back, at most, in case of a failure.
-
-To use fault-tolerant Kafka Consumers, checkpointing of the topology needs to be enabled at the execution environment:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.enableCheckpointing(5000); // checkpoint every 5000 msecs
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment()
-env.enableCheckpointing(5000) // checkpoint every 5000 msecs
-{% endhighlight %}
-</div>
-</div>
-
-Also note that Flink can only restart the topology if enough processing slots are available.
-So if the topology fails due to loss of a TaskManager, there must still be enough slots available afterwards.
-Flink on YARN supports automatic restart of lost YARN containers.
-
-If checkpointing is not enabled, the Kafka consumer will periodically commit the offsets to Zookeeper.
-
-#### Kafka Consumers and Timestamp Extraction/Watermark Emission
-
-In many scenarios, the timestamp of a record is embedded (explicitly or implicitly) in the record itself. 
-In addition, the user may want to emit watermarks either periodically, or in an irregular fashion, e.g. based on
-special records in the Kafka stream that contain the current event-time watermark. For these cases, the Flink Kafka 
-Consumer allows the specification of an `AssignerWithPeriodicWatermarks` or an `AssignerWithPunctuatedWatermarks`.
-
-You can specify your custom timestamp extractor/watermark emitter as described 
-[here]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html), or use one from the 
-[predefined ones]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html). After doing so, you 
-can pass it to your consumer in the following way:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Properties properties = new Properties();
-properties.setProperty("bootstrap.servers", "localhost:9092");
-// only required for Kafka 0.8
-properties.setProperty("zookeeper.connect", "localhost:2181");
-properties.setProperty("group.id", "test");
-
-FlinkKafkaConsumer08<String> myConsumer = 
-    new FlinkKafkaConsumer08<>("topic", new SimpleStringSchema(), properties);
-myConsumer.assignTimestampsAndWatermarks(new CustomWatermarkEmitter());
-
-DataStream<String> stream = env
-	.addSource(myConsumer)
-	.print();
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val properties = new Properties();
-properties.setProperty("bootstrap.servers", "localhost:9092");
-// only required for Kafka 0.8
-properties.setProperty("zookeeper.connect", "localhost:2181");
-properties.setProperty("group.id", "test");
-
-val myConsumer = new FlinkKafkaConsumer08[String]("topic", new SimpleStringSchema(), properties);
-myConsumer.assignTimestampsAndWatermarks(new CustomWatermarkEmitter());
-stream = env
-    .addSource(myConsumer)
-    .print
-{% endhighlight %}
-</div>
-</div>
- 
-Internally, an instance of the assigner is executed per Kafka partition.
-When such an assigner is specified, for each record read from Kafka, the 
-`extractTimestamp(T element, long previousElementTimestamp)` is called to assign a timestamp to the record and 
-the `Watermark getCurrentWatermark()` (for periodic) or the 
-`Watermark checkAndGetNextWatermark(T lastElement, long extractedTimestamp)` (for punctuated) is called to determine 
-if a new watermark should be emitted and with which timestamp.
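-
-As a sketch, a punctuated assigner for a hypothetical `MyEvent` type (the `getCreationTime()`
-and `hasWatermarkMarker()` accessors are assumptions for illustration) could look like this:
-
-{% highlight java %}
-public class PunctuatedAssigner implements AssignerWithPunctuatedWatermarks<MyEvent> {
-
-  @Override
-  public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
-    // the timestamp is embedded in the record itself
-    return element.getCreationTime();
-  }
-
-  @Override
-  public Watermark checkAndGetNextWatermark(MyEvent lastElement, long extractedTimestamp) {
-    // emit a watermark only when a special marker record is seen, otherwise return null
-    return lastElement.hasWatermarkMarker() ? new Watermark(extractedTimestamp) : null;
-  }
-}
-{% endhighlight %}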
-
-### Kafka Producer
-
-The `FlinkKafkaProducer08` writes data to a Kafka topic. The producer can specify a custom partitioner that assigns
-records to partitions.
-
-Example:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-stream.addSink(new FlinkKafkaProducer08<String>("localhost:9092", "my-topic", new SimpleStringSchema()));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-stream.addSink(new FlinkKafkaProducer08[String]("localhost:9092", "my-topic", new SimpleStringSchema()))
-{% endhighlight %}
-</div>
-</div>
-
-You can also pass a custom Kafka producer configuration to the producer's constructor. Please refer to
-the [Apache Kafka documentation](https://kafka.apache.org/documentation.html) for details on how to configure
-Kafka Producers.
-
-Similar to the consumer, the producer also allows using an advanced serialization schema which allows
-serializing the key and value separately. It also allows overriding the target topic id, so that
-one producer instance can send data to multiple topics.
-
-The interface of the serialization schema is called `KeyedSerializationSchema`.
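-
-A minimal sketch of such a schema for `String` records could look like this (returning `null`
-from `getTargetTopic()` keeps the topic that was configured on the producer):
-
-{% highlight java %}
-public class StringKeyedSchema implements KeyedSerializationSchema<String> {
-
-  @Override
-  public byte[] serializeKey(String element) {
-    return null; // no key; Kafka's partitioner decides the partition
-  }
-
-  @Override
-  public byte[] serializeValue(String element) {
-    return element.getBytes(StandardCharsets.UTF_8);
-  }
-
-  @Override
-  public String getTargetTopic(String element) {
-    return null; // use the producer's default topic
-  }
-}
-{% endhighlight %}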
-
-
-**Note**: By default, the number of retries is set to "0". This means that the producer fails immediately on errors,
-including leader changes. The value is set to "0" by default to avoid duplicate messages in the target topic.
-For most production environments with frequent broker changes, we recommend setting the number of retries to a 
-higher value.
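-
-For example, retries can be raised through the producer configuration passed to the
-constructor (a sketch; tune the value to your environment):
-
-{% highlight java %}
-Properties producerConfig = new Properties();
-producerConfig.setProperty("bootstrap.servers", "localhost:9092");
-// retry failed sends a few times instead of failing immediately, at the risk of duplicates
-producerConfig.setProperty("retries", "3");
-
-stream.addSink(new FlinkKafkaProducer08<String>("my-topic", new SimpleStringSchema(), producerConfig));
-{% endhighlight %}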
-
-There is currently no transactional producer for Kafka, so Flink cannot guarantee exactly-once delivery
-into a Kafka topic.
-


[17/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/datastream_api.md
----------------------------------------------------------------------
diff --git a/docs/dev/datastream_api.md b/docs/dev/datastream_api.md
new file mode 100644
index 0000000..2dd7842
--- /dev/null
+++ b/docs/dev/datastream_api.md
@@ -0,0 +1,1779 @@
+---
+title: "Flink DataStream API Programming Guide"
+nav-title: Streaming (DataStream API)
+nav-parent_id: apis
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+DataStream programs in Flink are regular programs that implement transformations on data streams
+(e.g., filtering, updating state, defining windows, aggregating). The data streams are initially created from various
+sources (e.g., message queues, socket streams, files). Results are returned via sinks, which may for
+example write the data to files, or to standard output (for example the command line
+terminal). Flink programs run in a variety of contexts, standalone, or embedded in other programs.
+The execution can happen in a local JVM, or on clusters of many machines.
+
+Please see [basic concepts]({{ site.baseurl }}/dev/api_concepts.html) for an introduction
+to the basic concepts of the Flink API.
+
+In order to create your own Flink DataStream program, we encourage you to start with
+[anatomy of a Flink Program]({{ site.baseurl }}/dev/api_concepts.html#anatomy-of-a-flink-program)
+and gradually add your own
+[transformations](#datastream-transformations). The remaining sections act as references for additional
+operations and advanced features.
+
+
+* This will be replaced by the TOC
+{:toc}
+
+
+Example Program
+---------------
+
+The following program is a complete, working example of a streaming window word count application that counts the
+words coming from a web socket in 5-second windows. You can copy &amp; paste the code to run it locally.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+{% highlight java %}
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.windowing.time.Time;
+import org.apache.flink.util.Collector;
+
+public class WindowWordCount {
+
+    public static void main(String[] args) throws Exception {
+
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+        DataStream<Tuple2<String, Integer>> dataStream = env
+                .socketTextStream("localhost", 9999)
+                .flatMap(new Splitter())
+                .keyBy(0)
+                .timeWindow(Time.seconds(5))
+                .sum(1);
+
+        dataStream.print();
+
+        env.execute("Window WordCount");
+    }
+
+    public static class Splitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
+        @Override
+        public void flatMap(String sentence, Collector<Tuple2<String, Integer>> out) throws Exception {
+            for (String word: sentence.split(" ")) {
+                out.collect(new Tuple2<String, Integer>(word, 1));
+            }
+        }
+    }
+
+}
+{% endhighlight %}
+
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+
+import org.apache.flink.streaming.api.scala._
+import org.apache.flink.streaming.api.windowing.time.Time
+
+object WindowWordCount {
+  def main(args: Array[String]) {
+
+    val env = StreamExecutionEnvironment.getExecutionEnvironment
+    val text = env.socketTextStream("localhost", 9999)
+
+    val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
+      .map { (_, 1) }
+      .keyBy(0)
+      .timeWindow(Time.seconds(5))
+      .sum(1)
+
+    counts.print
+
+    env.execute("Window Stream WordCount")
+  }
+}
+{% endhighlight %}
+</div>
+
+</div>
+
+To run the example program, start the input stream with netcat first from a terminal:
+
+~~~bash
+nc -lk 9999
+~~~
+
+Just type some words, hitting return for a new word. These will be the input to the
+word count program. If you want to see counts greater than 1, type the same word again and again within
+5 seconds (increase the window size from 5 seconds if you cannot type that fast &#9786;).
+
+{% top %}
+
+DataStream Transformations
+--------------------------
+
+Data transformations transform one or more DataStreams into a new DataStream. Programs can combine
+multiple transformations into sophisticated topologies.
+
+This section gives a description of all the available transformations.
+
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 25%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+          <td><strong>Map</strong><br>DataStream &rarr; DataStream</td>
+          <td>
+            <p>Takes one element and produces one element. A map function that doubles the values of the input stream:</p>
+    {% highlight java %}
+DataStream<Integer> dataStream = //...
+dataStream.map(new MapFunction<Integer, Integer>() {
+    @Override
+    public Integer map(Integer value) throws Exception {
+        return 2 * value;
+    }
+});
+    {% endhighlight %}
+          </td>
+        </tr>
+
+        <tr>
+          <td><strong>FlatMap</strong><br>DataStream &rarr; DataStream</td>
+          <td>
+            <p>Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences into words:</p>
+    {% highlight java %}
+dataStream.flatMap(new FlatMapFunction<String, String>() {
+    @Override
+    public void flatMap(String value, Collector<String> out)
+        throws Exception {
+        for(String word: value.split(" ")){
+            out.collect(word);
+        }
+    }
+});
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Filter</strong><br>DataStream &rarr; DataStream</td>
+          <td>
+            <p>Evaluates a boolean function for each element and retains those for which the function returns true.
+            A filter that filters out zero values:
+            </p>
+    {% highlight java %}
+dataStream.filter(new FilterFunction<Integer>() {
+    @Override
+    public boolean filter(Integer value) throws Exception {
+        return value != 0;
+    }
+});
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>KeyBy</strong><br>DataStream &rarr; KeyedStream</td>
+          <td>
+            <p>Logically partitions a stream into disjoint partitions, each partition containing elements of the same key.
+            Internally, this is implemented with hash partitioning. See <a href="#specifying-keys">keys</a> on how to specify keys.
+            This transformation returns a KeyedStream.</p>
+    {% highlight java %}
+dataStream.keyBy("someKey") // Key by field "someKey"
+dataStream.keyBy(0) // Key by the first element of a Tuple
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Reduce</strong><br>KeyedStream &rarr; DataStream</td>
+          <td>
+            <p>A "rolling" reduce on a keyed data stream. Combines the current element with the last reduced value and
+            emits the new value.
+                    <br/>
+            	<br/>
+            A reduce function that creates a stream of partial sums:</p>
+            {% highlight java %}
+keyedStream.reduce(new ReduceFunction<Integer>() {
+    @Override
+    public Integer reduce(Integer value1, Integer value2)
+    throws Exception {
+        return value1 + value2;
+    }
+});
+            {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Fold</strong><br>KeyedStream &rarr; DataStream</td>
+          <td>
+          <p>A "rolling" fold on a keyed data stream with an initial value.
+          Combines the current element with the last folded value and
+          emits the new value.
+          <br/>
+          <br/>
+          <p>A fold function that, when applied on the sequence (1,2,3,4,5),
+          emits the sequence "start-1", "start-1-2", "start-1-2-3", ...</p>
+          {% highlight java %}
+DataStream<String> result =
+  keyedStream.fold("start", new FoldFunction<Integer, String>() {
+    @Override
+    public String fold(String current, Integer value) {
+        return current + "-" + value;
+    }
+  });
+          {% endhighlight %}
+          </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Aggregations</strong><br>KeyedStream &rarr; DataStream</td>
+          <td>
+            <p>Rolling aggregations on a keyed data stream. The difference between min
+	    and minBy is that min returns the minimum value, whereas minBy returns
+	    the element that has the minimum value in this field (same for max and maxBy).</p>
+    {% highlight java %}
+keyedStream.sum(0);
+keyedStream.sum("key");
+keyedStream.min(0);
+keyedStream.min("key");
+keyedStream.max(0);
+keyedStream.max("key");
+keyedStream.minBy(0);
+keyedStream.minBy("key");
+keyedStream.maxBy(0);
+keyedStream.maxBy("key");
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window</strong><br>KeyedStream &rarr; WindowedStream</td>
+          <td>
+            <p>Windows can be defined on already partitioned KeyedStreams. Windows group the data in each
+            key according to some characteristic (e.g., the data that arrived within the last 5 seconds).
+            See <a href="windows.html">windows</a> for a complete description of windows.
+    {% highlight java %}
+dataStream.keyBy(0).window(TumblingEventTimeWindows.of(Time.seconds(5))); // Last 5 seconds of data
+    {% endhighlight %}
+        </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>WindowAll</strong><br>DataStream &rarr; AllWindowedStream</td>
+          <td>
+              <p>Windows can be defined on regular DataStreams. Windows group all the stream events
+              according to some characteristic (e.g., the data that arrived within the last 5 seconds).
+              See <a href="windows.html">windows</a> for a complete description of windows.</p>
+              <p><strong>WARNING:</strong> This is in many cases a <strong>non-parallel</strong> transformation. All records will be
+               gathered in one task for the windowAll operator.</p>
+  {% highlight java %}
+dataStream.windowAll(TumblingEventTimeWindows.of(Time.seconds(5))); // Last 5 seconds of data
+  {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window Apply</strong><br>WindowedStream &rarr; DataStream<br>AllWindowedStream &rarr; DataStream</td>
+          <td>
+            <p>Applies a general function to the window as a whole. Below is a function that manually sums the elements of a window.</p>
+            <p><strong>Note:</strong> If you are using a windowAll transformation, you need to use an AllWindowFunction instead.</p>
+    {% highlight java %}
+windowedStream.apply (new WindowFunction<Tuple2<String,Integer>, Integer, Tuple, Window>() {
+    public void apply (Tuple tuple,
+            Window window,
+            Iterable<Tuple2<String, Integer>> values,
+            Collector<Integer> out) throws Exception {
+        int sum = 0;
+        for (Tuple2<String, Integer> t: values) {
+            sum += t.f1;
+        }
+        out.collect(sum);
+    }
+});
+
+// applying an AllWindowFunction on non-keyed window stream
+allWindowedStream.apply (new AllWindowFunction<Tuple2<String,Integer>, Integer, Window>() {
+    public void apply (Window window,
+            Iterable<Tuple2<String, Integer>> values,
+            Collector<Integer> out) throws Exception {
+        int sum = 0;
+        for (Tuple2<String, Integer> t: values) {
+            sum += t.f1;
+        }
+        out.collect(sum);
+    }
+});
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window Reduce</strong><br>WindowedStream &rarr; DataStream</td>
+          <td>
+            <p>Applies a functional reduce function to the window and returns the reduced value.</p>
+    {% highlight java %}
+windowedStream.reduce(new ReduceFunction<Tuple2<String,Integer>>() {
+    public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) throws Exception {
+        return new Tuple2<String,Integer>(value1.f0, value1.f1 + value2.f1);
+    }
+});
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window Fold</strong><br>WindowedStream &rarr; DataStream</td>
+          <td>
+            <p>Applies a functional fold function to the window and returns the folded value.
+               The example function, when applied on the sequence (1,2,3,4,5),
+               folds the sequence into the string "start-1-2-3-4-5":</p>
+    {% highlight java %}
+windowedStream.fold("start-", new FoldFunction<Integer, String>() {
+    public String fold(String current, Integer value) {
+        return current + "-" + value;
+    }
+};
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Aggregations on windows</strong><br>WindowedStream &rarr; DataStream</td>
+          <td>
+            <p>Aggregates the contents of a window. The difference between min
+            and minBy is that min returns the minimum value, whereas minBy returns
+            the element that has the minimum value in this field (same for max and maxBy).</p>
+    {% highlight java %}
+windowedStream.sum(0);
+windowedStream.sum("key");
+windowedStream.min(0);
+windowedStream.min("key");
+windowedStream.max(0);
+windowedStream.max("key");
+windowedStream.minBy(0);
+windowedStream.minBy("key");
+windowedStream.maxBy(0);
+windowedStream.maxBy("key");
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Union</strong><br>DataStream* &rarr; DataStream</td>
+          <td>
+            <p>Union of two or more data streams creating a new stream containing all the elements from all the streams. Note: If you union a data stream
+            with itself you will get each element twice in the resulting stream.</p>
+    {% highlight java %}
+dataStream.union(otherStream1, otherStream2, ...);
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window Join</strong><br>DataStream,DataStream &rarr; DataStream</td>
+          <td>
+            <p>Join two data streams on a given key and a common window.</p>
+    {% highlight java %}
+dataStream.join(otherStream)
+    .where(0).equalTo(1)
+    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
+    .apply (new JoinFunction () {...});
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window CoGroup</strong><br>DataStream,DataStream &rarr; DataStream</td>
+          <td>
+            <p>Cogroups two data streams on a given key and a common window.</p>
+    {% highlight java %}
+dataStream.coGroup(otherStream)
+    .where(0).equalTo(1)
+    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
+    .apply (new CoGroupFunction () {...});
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Connect</strong><br>DataStream,DataStream &rarr; ConnectedStreams</td>
+          <td>
+            <p>"Connects" two data streams retaining their types. Connect allowing for shared state between
+            the two streams.</p>
+    {% highlight java %}
+DataStream<Integer> someStream = //...
+DataStream<String> otherStream = //...
+
+ConnectedStreams<Integer, String> connectedStreams = someStream.connect(otherStream);
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>CoMap, CoFlatMap</strong><br>ConnectedStreams &rarr; DataStream</td>
+          <td>
+            <p>Similar to map and flatMap on a connected data stream.</p>
+    {% highlight java %}
+connectedStreams.map(new CoMapFunction<Integer, String, Boolean>() {
+    @Override
+    public Boolean map1(Integer value) {
+        return true;
+    }
+
+    @Override
+    public Boolean map2(String value) {
+        return false;
+    }
+});
+connectedStreams.flatMap(new CoFlatMapFunction<Integer, String, String>() {
+
+   @Override
+   public void flatMap1(Integer value, Collector<String> out) {
+       out.collect(value.toString());
+   }
+
+   @Override
+   public void flatMap2(String value, Collector<String> out) {
+       for (String word: value.split(" ")) {
+         out.collect(word);
+       }
+   }
+});
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Split</strong><br>DataStream &rarr; SplitStream</td>
+          <td>
+            <p>
+                Split the stream into two or more streams according to some criterion.
+                {% highlight java %}
+SplitStream<Integer> split = someDataStream.split(new OutputSelector<Integer>() {
+    @Override
+    public Iterable<String> select(Integer value) {
+        List<String> output = new ArrayList<String>();
+        if (value % 2 == 0) {
+            output.add("even");
+        }
+        else {
+            output.add("odd");
+        }
+        return output;
+    }
+});
+                {% endhighlight %}
+            </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Select</strong><br>SplitStream &rarr; DataStream</td>
+          <td>
+            <p>
+                Select one or more streams from a split stream.
+                {% highlight java %}
+SplitStream<Integer> split;
+DataStream<Integer> even = split.select("even");
+DataStream<Integer> odd = split.select("odd");
+DataStream<Integer> all = split.select("even","odd");
+                {% endhighlight %}
+            </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Iterate</strong><br>DataStream &rarr; IterativeStream &rarr; DataStream</td>
+          <td>
+            <p>
+                Creates a "feedback" loop in the flow, by redirecting the output of one operator
+                to some previous operator. This is especially useful for defining algorithms that
+                continuously update a model. The following code starts with a stream and applies
+		the iteration body continuously. Elements that are greater than 0 are sent back
+		to the feedback channel, and the rest of the elements are forwarded downstream.
+		See <a href="#iterations">iterations</a> for a complete description.
+                {% highlight java %}
+IterativeStream<Long> iteration = initialStream.iterate();
+DataStream<Long> iterationBody = iteration.map (/*do something*/);
+DataStream<Long> feedback = iterationBody.filter(new FilterFunction<Long>(){
+    @Override
+    public boolean filter(Long value) throws Exception {
+        return value > 0;
+    }
+});
+iteration.closeWith(feedback);
+DataStream<Long> output = iterationBody.filter(new FilterFunction<Long>(){
+    @Override
+    public boolean filter(Long value) throws Exception {
+        return value <= 0;
+    }
+});
+                {% endhighlight %}
+            </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Extract Timestamps</strong><br>DataStream &rarr; DataStream</td>
+          <td>
+            <p>
+                Extracts timestamps from records in order to work with windows
+                that use event time semantics. See <a href="{{ site.baseurl }}/dev/event_time.html">Event Time</a>.
+                {% highlight java %}
+stream.assignTimestamps (new TimeStampExtractor() {...});
+                {% endhighlight %}
+            </p>
+          </td>
+        </tr>
+  </tbody>
+</table>
+
+</div>
+
+<div data-lang="scala" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 25%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+          <td><strong>Map</strong><br>DataStream &rarr; DataStream</td>
+          <td>
+            <p>Takes one element and produces one element. A map function that doubles the values of the input stream:</p>
+    {% highlight scala %}
+dataStream.map { x => x * 2 }
+    {% endhighlight %}
+          </td>
+        </tr>
+
+        <tr>
+          <td><strong>FlatMap</strong><br>DataStream &rarr; DataStream</td>
+          <td>
+            <p>Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words:</p>
+    {% highlight scala %}
+dataStream.flatMap { str => str.split(" ") }
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Filter</strong><br>DataStream &rarr; DataStream</td>
+          <td>
+            <p>Evaluates a boolean function for each element and retains those for which the function returns true.
+            A filter that filters out zero values:
+            </p>
+    {% highlight scala %}
+dataStream.filter { _ != 0 }
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>KeyBy</strong><br>DataStream &rarr; KeyedStream</td>
+          <td>
+            <p>Logically partitions a stream into disjoint partitions, each partition containing elements of the same key.
+            Internally, this is implemented with hash partitioning. See <a href="#specifying-keys">keys</a> on how to specify keys.
+            This transformation returns a KeyedStream.</p>
+    {% highlight scala %}
+dataStream.keyBy("someKey") // Key by field "someKey"
+dataStream.keyBy(0) // Key by the first element of a Tuple
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Reduce</strong><br>KeyedStream &rarr; DataStream</td>
+          <td>
+            <p>A "rolling" reduce on a keyed data stream. Combines the current element with the last reduced value and
+            emits the new value.
+                    <br/>
+            	<br/>
+            A reduce function that creates a stream of partial sums:</p>
+            {% highlight scala %}
+keyedStream.reduce { _ + _ }
+            {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Fold</strong><br>KeyedStream &rarr; DataStream</td>
+          <td>
+          <p>A "rolling" fold on a keyed data stream with an initial value.
+          Combines the current element with the last folded value and
+          emits the new value.
+          <br/>
+          <br/>
+          A fold function that, when applied on the sequence (1,2,3,4,5),
+          emits the sequence "start-1", "start-1-2", "start-1-2-3", ...</p>
+          {% highlight scala %}
+val result: DataStream[String] =
+    keyedStream.fold("start")((str, i) => { str + "-" + i })
+          {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Aggregations</strong><br>KeyedStream &rarr; DataStream</td>
+          <td>
+            <p>Rolling aggregations on a keyed data stream. The difference between min
+            and minBy is that min returns the minimum value, whereas minBy returns
+            the element that has the minimum value in this field (same for max and maxBy).</p>
+    {% highlight scala %}
+keyedStream.sum(0)
+keyedStream.sum("key")
+keyedStream.min(0)
+keyedStream.min("key")
+keyedStream.max(0)
+keyedStream.max("key")
+keyedStream.minBy(0)
+keyedStream.minBy("key")
+keyedStream.maxBy(0)
+keyedStream.maxBy("key")
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window</strong><br>KeyedStream &rarr; WindowedStream</td>
+          <td>
+            <p>Windows can be defined on already partitioned KeyedStreams. Windows group the data in each
+            key according to some characteristic (e.g., the data that arrived within the last 5 seconds).
+            See <a href="windows.html">windows</a> for a description of windows.
+    {% highlight scala %}
+dataStream.keyBy(0).window(TumblingEventTimeWindows.of(Time.seconds(5))) // Last 5 seconds of data
+    {% endhighlight %}
+        </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>WindowAll</strong><br>DataStream &rarr; AllWindowedStream</td>
+          <td>
+              <p>Windows can be defined on regular DataStreams. Windows group all the stream events
+              according to some characteristic (e.g., the data that arrived within the last 5 seconds).
+              See <a href="windows.html">windows</a> for a complete description of windows.</p>
+              <p><strong>WARNING:</strong> This is in many cases a <strong>non-parallel</strong> transformation. All records will be
+               gathered in one task for the windowAll operator.</p>
+  {% highlight scala %}
+dataStream.windowAll(TumblingEventTimeWindows.of(Time.seconds(5))) // Last 5 seconds of data
+  {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window Apply</strong><br>WindowedStream &rarr; DataStream<br>AllWindowedStream &rarr; DataStream</td>
+          <td>
+            <p>Applies a general function to the window as a whole, for example one that manually sums the elements of a window.</p>
+            <p><strong>Note:</strong> If you are using a windowAll transformation, you need to use an AllWindowFunction instead.</p>
+    {% highlight scala %}
+windowedStream.apply { WindowFunction }
+
+// applying an AllWindowFunction on non-keyed window stream
+allWindowedStream.apply { AllWindowFunction }
+
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window Reduce</strong><br>WindowedStream &rarr; DataStream</td>
+          <td>
+            <p>Applies a functional reduce function to the window and returns the reduced value.</p>
+    {% highlight scala %}
+windowedStream.reduce { _ + _ }
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window Fold</strong><br>WindowedStream &rarr; DataStream</td>
+          <td>
+            <p>Applies a functional fold function to the window and returns the folded value.
+               The example function, when applied on the sequence (1,2,3,4,5),
+               folds the sequence into the string "start-1-2-3-4-5":</p>
+          {% highlight scala %}
+val result: DataStream[String] =
+    windowedStream.fold("start")((str, i) => { str + "-" + i })
+          {% endhighlight %}
+          </td>
+	</tr>
+        <tr>
+          <td><strong>Aggregations on windows</strong><br>WindowedStream &rarr; DataStream</td>
+          <td>
+            <p>Aggregates the contents of a window. The difference between min
+	    and minBy is that min returns the minimum value, whereas minBy returns
+	    the element that has the minimum value in this field (same for max and maxBy).</p>
+    {% highlight scala %}
+windowedStream.sum(0)
+windowedStream.sum("key")
+windowedStream.min(0)
+windowedStream.min("key")
+windowedStream.max(0)
+windowedStream.max("key")
+windowedStream.minBy(0)
+windowedStream.minBy("key")
+windowedStream.maxBy(0)
+windowedStream.maxBy("key")
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Union</strong><br>DataStream* &rarr; DataStream</td>
+          <td>
+            <p>Union of two or more data streams, creating a new stream containing all the elements from all the streams. Note: If you union a data stream
+            with itself you will get each element twice in the resulting stream.</p>
+    {% highlight scala %}
+dataStream.union(otherStream1, otherStream2, ...)
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window Join</strong><br>DataStream,DataStream &rarr; DataStream</td>
+          <td>
+            <p>Join two data streams on a given key and a common window.</p>
+    {% highlight scala %}
+dataStream.join(otherStream)
+    .where(0).equalTo(1)
+    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
+    .apply { ... }
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Window CoGroup</strong><br>DataStream,DataStream &rarr; DataStream</td>
+          <td>
+            <p>Cogroups two data streams on a given key and a common window.</p>
+    {% highlight scala %}
+dataStream.coGroup(otherStream)
+    .where(0).equalTo(1)
+    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
+    .apply { ... }
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Connect</strong><br>DataStream,DataStream &rarr; ConnectedStreams</td>
+          <td>
+            <p>"Connects" two data streams retaining their types, allowing for shared state between
+            the two streams.</p>
+    {% highlight scala %}
+someStream : DataStream[Int] = ...
+otherStream : DataStream[String] = ...
+
+val connectedStreams = someStream.connect(otherStream)
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>CoMap, CoFlatMap</strong><br>ConnectedStreams &rarr; DataStream</td>
+          <td>
+            <p>Similar to map and flatMap on a connected data stream.</p>
+    {% highlight scala %}
+connectedStreams.map(
+    (_ : Int) => true,
+    (_ : String) => false
+)
+connectedStreams.flatMap(
+    (num : Int) => List(num.toString),
+    (str : String) => str.split(" ").toList
+)
+    {% endhighlight %}
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Split</strong><br>DataStream &rarr; SplitStream</td>
+          <td>
+            <p>
+                Split the stream into two or more streams according to some criterion.
+                {% highlight scala %}
+val split = someDataStream.split(
+  (num: Int) =>
+    (num % 2) match {
+      case 0 => List("even")
+      case 1 => List("odd")
+    }
+)
+                {% endhighlight %}
+            </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Select</strong><br>SplitStream &rarr; DataStream</td>
+          <td>
+            <p>
+                Select one or more streams from a split stream.
+                {% highlight scala %}
+
+val even = split select "even"
+val odd = split select "odd"
+val all = split.select("even","odd")
+                {% endhighlight %}
+            </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Iterate</strong><br>DataStream &rarr; IterativeStream  &rarr; DataStream</td>
+          <td>
+            <p>
+                Creates a "feedback" loop in the flow, by redirecting the output of one operator
+                to some previous operator. This is especially useful for defining algorithms that
+                continuously update a model. The following code starts with a stream and applies
+		the iteration body continuously. Elements that are greater than 0 are sent back
+		to the feedback channel, and the rest of the elements are forwarded downstream.
+		See <a href="#iterations">iterations</a> for a complete description.
+                {% highlight scala %}
+initialStream.iterate {
+  iteration => {
+    val iterationBody = iteration.map {/*do something*/}
+    (iterationBody.filter(_ > 0), iterationBody.filter(_ <= 0))
+  }
+}
+                {% endhighlight %}
+            </p>
+          </td>
+        </tr>
+        <tr>
+          <td><strong>Extract Timestamps</strong><br>DataStream &rarr; DataStream</td>
+          <td>
+            <p>
+                Extracts timestamps from records in order to work with windows
+                that use event time semantics.
+                See <a href="{{ site.baseurl }}/dev/event_time.html">Event Time</a>.
+                {% highlight scala %}
+stream.assignTimestamps { timestampExtractor }
+                {% endhighlight %}
+            </p>
+          </td>
+        </tr>
+  </tbody>
+</table>
+
+Extraction from tuples, case classes and collections via anonymous pattern matching, like the following:
+{% highlight scala %}
+val data: DataStream[(Int, String, Double)] = // [...]
+data.map {
+  case (id, name, temperature) => // [...]
+}
+{% endhighlight %}
+is not supported by the API out-of-the-box. To use this feature, you should use a <a href="../scala_api_extensions.html">Scala API extension</a>.
+
+
+</div>
+</div>
+
+The following transformations are available on data streams of Tuples:
+
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td><strong>Project</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>Selects a subset of fields from the tuples
+{% highlight java %}
+DataStream<Tuple3<Integer, Double, String>> in = // [...]
+DataStream<Tuple2<String, Integer>> out = in.project(2,0);
+{% endhighlight %}
+        </p>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+</div>
+
+<div data-lang="scala" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td><strong>Project</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>Selects a subset of fields from the tuples
+{% highlight scala %}
+val in : DataStream[(Int,Double,String)] = // [...]
+val out = in.project(2,0)
+{% endhighlight %}
+        </p>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+</div>
+</div>
+
+
+### Physical partitioning
+
+Flink also gives low-level control (if desired) on the exact stream partitioning after a transformation,
+via the following functions.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td><strong>Custom partitioning</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>
+            Uses a user-defined Partitioner to select the target task for each element.
+            {% highlight java %}
+dataStream.partitionCustom(partitioner, "someKey");
+dataStream.partitionCustom(partitioner, 0);
+            {% endhighlight %}
+        </p>
+      </td>
+    </tr>
+   <tr>
+     <td><strong>Random partitioning</strong><br>DataStream &rarr; DataStream</td>
+     <td>
+       <p>
+            Partitions elements randomly according to a uniform distribution.
+            {% highlight java %}
+dataStream.shuffle();
+            {% endhighlight %}
+       </p>
+     </td>
+   </tr>
+   <tr>
+      <td><strong>Rebalancing (Round-robin partitioning)</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>
+            Partitions elements round-robin, creating equal load per partition. Useful for performance
+            optimization in the presence of data skew.
+            {% highlight java %}
+dataStream.rebalance();
+            {% endhighlight %}
+        </p>
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Rescaling</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>
+            Partitions elements, round-robin, to a subset of downstream operations. This is
+            useful if you want to have pipelines where you, for example, fan out from
+            each parallel instance of a source to a subset of several mappers to distribute load
+            but don't want the full rebalance that rebalance() would incur. This would require only
+            local data transfers instead of transferring data over the network, depending on
+            other configuration values such as the number of slots of TaskManagers.
+        </p>
+        <p>
+            The subset of downstream operations to which the upstream operation sends
+            elements depends on the degree of parallelism of both the upstream and downstream operation.
+            For example, if the upstream operation has parallelism 2 and the downstream operation
+            has parallelism 4, then one upstream operation would distribute elements to two
+            downstream operations while the other upstream operation would distribute to the other
+            two downstream operations. If, on the other hand, the downstream operation has parallelism
+            2 while the upstream operation has parallelism 4 then two upstream operations would
+            distribute to one downstream operation while the other two upstream operations would
+            distribute to the other downstream operations.
+        </p>
+        <p>
+            In cases where the different parallelisms are not multiples of each other, one or several
+            downstream operations will have a differing number of inputs from upstream operations.
+        </p>
+        <p>
+            Please see this figure for a visualization of the connection pattern in the above
+            example:
+        </p>
+
+        <div style="text-align: center">
+            <img src="{{ site.baseurl }}/fig/rescale.svg" alt="Connection pattern of the rescale transformation" />
+            </div>
+
+
+        <p>
+                    {% highlight java %}
+dataStream.rescale();
+            {% endhighlight %}
+
+        </p>
+      </td>
+    </tr>
+   <tr>
+      <td><strong>Broadcasting</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>
+            Broadcasts elements to every partition.
+            {% highlight java %}
+dataStream.broadcast();
+            {% endhighlight %}
+        </p>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
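+As a minimal sketch of the custom partitioning entry above (the even/odd routing logic is invented
+for illustration, not prescribed by Flink):
+
+{% highlight java %}
+// A Partitioner that routes even keys to one task and odd keys to another.
+Partitioner<Integer> partitioner = new Partitioner<Integer>() {
+    @Override
+    public int partition(Integer key, int numPartitions) {
+        // make sure the chosen channel exists, whatever the parallelism
+        return (key % 2 == 0 ? 0 : 1) % numPartitions;
+    }
+};
+
+dataStream.partitionCustom(partitioner, 0);   // partition by the first tuple field
+{% endhighlight %}
+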
+</div>
+
+<div data-lang="scala" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td><strong>Custom partitioning</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>
+            Uses a user-defined Partitioner to select the target task for each element.
+            {% highlight scala %}
+dataStream.partitionCustom(partitioner, "someKey")
+dataStream.partitionCustom(partitioner, 0)
+            {% endhighlight %}
+        </p>
+      </td>
+    </tr>
+   <tr>
+     <td><strong>Random partitioning</strong><br>DataStream &rarr; DataStream</td>
+     <td>
+       <p>
+            Partitions elements randomly according to a uniform distribution.
+            {% highlight scala %}
+dataStream.shuffle()
+            {% endhighlight %}
+       </p>
+     </td>
+   </tr>
+   <tr>
+      <td><strong>Rebalancing (Round-robin partitioning)</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>
+            Partitions elements round-robin, creating equal load per partition. Useful for performance
+            optimization in the presence of data skew.
+            {% highlight scala %}
+dataStream.rebalance()
+            {% endhighlight %}
+        </p>
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Rescaling</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>
+            Partitions elements, round-robin, to a subset of downstream operations. This is
+            useful if you want to have pipelines where you, for example, fan out from
+            each parallel instance of a source to a subset of several mappers to distribute load
+            but don't want the full rebalance that rebalance() would incur. This would require only
+            local data transfers instead of transferring data over the network, depending on
+            other configuration values such as the number of slots of TaskManagers.
+        </p>
+        <p>
+            The subset of downstream operations to which the upstream operation sends
+            elements depends on the degree of parallelism of both the upstream and downstream operation.
+            For example, if the upstream operation has parallelism 2 and the downstream operation
+            has parallelism 4, then one upstream operation would distribute elements to two
+            downstream operations while the other upstream operation would distribute to the other
+            two downstream operations. If, on the other hand, the downstream operation has parallelism
+            2 while the upstream operation has parallelism 4 then two upstream operations would
+            distribute to one downstream operation while the other two upstream operations would
+            distribute to the other downstream operations.
+        </p>
+        <p>
+            In cases where the different parallelisms are not multiples of each other, one or several
+            downstream operations will have a differing number of inputs from upstream operations.
+        </p>
+        <p>
+            Please see this figure for a visualization of the connection pattern in the above
+            example:
+        </p>
+
+        <div style="text-align: center">
+            <img src="{{ site.baseurl }}/fig/rescale.svg" alt="Connection pattern of the rescale transformation" />
+            </div>
+
+
+        <p>
+                    {% highlight java %}
+dataStream.rescale()
+            {% endhighlight %}
+
+        </p>
+      </td>
+    </tr>
+   <tr>
+      <td><strong>Broadcasting</strong><br>DataStream &rarr; DataStream</td>
+      <td>
+        <p>
+            Broadcasts elements to every partition.
+            {% highlight scala %}
+dataStream.broadcast()
+            {% endhighlight %}
+        </p>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+</div>
+</div>
+
+### Task chaining and resource groups
+
+Chaining two subsequent transformations means co-locating them within the same thread for better
+performance. Flink by default chains operators if this is possible (e.g., two subsequent map
+transformations). The API gives fine-grained control over chaining if desired:
+
+Use `StreamExecutionEnvironment.disableOperatorChaining()` if you want to disable chaining in
+the whole job. For more fine grained control, the following functions are available. Note that
+these functions can only be used right after a DataStream transformation as they refer to the
+previous transformation. For example, you can use `someStream.map(...).startNewChain()`, but
+you cannot use `someStream.startNewChain()`.
+
+A resource group is a slot in Flink, see
+[slots]({{site.baseurl}}/setup/config.html#configuring-taskmanager-processing-slots). You can
+manually isolate operators in separate slots if desired.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td>Start new chain</td>
+      <td>
+        <p>Begin a new chain, starting with this operator. The two
+	mappers will be chained, and filter will not be chained to
+	the first mapper.
+{% highlight java %}
+someStream.filter(...).map(...).startNewChain().map(...);
+{% endhighlight %}
+        </p>
+      </td>
+    </tr>
+   <tr>
+      <td>Disable chaining</td>
+      <td>
+        <p>Do not chain the map operator
+{% highlight java %}
+someStream.map(...).disableChaining();
+{% endhighlight %}
+        </p>
+      </td>
+    </tr>
+    <tr>
+      <td>Set slot sharing group</td>
+      <td>
+        <p>Set the slot sharing group of an operation. Flink will put operations with the same
+        slot sharing group into the same slot while keeping operations that don't have the
+        slot sharing group in other slots. This can be used to isolate slots. The slot sharing
+        group is inherited from input operations if all input operations are in the same slot
+        sharing group.
+        The name of the default slot sharing group is "default", operations can explicitly
+        be put into this group by calling slotSharingGroup("default").
+{% highlight java %}
+someStream.filter(...).slotSharingGroup("name");
+{% endhighlight %}
+        </p>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+</div>
+
+<div data-lang="scala" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td>Start new chain</td>
+      <td>
+        <p>Begin a new chain, starting with this operator. The two
+	mappers will be chained, and filter will not be chained to
+	the first mapper.
+{% highlight scala %}
+someStream.filter(...).map(...).startNewChain().map(...)
+{% endhighlight %}
+        </p>
+      </td>
+    </tr>
+   <tr>
+      <td>Disable chaining</td>
+      <td>
+        <p>Do not chain the map operator
+{% highlight scala %}
+someStream.map(...).disableChaining()
+{% endhighlight %}
+        </p>
+      </td>
+    </tr>
+  <tr>
+      <td>Set slot sharing group</td>
+      <td>
+        <p>Set the slot sharing group of an operation. Flink will put operations with the same
+        slot sharing group into the same slot while keeping operations that don't have the
+        slot sharing group in other slots. This can be used to isolate slots. The slot sharing
+        group is inherited from input operations if all input operations are in the same slot
+        sharing group.
+        The name of the default slot sharing group is "default", operations can explicitly
+        be put into this group by calling slotSharingGroup("default").
+{% highlight scala %}
+someStream.filter(...).slotSharingGroup("name")
+{% endhighlight %}
+        </p>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+</div>
+</div>
+
+
+{% top %}
+
+Data Sources
+------------
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br />
+
+Sources are where your program reads its input from. You can attach a source to your program by
+using `StreamExecutionEnvironment.addSource(sourceFunction)`. Flink comes with a number of pre-implemented
+source functions, but you can always write your own custom sources by implementing the `SourceFunction`
+for non-parallel sources, or by implementing the `ParallelSourceFunction` interface or extending the
+`RichParallelSourceFunction` for parallel sources.
+
+There are several predefined stream sources accessible from the `StreamExecutionEnvironment`:
+
+File-based:
+
+- `readTextFile(path)` - Reads text files, i.e. files that respect the `TextInputFormat` specification, line-by-line and returns them as Strings.
+
+- `readFile(fileInputFormat, path)` - Reads (once) files as dictated by the specified file input format.
+
+- `readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo)` -  This is the method called internally by the two previous ones. It reads files in the `path` based on the given `fileInputFormat`. Depending on the provided `watchType`, this source may periodically monitor (every `interval` ms) the path for new data (`FileProcessingMode.PROCESS_CONTINUOUSLY`), or process once the data currently in the path and exit (`FileProcessingMode.PROCESS_ONCE`). Using the `pathFilter`, the user can further exclude files from being processed.
+
+    *IMPLEMENTATION:*
+
+    Under the hood, Flink splits the file reading process into two sub-tasks, namely *directory monitoring* and *data reading*. Each of these sub-tasks is implemented by a separate entity. Monitoring is implemented by a single, **non-parallel** (parallelism = 1) task, while reading is performed by multiple tasks running in parallel. The parallelism of the latter is equal to the job parallelism. The role of the single monitoring task is to scan the directory (periodically or only once depending on the `watchType`), find the files to be processed, divide them into *splits*, and assign these splits to the downstream readers. The readers are the ones who will read the actual data. Each split is read by only one reader, while a reader can read multiple splits, one by one.
+
+    *IMPORTANT NOTES:*
+
+    1. If the `watchType` is set to `FileProcessingMode.PROCESS_CONTINUOUSLY`, when a file is modified, its contents are re-processed entirely. This can break the "exactly-once" semantics, as appending data at the end of a file will lead to **all** its contents being re-processed.
+
+    2. If the `watchType` is set to `FileProcessingMode.PROCESS_ONCE`, the source scans the path **once** and exits, without waiting for the readers to finish reading the file contents. Of course the readers will continue reading until all file contents are read. Closing the source leads to no more checkpoints after that point. This may lead to slower recovery after a node failure, as the job will resume reading from the last checkpoint.
+
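+As a rough sketch of the monitored variant described above (the path, input format, and scan
+interval are made-up example values, not prescribed by Flink):
+
+{% highlight java %}
+// Continuously monitor a directory and read new text files line by line.
+TextInputFormat format = new TextInputFormat(new Path("file:///tmp/input"));
+
+DataStream<String> lines = env.readFile(
+    format,
+    "file:///tmp/input",
+    FileProcessingMode.PROCESS_CONTINUOUSLY,
+    100L,                                    // scan the path every 100 ms
+    FilePathFilter.createDefaultFilter());   // e.g. skip hidden files
+{% endhighlight %}
+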
+Socket-based:
+
+- `socketTextStream` - Reads from a socket. Elements can be separated by a delimiter.
+
+Collection-based:
+
+- `fromCollection(Collection)` - Creates a data stream from the Java `java.util.Collection`. All elements
+  in the collection must be of the same type.
+
+- `fromCollection(Iterator, Class)` - Creates a data stream from an iterator. The class specifies the
+  data type of the elements returned by the iterator.
+
+- `fromElements(T ...)` - Creates a data stream from the given sequence of objects. All objects must be
+  of the same type.
+
+- `fromParallelCollection(SplittableIterator, Class)` - Creates a data stream from an iterator, in
+  parallel. The class specifies the data type of the elements returned by the iterator.
+
+- `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in
+  parallel.
+
+Custom:
+
+- `addSource` - Attach a new source function. For example, to read from Apache Kafka you can use
+    `addSource(new FlinkKafkaConsumer08<>(...))`. See [connectors]({{ site.baseurl }}/dev/connectors/index.html) for more details.
+
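+For illustration, a minimal non-parallel source might look like the following sketch (the counter
+logic and the sleep interval are invented for the example):
+
+{% highlight java %}
+// A custom source that emits an ever-increasing counter until cancelled.
+public class CounterSource implements SourceFunction<Long> {
+
+    private volatile boolean running = true;
+
+    @Override
+    public void run(SourceContext<Long> ctx) throws Exception {
+        long counter = 0;
+        while (running) {
+            ctx.collect(counter++);   // emit the next element
+            Thread.sleep(10);         // illustrative throttle
+        }
+    }
+
+    @Override
+    public void cancel() {
+        running = false;
+    }
+}
+
+// Attach it like any other source:
+// DataStream<Long> numbers = env.addSource(new CounterSource());
+{% endhighlight %}
+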
+</div>
+
+<div data-lang="scala" markdown="1">
+
+<br />
+
+Sources are where your program reads its input from. You can attach a source to your program by
+using `StreamExecutionEnvironment.addSource(sourceFunction)`. Flink comes with a number of pre-implemented
+source functions, but you can always write your own custom sources by implementing the `SourceFunction`
+for non-parallel sources, or by implementing the `ParallelSourceFunction` interface or extending the
+`RichParallelSourceFunction` for parallel sources.
+
+There are several predefined stream sources accessible from the `StreamExecutionEnvironment`:
+
+File-based:
+
+- `readTextFile(path)` - Reads text files, i.e. files that respect the `TextInputFormat` specification, line-by-line and returns them as Strings.
+
+- `readFile(fileInputFormat, path)` - Reads (once) files as dictated by the specified file input format.
+
+- `readFile(fileInputFormat, path, watchType, interval, pathFilter)` -  This is the method called internally by the two previous ones. It reads files in the `path` based on the given `fileInputFormat`. Depending on the provided `watchType`, this source may periodically monitor (every `interval` ms) the path for new data (`FileProcessingMode.PROCESS_CONTINUOUSLY`), or process once the data currently in the path and exit (`FileProcessingMode.PROCESS_ONCE`). Using the `pathFilter`, the user can further exclude files from being processed.
+
+    *IMPLEMENTATION:*
+
+    Under the hood, Flink splits the file reading process into two sub-tasks, namely *directory monitoring* and *data reading*. Each of these sub-tasks is implemented by a separate entity. Monitoring is implemented by a single, **non-parallel** (parallelism = 1) task, while reading is performed by multiple tasks running in parallel. The parallelism of the latter is equal to the job parallelism. The role of the single monitoring task is to scan the directory (periodically or only once depending on the `watchType`), find the files to be processed, divide them into *splits*, and assign these splits to the downstream readers. The readers are the ones who will read the actual data. Each split is read by only one reader, while a reader can read multiple splits, one by one.
+
+    *IMPORTANT NOTES:*
+
+    1. If the `watchType` is set to `FileProcessingMode.PROCESS_CONTINUOUSLY`, when a file is modified, its contents are re-processed entirely. This can break the "exactly-once" semantics, as appending data at the end of a file will lead to **all** its contents being re-processed.
+
+    2. If the `watchType` is set to `FileProcessingMode.PROCESS_ONCE`, the source scans the path **once** and exits, without waiting for the readers to finish reading the file contents. Of course the readers will continue reading until all file contents are read. Closing the source leads to no more checkpoints after that point. This may lead to slower recovery after a node failure, as the job will resume reading from the last checkpoint.
+
+Socket-based:
+
+- `socketTextStream` - Reads from a socket. Elements can be separated by a delimiter.
+
+Collection-based:
+
+- `fromCollection(Seq)` - Creates a data stream from the given Scala sequence. All elements
+  in the sequence must be of the same type.
+
+- `fromCollection(Iterator)` - Creates a data stream from an iterator. The data type of the stream
+  is that of the elements returned by the iterator.
+
+- `fromElements(elements: _*)` - Creates a data stream from the given sequence of objects. All objects must be
+  of the same type.
+
+- `fromParallelCollection(SplittableIterator)` - Creates a data stream from an iterator, in
+  parallel. The data type of the stream is that of the elements returned by the iterator.
+
+- `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in
+  parallel.
+
+Custom:
+
+- `addSource` - Attach a new source function. For example, to read from Apache Kafka you can use
+    `addSource(new FlinkKafkaConsumer08<>(...))`. See [connectors]({{ site.baseurl }}/dev/connectors/index.html) for more details.
+
+</div>
+</div>
+
+{% top %}
+
+Data Sinks
+----------
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br />
+
+Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them.
+Flink comes with a variety of built-in output formats that are encapsulated behind operations on the
+DataStreams:
+
+- `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are
+  obtained by calling the *toString()* method of each element.
+
+- `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field
+  delimiters are configurable. The value for each field comes from the *toString()* method of the objects.
+
+- `print()` / `printToErr()`  - Prints the *toString()* value
+of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is
+prepended to the output. This can help to distinguish between different calls to *print*. If the parallelism is
+greater than 1, the output will also be prepended with the identifier of the task which produced the output.
+
+- `writeUsingOutputFormat()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports
+  custom object-to-bytes conversion.
+
+- `writeToSocket` - Writes elements to a socket according to a `SerializationSchema`
+
+- `addSink` - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as
+    Apache Kafka) that are implemented as sink functions.
+
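+As a small sketch combining some of the built-in sinks above (the file path is illustrative):
+
+{% highlight java %}
+DataStream<Tuple2<String, Integer>> counts = env.fromElements(
+        new Tuple2<>("a", 1), new Tuple2<>("b", 2));
+
+counts.writeAsCsv("file:///tmp/word_counts.csv");   // one comma-separated row per tuple
+counts.print();                                     // toString() of each element to stdout
+{% endhighlight %}
+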
+</div>
+<div data-lang="scala" markdown="1">
+
+<br />
+
+Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them.
+Flink comes with a variety of built-in output formats that are encapsulated behind operations on the
+DataStreams:
+
+- `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are
+  obtained by calling the *toString()* method of each element.
+
+- `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field
+  delimiters are configurable. The value for each field comes from the *toString()* method of the objects.
+
+- `print()` / `printToErr()`  - Prints the *toString()* value
+of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is
+prepended to the output. This can help to distinguish between different calls to *print*. If the parallelism is
+greater than 1, the output will also be prepended with the identifier of the task which produced the output.
+
+- `writeUsingOutputFormat()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports
+  custom object-to-bytes conversion.
+
+- `writeToSocket` - Writes elements to a socket according to a `SerializationSchema`
+
+- `addSink` - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as
+    Apache Kafka) that are implemented as sink functions.
+
+</div>
+</div>
+
+Note that the `write*()` methods on `DataStream` are mainly intended for debugging purposes.
+They do not participate in Flink's checkpointing, which means these functions usually have
+at-least-once semantics. The flushing of data to the target system depends on the implementation of the
+OutputFormat. This means that not all elements sent to the OutputFormat immediately show up
+in the target system. Also, in failure cases, those records might be lost.
+
+For reliable, exactly-once delivery of a stream into a file system, use the `flink-connector-filesystem`.
+Also, custom implementations through the `.addSink(...)` method can participate in Flink's checkpointing
+for exactly-once semantics.
+
+{% top %}
+
+Iterations
+----------
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br />
+
+Iterative streaming programs implement a step function and embed it into an `IterativeStream`. As a DataStream
+program may never finish, there is no maximum number of iterations. Instead, you need to specify which part
+of the stream is fed back to the iteration and which part is forwarded downstream using a `split` transformation
+or a `filter`. Here, we show an example using filters. First, we define an `IterativeStream`
+
+{% highlight java %}
+IterativeStream<Integer> iteration = input.iterate();
+{% endhighlight %}
+
+Then, we specify the logic that will be executed inside the loop using a series of transformations (here
+a simple `map` transformation)
+
+{% highlight java %}
+DataStream<Integer> iterationBody = iteration.map(/* this is executed many times */);
+{% endhighlight %}
+
+To close an iteration and define the iteration tail, call the `closeWith(feedbackStream)` method of the `IterativeStream`.
+The DataStream given to the `closeWith` function will be fed back to the iteration head.
+A common pattern is to use a filter to separate the part of the stream that is fed back,
+and the part of the stream which is propagated forward. These filters can, e.g., define
+the "termination" logic, where an element is allowed to propagate downstream rather
+than being fed back.
+
+{% highlight java %}
+iteration.closeWith(iterationBody.filter(/* one part of the stream */));
+DataStream<Integer> output = iterationBody.filter(/* some other part of the stream */);
+{% endhighlight %}
+
+By default the partitioning of the feedback stream will be automatically set to be the same as the input of the
+iteration head. To override this the user can set an optional boolean flag in the `closeWith` method.
+
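+A minimal sketch of overriding the feedback partitioning, assuming the boolean overload of
+`closeWith` mentioned above:
+
+{% highlight java %}
+// true = keep the feedback stream's own partitioning instead of
+// adopting the partitioning of the iteration head's input
+iteration.closeWith(feedbackStream, true);
+{% endhighlight %}
+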
+For example, here is a program that continuously subtracts 1 from a series of integers until they reach zero:
+
+{% highlight java %}
+DataStream<Long> someIntegers = env.generateSequence(0, 1000);
+
+IterativeStream<Long> iteration = someIntegers.iterate();
+
+DataStream<Long> minusOne = iteration.map(new MapFunction<Long, Long>() {
+  @Override
+  public Long map(Long value) throws Exception {
+    return value - 1;
+  }
+});
+
+DataStream<Long> stillGreaterThanZero = minusOne.filter(new FilterFunction<Long>() {
+  @Override
+  public boolean filter(Long value) throws Exception {
+    return (value > 0);
+  }
+});
+
+iteration.closeWith(stillGreaterThanZero);
+
+DataStream<Long> lessThanZero = minusOne.filter(new FilterFunction<Long>() {
+  @Override
+  public boolean filter(Long value) throws Exception {
+    return (value <= 0);
+  }
+});
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+
+<br />
+
+Iterative streaming programs implement a step function and embed it into an `IterativeStream`. As a DataStream
+program may never finish, there is no maximum number of iterations. Instead, you need to specify which part
+of the stream is fed back to the iteration and which part is forwarded downstream using a `split` transformation
+or a `filter`. Here, we show an example iteration where the body (the part of the computation that is repeated)
+is a simple map transformation, and the elements that are fed back are distinguished by the elements that
+are forwarded downstream using filters.
+
+{% highlight scala %}
+val iteratedStream = someDataStream.iterate(
+  iteration => {
+    val iterationBody = iteration.map(/* this is executed many times */)
+    (iterationBody.filter(/* one part of the stream */), iterationBody.filter(/* some other part of the stream */))
+})
+{% endhighlight %}
+
+
+By default the partitioning of the feedback stream will be automatically set to be the same as the input of the
+iteration head. To override this the user can set an optional boolean flag in the `closeWith` method.
+
+For example, here is a program that continuously subtracts 1 from a series of integers until they reach zero:
+
+{% highlight scala %}
+val someIntegers: DataStream[Long] = env.generateSequence(0, 1000)
+
+val iteratedStream = someIntegers.iterate(
+  iteration => {
+    val minusOne = iteration.map( v => v - 1)
+    val stillGreaterThanZero = minusOne.filter (_ > 0)
+    val lessThanZero = minusOne.filter(_ <= 0)
+    (stillGreaterThanZero, lessThanZero)
+  }
+)
+{% endhighlight %}
+
+</div>
+</div>
+
+{% top %}
+
+Execution Parameters
+--------------------
+
+The `StreamExecutionEnvironment` contains the `ExecutionConfig`, which allows setting job-specific configuration values for the runtime.
+
+Please refer to [execution configuration]({{ site.baseurl }}/dev/api_concepts.html#execution-configuration)
+for an explanation of most parameters. These parameters pertain specifically to the DataStream API:
+
+- `enableTimestamps()` / **`disableTimestamps()`**: Attach a timestamp to each event emitted from a source.
+    `areTimestampsEnabled()` returns the current value.
+
+- `setAutoWatermarkInterval(long milliseconds)`: Set the interval for automatic watermark emission. You can
+    get the current value with `long getAutoWatermarkInterval()`.
+
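+For example, a short sketch (the 50 ms interval is an illustrative value):
+
+{% highlight java %}
+env.getConfig().setAutoWatermarkInterval(50);                 // emit watermarks every 50 ms
+long interval = env.getConfig().getAutoWatermarkInterval();   // read the current value back
+{% endhighlight %}
+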
+{% top %}
+
+### Fault Tolerance
+
+The [Fault Tolerance Documentation]({{ site.baseurl }}/setup/fault_tolerance.html) describes the options and parameters to enable and configure Flink's checkpointing mechanism.
+
+### Controlling Latency
+
+By default, elements are not transferred on the network one-by-one (which would cause unnecessary network traffic)
+but are buffered. The size of the buffers (which are actually transferred between machines) can be set in the Flink config files.
+While this method is good for optimizing throughput, it can cause latency issues when the incoming stream is not fast enough.
+To control throughput and latency, you can use `env.setBufferTimeout(timeoutMillis)` on the execution environment
+(or on individual operators) to set a maximum wait time for the buffers to fill up. After this time, the
+buffers are sent automatically even if they are not full. The default value for this timeout is 100 ms.
+
+Usage:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
+env.setBufferTimeout(timeoutMillis);
+
+env.generateSequence(1,10).map(new MyMapper()).setBufferTimeout(timeoutMillis);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.createLocalEnvironment
+env.setBufferTimeout(timeoutMillis)
+
+env.generateSequence(1,10).map(myMap).setBufferTimeout(timeoutMillis)
+{% endhighlight %}
+</div>
+</div>
+
+To maximize throughput, set `setBufferTimeout(-1)` which will remove the timeout and buffers will only be
+flushed when they are full. To minimize latency, set the timeout to a value close to 0 (for example 5 or 10 ms).
+A buffer timeout of 0 should be avoided, because it can cause severe performance degradation.
+
+{% top %}
+
+Debugging
+---------
+
+Before running a streaming program in a distributed cluster, it is a good
+idea to make sure that the implemented algorithm works as desired. Hence, implementing data analysis
+programs is usually an incremental process of checking results, debugging, and improving.
+
+Flink provides features to significantly ease the development process of data analysis
+programs by supporting local debugging from within an IDE, injection of test data, and collection of
+result data. This section gives some hints on how to ease the development of Flink programs.
+
+### Local Execution Environment
+
+A `LocalStreamEnvironment` starts a Flink system within the same JVM process it was created in. If you
+start the LocalEnvironment from an IDE, you can set breakpoints in your code and easily debug your
+program.
+
+A LocalEnvironment is created and used as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
+
+DataStream<String> lines = env.addSource(/* some source */);
+// build your program
+
+env.execute();
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+
+{% highlight scala %}
+val env = StreamExecutionEnvironment.createLocalEnvironment()
+
+val lines = env.addSource(/* some source */)
+// build your program
+
+env.execute()
+{% endhighlight %}
+</div>
+</div>
+
+### Collection Data Sources
+
+Flink provides special data sources which are backed
+by Java collections to ease testing. Once a program has been tested, the sources and sinks can be
+easily replaced by sources and sinks that read from / write to external systems.
+
+Collection data sources can be used as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
+
+// Create a DataStream from a list of elements
+DataStream<Integer> myInts = env.fromElements(1, 2, 3, 4, 5);
+
+// Create a DataStream from any Java collection
+List<Tuple2<String, Integer>> data = ...
+DataStream<Tuple2<String, Integer>> myTuples = env.fromCollection(data);
+
+// Create a DataStream from an Iterator
+Iterator<Long> longIt = ...
+DataStream<Long> myLongs = env.fromCollection(longIt, Long.class);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.createLocalEnvironment()
+
+// Create a DataStream from a list of elements
+val myInts = env.fromElements(1, 2, 3, 4, 5)
+
+// Create a DataStream from any Collection
+val data: Seq[(String, Int)] = ...
+val myTuples = env.fromCollection(data)
+
+// Create a DataStream from an Iterator
+val longIt: Iterator[Long] = ...
+val myLongs = env.fromCollection(longIt)
+{% endhighlight %}
+</div>
+</div>
+
+**Note:** Currently, the collection data source requires that data types and iterators implement
+`Serializable`. Furthermore, collection data sources cannot be executed in parallel
+(parallelism = 1).
+
+### Iterator Data Sink
+
+Flink also provides a sink to collect DataStream results for testing and debugging purposes. It can be used as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+import org.apache.flink.contrib.streaming.DataStreamUtils;
+
+DataStream<Tuple2<String, Integer>> myResult = ...;
+Iterator<Tuple2<String, Integer>> myOutput = DataStreamUtils.collect(myResult);
+{% endhighlight %}
+
+</div>
+<div data-lang="scala" markdown="1">
+
+{% highlight scala %}
+import org.apache.flink.contrib.streaming.DataStreamUtils
+import scala.collection.JavaConverters.asScalaIteratorConverter
+
+val myResult: DataStream[(String, Int)] = ...
+val myOutput: Iterator[(String, Int)] = DataStreamUtils.collect(myResult.getJavaStream).asScala
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/event_time.md
----------------------------------------------------------------------
diff --git a/docs/dev/event_time.md b/docs/dev/event_time.md
new file mode 100644
index 0000000..7375a0f
--- /dev/null
+++ b/docs/dev/event_time.md
@@ -0,0 +1,206 @@
+---
+title: "Event Time"
+nav-id: event_time
+nav-show_overview: true
+nav-parent_id: dev
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* toc
+{:toc}
+
+# Event Time / Processing Time / Ingestion Time
+
+Flink supports different notions of *time* in streaming programs.
+
+- **Processing time:** Processing time refers to the system time of the machine that is executing the
+    respective operation.
+
+    When a streaming program runs on processing time, all time-based operations (like time windows) will
+    use the system clock of the machines that run the respective operator. For example, an hourly
+    processing time window will include all records that arrived at a specific operator between the
+    times when the system clock indicated the full hour.
+
+    Processing time is the simplest notion of time and requires no coordination between streams and machines.
+    It provides the best performance and the lowest latency. However, in distributed and asynchronous
+    environments processing time does not provide determinism, because it is susceptible to the speed at which
+    records arrive in the system (for example from the message queue), and to the speed at which the
+    records flow between operators inside the system.
+
+- **Event time:** Event time is the time that each individual event occurred on its producing device.
+    This time is typically embedded within the records before they enter Flink, and that *event timestamp*
+    can be extracted from the record. An hourly event time window will contain all records that carry an
+    event timestamp that falls into that hour, regardless of when the records arrive, and in what order
+    they arrive.
+
+    Event time gives correct results even on out-of-order events, late events, or on replays
+    of data from backups or persistent logs. In event time, the progress of time depends on the data,
+    not on any wall clocks. Event time programs must specify how to generate *Event Time Watermarks*,
+    which is the mechanism that signals time progress in event time. The mechanism is
+    described below.
+
+    Event time processing often incurs a certain latency, due to its nature of waiting a certain time for
+    late events and out-of-order events. Because of that, event time programs are often combined with
+    *processing time* operations.
+
+- **Ingestion time:** Ingestion time is the time that events enter Flink. At the source operator, each
+    record gets the source's current time as a timestamp, and time-based operations (like time windows)
+    refer to that timestamp.
+
+    *Ingestion Time* sits conceptually in between *Event Time* and *Processing Time*. Compared to
+    *Processing Time*, it is slightly more expensive, but gives more predictable results: Because
+    *Ingestion Time* uses stable timestamps (assigned once at the source), different window operations
+    over the records will refer to the same timestamp, whereas in *Processing Time* each window operator
+    may assign the record to a different window (based on the local system clock and any transport delay).
+
+    Compared to *Event Time*, *Ingestion Time* programs cannot handle any out-of-order events or late data,
+    but the programs don't have to specify how to generate *Watermarks*.
+
+    Internally, *Ingestion Time* is treated much like event time, with automatic timestamp assignment and
+    automatic Watermark generation.
+
+<img src="{{ site.baseurl }}/fig/times_clocks.svg" class="center" width="80%" />
+
+
+### Setting a Time Characteristic
+
+The first part of a Flink DataStream program is usually to set the base *time characteristic*. That setting
+defines how data stream sources behave (for example whether to assign timestamps), and what notion of
+time the window operations like `KeyedStream.timeWindow(Time.seconds(30))` refer to.
+
+The following example shows a Flink program that aggregates events in hourly time windows. The behavior of the
+windows adapts with the time characteristic.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
+
+// alternatively:
+// env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
+// env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
+
+DataStream<MyEvent> stream = env.addSource(new FlinkKafkaConsumer09<MyEvent>(topic, schema, props));
+
+stream
+    .keyBy( (event) -> event.getUser() )
+    .timeWindow(Time.hours(1))
+    .reduce( (a, b) -> a.add(b) )
+    .addSink(...);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+
+env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)
+
+// alternatively:
+// env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime)
+// env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
+
+val stream: DataStream[MyEvent] = env.addSource(new FlinkKafkaConsumer09[MyEvent](topic, schema, props))
+
+stream
+    .keyBy( _.getUser )
+    .timeWindow(Time.hours(1))
+    .reduce( (a, b) => a.add(b) )
+    .addSink(...)
+{% endhighlight %}
+</div>
+</div>
+
+
+Note that in order to run this example in *Event Time*, the program needs to either use an event time
+source or inject a *Timestamp Assigner & Watermark Generator*. Those functions describe how to access
+the event timestamps, and how much out-of-orderness the event stream exhibits.
+
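+As a minimal sketch (reusing `env` and `stream` from the example above, and assuming a
+bounded out-of-orderness of 10 seconds and a `MyEvent` type with a `getCreationTime`
+accessor), this could look as follows in Scala:
+
+{% highlight scala %}
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
+
+// Assign timestamps and generate watermarks that trail the largest
+// timestamp seen so far by 10 seconds.
+val withTimestampsAndWatermarks: DataStream[MyEvent] = stream
+  .assignTimestampsAndWatermarks(
+    new BoundedOutOfOrdernessTimestampExtractor[MyEvent](Time.seconds(10)) {
+      override def extractTimestamp(element: MyEvent): Long = element.getCreationTime
+    })
+{% endhighlight %}
+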
+The section below describes the general mechanism behind *Timestamps* and *Watermarks*. For a guide on how
+to use timestamp assignment and watermark generation in the Flink DataStream API, please refer to
+[Generating Timestamps / Watermarks]({{ site.baseurl }}/dev/event_timestamps_watermarks.html)
+
+
+# Event Time and Watermarks
+
+*Note: Flink implements many techniques from the Dataflow Model. For a good introduction to Event Time and Watermarks, have a look at these articles:*
+
+  - [Streaming 101](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101) by Tyler Akidau
+  - The [Dataflow Model paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43864.pdf)
+
+
+A stream processor that supports *event time* needs a way to measure the progress of event time.
+For example, a window operator that builds hourly windows needs to be notified when event time has reached the
+next full hour, so that the operator can close the window in progress.
+
+*Event Time* can progress independently of *Processing Time* (measured by wall clocks).
+For example, in one program, the current *event time* of an operator can trail slightly behind the processing time
+(accounting for a delay in receiving the latest elements) while both proceed at the same speed. In another streaming
+program, which fast-forwards through data already buffered in a Kafka topic (or another message queue), event time
+can progress by weeks in seconds.
+
+------
+
+The mechanism in Flink to measure progress in event time is **Watermarks**.
+Watermarks flow as part of the data stream and carry a timestamp *t*. A *Watermark(t)* declares that event time has reached time
+*t* in that stream, meaning that all events with a timestamp *t' < t* have occurred.
+
+The figure below shows a stream of events with (logical) timestamps, and watermarks flowing inline. The events are in order
+(with respect to their timestamp), meaning that watermarks are simply periodic markers in the stream with an in-order timestamp.
+
+<img src="{{ site.baseurl }}/fig/stream_watermark_in_order.svg" alt="A data stream with events (in order) and watermarks" class="center" width="65%" />
+
+Watermarks are crucial for *out-of-order* streams, as shown in the figure below, where events do not occur ordered by their timestamps.
+Watermarks establish points in the stream where all events up to a certain timestamp have occurred. Once these watermarks reach an
+operator, the operator can advance its internal *event time clock* to the value of the watermark.
+
+<img src="{{ site.baseurl }}/fig/stream_watermark_out_of_order.svg" alt="A data stream with events (out of order) and watermarks" class="center" width="65%" />
+
+
+## Watermarks in Parallel Streams
+
+Watermarks are generated at source functions, or directly after source functions. Each parallel subtask of a source function usually
+generates its watermarks independently. These watermarks define the event time at that particular parallel source.
+
+As the watermarks flow through the streaming program, they advance the event time at the operators where they arrive. Whenever an
+operator advances its event time, it generates a new watermark downstream for its successor operators.
+
+Operators that consume multiple input streams (e.g., after a *keyBy(...)* or *partition(...)* function, or a union) track the event time
+on each of their input streams. The operator's current event time is the minimum of its input streams' event times. As the input streams
+update their event time, so does the operator.
+
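+As a conceptual sketch (this is not Flink's internal code, just the rule restated):
+
+{% highlight scala %}
+// An operator's current event time is the minimum of the most recent
+// watermarks received on each of its input streams.
+def currentEventTime(latestInputWatermarks: Seq[Long]): Long = latestInputWatermarks.min
+{% endhighlight %}
+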
+The figure below shows an example of events and watermarks flowing through parallel streams, and operators tracking event time.
+
+<img src="{{ site.baseurl }}/fig/parallel_streams_watermarks.svg" alt="Parallel data streams and operators with events and watermarks" class="center" width="80%" />
+
+
+## Late Elements
+
+It is possible that certain elements violate the watermark condition, meaning that even after the *Watermark(t)* has occurred,
+more elements with timestamp *t' < t* will occur. In fact, in many real-world setups, certain elements can be arbitrarily
+delayed, making it impossible to define a time by which all elements of a certain event timestamp will have occurred.
+Furthermore, even if the lateness can be bounded, delaying the watermarks too much is often not desirable, because it delays
+the evaluation of the event time windows too much.
+
+For this reason, some streaming programs will explicitly expect a number of *late* elements. Late elements are elements that
+arrive after the system's event time clock (as signaled by the watermarks) has already passed the time of the late element's
+timestamp.
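+
+One way to tolerate a bounded amount of lateness is the *allowed lateness* setting on windows,
+sketched below for the hypothetical `MyEvent` stream from the earlier example:
+
+{% highlight scala %}
+stream
+  .keyBy( _.getUser )
+  .timeWindow(Time.hours(1))
+  // keep the window state for 5 more minutes, so late elements can still update the result
+  .allowedLateness(Time.minutes(5))
+  .reduce( (a, b) => a.add(b) )
+  .addSink(...)
+{% endhighlight %}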

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/event_timestamp_extractors.md
----------------------------------------------------------------------
diff --git a/docs/dev/event_timestamp_extractors.md b/docs/dev/event_timestamp_extractors.md
new file mode 100644
index 0000000..a9ec6e5
--- /dev/null
+++ b/docs/dev/event_timestamp_extractors.md
@@ -0,0 +1,106 @@
+---
+title: "Pre-defined Timestamp Extractors / Watermark Emitters"
+nav-parent_id: event_time
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* toc
+{:toc}
+
+As described in [timestamps and watermark handling]({{ site.baseurl }}/dev/event_timestamps_watermarks.html),
+Flink provides abstractions that allow the programmer to assign their own timestamps and emit their own watermarks. More specifically,
+one can do so by implementing either the `AssignerWithPeriodicWatermarks` or the `AssignerWithPunctuatedWatermarks` interface, depending
+on the use case. In a nutshell, the first will emit watermarks periodically, while the second does so based on some property of
+the incoming records, e.g. whenever a special element is encountered in the stream. A sketch of the latter follows below.
+
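+As an illustration, a punctuated assigner might look like the following sketch, where the
+`MyEvent` type and its `getCreationTime`/`isWatermarkMarker` methods are assumptions made
+for the example:
+
+{% highlight scala %}
+class PunctuatedAssigner extends AssignerWithPunctuatedWatermarks[MyEvent] {
+
+  override def extractTimestamp(element: MyEvent, previousElementTimestamp: Long): Long =
+    element.getCreationTime
+
+  // Emit a watermark only for elements flagged as markers; returning null
+  // means that no watermark is generated for this element.
+  override def checkAndGetNextWatermark(lastElement: MyEvent, extractedTimestamp: Long): Watermark =
+    if (lastElement.isWatermarkMarker) new Watermark(extractedTimestamp) else null
+}
+{% endhighlight %}
+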
+In order to further ease the programming effort for such tasks, Flink comes with some pre-implemented timestamp assigners.
+This section provides a list of them. Apart from their out-of-the-box functionality, their implementation can serve as an example
+for custom assigner implementations.
+
+#### **Assigner with Ascending Timestamps**
+
+The simplest special case for *periodic* watermark generation is the case where timestamps seen by a given source task
+occur in ascending order. In that case, the current timestamp can always act as a watermark, because no earlier timestamps will
+arrive.
+
+Note that it is only necessary that timestamps are ascending *per parallel data source task*. For example, if
+in a specific setup one Kafka partition is read by one parallel data source instance, then it is only necessary that
+timestamps are ascending within each Kafka partition. Flink's Watermark merging mechanism will generate correct
+watermarks whenever parallel streams are shuffled, unioned, connected, or merged.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<MyEvent> stream = ...
+
+DataStream<MyEvent> withTimestampsAndWatermarks =
+    stream.assignTimestampsAndWatermarks(new AscendingTimestampExtractor<MyEvent>() {
+
+        @Override
+        public long extractAscendingTimestamp(MyEvent element) {
+            return element.getCreationTime();
+        }
+});
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val stream: DataStream[MyEvent] = ...
+
+val withTimestampsAndWatermarks = stream.assignAscendingTimestamps( _.getCreationTime )
+{% endhighlight %}
+</div>
+</div>
+
+#### **Assigner which allows a fixed amount of record lateness**
+
+Another example of periodic watermark generation is when the watermark lags behind the maximum (event-time) timestamp
+seen in the stream by a fixed amount of time. This case covers scenarios where the maximum lateness that can be encountered in a
+stream is known in advance, e.g. when creating a custom source containing elements with timestamps spread within a fixed period of
+time for testing. For these cases, Flink provides the `BoundedOutOfOrdernessTimestampExtractor` which takes as an argument
+the `maxOutOfOrderness`, i.e. the maximum amount of time an element is allowed to be late before being ignored when computing the
+final result for the given window. Lateness corresponds to the result of `t_w - t`, where `t` is the (event-time) timestamp of an
+element, and `t_w` that of the previous watermark. If `lateness > 0`, the element arrived after the watermark passed its timestamp;
+it is considered late and is ignored when computing the result of the job for its corresponding window.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<MyEvent> stream = ...
+
+DataStream<MyEvent> withTimestampsAndWatermarks =
+    stream.assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
+
+        @Override
+        public long extractTimestamp(MyEvent element) {
+            return element.getCreationTime();
+        }
+});
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val stream: DataStream[MyEvent] = ...
+
+val withTimestampsAndWatermarks = stream.assignTimestampsAndWatermarks(
+    new BoundedOutOfOrdernessTimestampExtractor[MyEvent](Time.seconds(10)) {
+      override def extractTimestamp(element: MyEvent): Long = element.getCreationTime
+    })
+{% endhighlight %}
+</div>
+</div>


[13/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/index.md b/docs/dev/libs/ml/index.md
new file mode 100644
index 0000000..d01e18e
--- /dev/null
+++ b/docs/dev/libs/ml/index.md
@@ -0,0 +1,144 @@
+---
+title: "FlinkML - Machine Learning for Flink"
+nav-id: ml
+nav-show_overview: true
+nav-title: Machine Learning
+nav-parent_id: libs
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+FlinkML is the Machine Learning (ML) library for Flink. It is a new effort in the Flink community,
+with a growing list of algorithms and contributors. With FlinkML we aim to provide
+scalable ML algorithms, an intuitive API, and tools that help minimize glue code in end-to-end ML
+systems. You can see more details about our goals and where the library is headed in our [vision
+and roadmap](https://cwiki.apache.org/confluence/display/FLINK/FlinkML%3A+Vision+and+Roadmap).
+
+* This will be replaced by the TOC
+{:toc}
+
+## Supported Algorithms
+
+FlinkML currently supports the following algorithms:
+
+### Supervised Learning
+
+* [SVM using Communication efficient distributed dual coordinate ascent (CoCoA)](svm.html)
+* [Multiple linear regression](multiple_linear_regression.html)
+* [Optimization Framework](optimization.html)
+
+### Unsupervised Learning
+
+* [k-Nearest neighbors join](knn.html)
+
+### Data Preprocessing
+
+* [Polynomial Features](polynomial_features.html)
+* [Standard Scaler](standard_scaler.html)
+* [MinMax Scaler](min_max_scaler.html)
+
+### Recommendation
+
+* [Alternating Least Squares (ALS)](als.html)
+
+### Utilities
+
+* [Distance Metrics](distance_metrics.html)
+* [Cross Validation](cross_validation.html)
+
+## Getting Started
+
+You can check out our [quickstart guide](quickstart.html) for a comprehensive getting started
+example.
+
+If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/api_concepts.html#linking-with-flink).
+Next, you have to add the FlinkML dependency to the `pom.xml` of your project.
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-ml{{ site.scala_version_suffix }}</artifactId>
+  <version>{{ site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+Note that FlinkML is currently not part of the binary distribution.
+See [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution) for how to link with it for cluster execution.
+
+Now you can start solving your analysis task.
+The following code snippet shows how easy it is to train a multiple linear regression model.
+
+{% highlight scala %}
+// LabeledVector is a feature vector with a label (class or real value)
+val trainingData: DataSet[LabeledVector] = ...
+val testingData: DataSet[Vector] = ...
+
+// Alternatively, a Splitter can be used to split a single DataSet into training and testing data (replacing the two definitions above):
+val dataSet: DataSet[LabeledVector] = ...
+val trainTestData: DataSet[TrainTestDataSet] = Splitter.trainTestSplit(dataSet)
+val trainingData: DataSet[LabeledVector] = trainTestData.training
+val testingData: DataSet[Vector] = trainTestData.testing.map(lv => lv.vector)
+
+val mlr = MultipleLinearRegression()
+  .setStepsize(1.0)
+  .setIterations(100)
+  .setConvergenceThreshold(0.001)
+
+mlr.fit(trainingData)
+
+// The fitted model can now be used to make predictions
+val predictions: DataSet[LabeledVector] = mlr.predict(testingData)
+{% endhighlight %}
+
+## Pipelines
+
+A key concept of FlinkML is its [scikit-learn](http://scikit-learn.org) inspired pipelining mechanism.
+It allows you to quickly build complex data analysis pipelines as they appear in every data scientist's daily work.
+An in-depth description of FlinkML's pipelines and their internal workings can be found [here](pipelines.html).
+
+The following example code shows how easy it is to set up an analysis pipeline with FlinkML.
+
+{% highlight scala %}
+val trainingData: DataSet[LabeledVector] = ...
+val testingData: DataSet[Vector] = ...
+
+val scaler = StandardScaler()
+val polyFeatures = PolynomialFeatures().setDegree(3)
+val mlr = MultipleLinearRegression()
+
+// Construct pipeline of standard scaler, polynomial features and multiple linear regression
+val pipeline = scaler.chainTransformer(polyFeatures).chainPredictor(mlr)
+
+// Train pipeline
+pipeline.fit(trainingData)
+
+// Calculate predictions
+val predictions: DataSet[LabeledVector] = pipeline.predict(testingData)
+{% endhighlight %}
+
+One can chain a `Transformer` to another `Transformer` or a set of chained `Transformers` by calling the method `chainTransformer`.
+If one wants to chain a `Predictor` to a `Transformer` or a set of chained `Transformers`, one has to call the method `chainPredictor`.
+
+
+## How to contribute
+
+The Flink community welcomes all contributors who want to get involved in the development of Flink and its libraries.
+In order to get quickly started with contributing to FlinkML, please read our official
+[contribution guide]({{site.baseurl}}/dev/libs/ml/contribution_guide.html).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/knn.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/knn.md b/docs/dev/libs/ml/knn.md
new file mode 100644
index 0000000..0d3ca9a
--- /dev/null
+++ b/docs/dev/libs/ml/knn.md
@@ -0,0 +1,144 @@
+---
+mathjax: include
+title: k-Nearest Neighbors Join
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+Implements an exact k-nearest neighbors join algorithm.  Given a training set $A$ and a testing set $B$, the algorithm returns
+
+$$
+KNNJ(A, B, k) = \{ \left( b, KNN(b, A, k) \right) \text{ where } b \in B \text{ and } KNN(b, A, k) \text{ are the k-nearest points to }b\text{ in }A \}
+$$
+
+The brute-force approach is to compute the distance between every training and testing point. To ease this computation, a quadtree can be used. The quadtree scales well in the number of training points, though poorly in the spatial dimension. The algorithm will automatically choose whether or not to use the quadtree, though the user can override that decision by setting a parameter that forces the quadtree to be used or not used.
+
+## Operations
+
+`KNN` is a `Predictor`.
+As such, it supports the `fit` and `predict` operation.
+
+### Fit
+
+KNN is trained on a given set of `Vector`:
+
+* `fit[T <: Vector]: DataSet[T] => Unit`
+
+### Predict
+
+KNN predicts, for all subtypes of FlinkML's `Vector`, the corresponding K-nearest training points:
+
+* `predict[T <: Vector]: DataSet[T] => DataSet[(T, Array[Vector])]`, where the `(T, Array[Vector])` tuple
+  corresponds to (test point, K-nearest training points)
+
+## Parameters
+
+The KNN implementation can be controlled by the following parameters:
+
+   <table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 20%">Parameters</th>
+        <th class="text-center">Description</th>
+      </tr>
+    </thead>
+
+    <tbody>
+      <tr>
+        <td><strong>K</strong></td>
+        <td>
+          <p>
+            Defines the number of nearest-neighbors to search for. That is, for each test point, the algorithm finds the K-nearest neighbors in the training set
+            (Default value: <strong>5</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>DistanceMetric</strong></td>
+        <td>
+          <p>
+            Sets the distance metric used to calculate the distance between two points. If no metric is specified, then <code>org.apache.flink.ml.metrics.distances.EuclideanDistanceMetric</code> is used.
+            (Default value: <strong>EuclideanDistanceMetric</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Blocks</strong></td>
+        <td>
+          <p>
+            Sets the number of blocks into which the input data will be split. This number should be set
+            at least to the degree of parallelism. If no value is specified, then the parallelism of the
+            input <code>DataSet</code> is used as the number of blocks.
+            (Default value: <strong>None</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>UseQuadTree</strong></td>
+        <td>
+          <p>
+            A boolean variable that determines whether or not to use a quadtree to partition the training set, which can potentially simplify the KNN search. If no value is specified, the code will automatically decide whether or not to use a quadtree. Use of a quadtree scales well with the number of training and testing points, though poorly with the dimension.
+            (Default value: <strong>None</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>SizeHint</strong></td>
+        <td>
+          <p>Specifies whether the training set or the test set is small, in order to optimize the cross product operation needed for the KNN search. If the training set is small, this should be <code>CrossHint.FIRST_IS_SMALL</code>; if the test set is small, set it to <code>CrossHint.SECOND_IS_SMALL</code>.
+             (Default value: <strong>None</strong>)
+          </p>
+        </td>
+      </tr>
+    </tbody>
+  </table>
+
+## Examples
+
+{% highlight scala %}
+import org.apache.flink.api.common.operators.base.CrossOperatorBase.CrossHint
+import org.apache.flink.api.scala._
+import org.apache.flink.ml.nn.KNN
+import org.apache.flink.ml.math.Vector
+import org.apache.flink.ml.metrics.distances.SquaredEuclideanDistanceMetric
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// prepare data
+val trainingSet: DataSet[Vector] = ...
+val testingSet: DataSet[Vector] = ...
+
+val knn = KNN()
+  .setK(3)
+  .setBlocks(10)
+  .setDistanceMetric(SquaredEuclideanDistanceMetric())
+  .setUseQuadTree(false)
+  .setSizeHint(CrossHint.SECOND_IS_SMALL)
+
+// run knn join
+knn.fit(trainingSet)
+val result = knn.predict(testingSet).collect()
+{% endhighlight %}
+
+For more details on computing KNN with and without a quadtree, see this presentation: [http://danielblazevski.github.io/](http://danielblazevski.github.io/)

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/min_max_scaler.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/min_max_scaler.md b/docs/dev/libs/ml/min_max_scaler.md
new file mode 100644
index 0000000..35376c3
--- /dev/null
+++ b/docs/dev/libs/ml/min_max_scaler.md
@@ -0,0 +1,112 @@
+---
+mathjax: include
+title: MinMax Scaler
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+ The MinMax scaler scales the given data set so that all values will lie within a user-specified range [min, max].
+ In case the user does not provide specific minimum and maximum values for the scaling range, the MinMax scaler transforms the features of the input data set to lie in the [0, 1] interval.
+ Given a set of input data $x_1, x_2,... x_n$, with minimum value:
+
+ $$x_{min} = min({x_1, x_2,..., x_n})$$
+
+ and maximum value:
+
+ $$x_{max} = max({x_1, x_2,..., x_n})$$
+
+The scaled data set $z_1, z_2,...,z_n$ will be:
+
+ $$z_{i}= \frac{x_{i} - x_{min}}{x_{max} - x_{min}} \left ( max - min \right ) + min$$
+
+where $\textit{min}$ and $\textit{max}$ are the user specified minimum and maximum values of the range to scale.
+
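+For example, scaling the input values $2, 4, 10$ to the default range $[0, 1]$ yields
+
+ $$z_{1} = \frac{2 - 2}{10 - 2} = 0, \quad z_{2} = \frac{4 - 2}{10 - 2} = 0.25, \quad z_{3} = \frac{10 - 2}{10 - 2} = 1$$
+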
+## Operations
+
+`MinMaxScaler` is a `Transformer`.
+As such, it supports the `fit` and `transform` operation.
+
+### Fit
+
+MinMaxScaler is trained on all subtypes of `Vector` or `LabeledVector`:
+
+* `fit[T <: Vector]: DataSet[T] => Unit`
+* `fit: DataSet[LabeledVector] => Unit`
+
+### Transform
+
+MinMaxScaler transforms all subtypes of `Vector` or `LabeledVector` into the respective type:
+
+* `transform[T <: Vector]: DataSet[T] => DataSet[T]`
+* `transform: DataSet[LabeledVector] => DataSet[LabeledVector]`
+
+## Parameters
+
+The MinMax scaler implementation can be controlled by the following two parameters:
+
+ <table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Parameters</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>Min</strong></td>
+      <td>
+        <p>
+          The minimum value of the range for the scaled data set. (Default value: <strong>0.0</strong>)
+        </p>
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Max</strong></td>
+      <td>
+        <p>
+          The maximum value of the range for the scaled data set. (Default value: <strong>1.0</strong>)
+        </p>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+## Examples
+
+{% highlight scala %}
+// Create MinMax scaler transformer
+val minMaxscaler = MinMaxScaler()
+  .setMin(-1.0)
+
+// Obtain data set to be scaled
+val dataSet: DataSet[Vector] = ...
+
+// Learn the minimum and maximum values of the training data
+minMaxscaler.fit(dataSet)
+
+// Scale the provided data set to have min=-1.0 and max=1.0
+val scaledDS = minMaxscaler.transform(dataSet)
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/multiple_linear_regression.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/multiple_linear_regression.md b/docs/dev/libs/ml/multiple_linear_regression.md
new file mode 100644
index 0000000..95ee85f
--- /dev/null
+++ b/docs/dev/libs/ml/multiple_linear_regression.md
@@ -0,0 +1,160 @@
+---
+mathjax: include
+title: Multiple Linear Regression
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+ Multiple linear regression tries to find a linear function which best fits the provided input data.
+ Given a set of input data points with their values $(\mathbf{x}_i, y_i)$, multiple linear regression finds
+ a vector $\mathbf{w}$ such that the sum of the squared residuals is minimized:
+
+ $$ S(\mathbf{w}) = \sum_{i=1}^n \left(y_i - \mathbf{w}^T\mathbf{x_i} \right)^2$$
+
+ Written in matrix notation, we obtain the following formulation:
+
+ $$\mathbf{w}^* = \arg \min_{\mathbf{w}} \left\| \mathbf{y} - X\mathbf{w} \right\|_2^2$$
+
+ This problem has a closed form solution which is given by:
+
+  $$\mathbf{w}^* = \left(X^TX\right)^{-1}X^T\mathbf{y}$$
+
+  However, in cases where the input data set is so huge that a complete pass over the whole data
+  set is prohibitive, one can apply stochastic gradient descent (SGD) to approximate the solution.
+  SGD first calculates the gradients for a random subset of the input data set. The gradient
+  for a given point $\mathbf{x}_i$ is given by:
+
+  $$\nabla_{\mathbf{w}} S(\mathbf{w}, \mathbf{x_i}) = 2\left(\mathbf{w}^T\mathbf{x_i} -
+    y_i\right)\mathbf{x_i}$$
+
+  The gradients are averaged and scaled. The scaling is defined by $\gamma = \frac{s}{\sqrt{j}}$
+  with $s$ being the initial step size and $j$ being the current iteration number. The resulting gradient is subtracted from the
+  current weight vector giving the new weight vector for the next iteration:
+
+  $$\mathbf{w}_{t+1} = \mathbf{w}_t - \gamma \frac{1}{n}\sum_{i=1}^n \nabla_{\mathbf{w}} S(\mathbf{w}, \mathbf{x_i})$$
+
+  The multiple linear regression algorithm either computes a fixed number of SGD iterations or terminates early based on a dynamic convergence criterion.
+  The convergence criterion is the relative change in the sum of squared residuals:
+
+  $$\frac{S_{k-1} - S_k}{S_{k-1}} < \rho$$
+
+## Operations
+
+`MultipleLinearRegression` is a `Predictor`.
+As such, it supports the `fit` and `predict` operation.
+
+### Fit
+
+MultipleLinearRegression is trained on a set of `LabeledVector`:
+
+* `fit: DataSet[LabeledVector] => Unit`
+
+### Predict
+
+MultipleLinearRegression predicts for all subtypes of `Vector` the corresponding regression value:
+
+* `predict[T <: Vector]: DataSet[T] => DataSet[LabeledVector]`
+
+If we call predict with a `DataSet[LabeledVector]`, we make a prediction on the regression value
+for each example, and return a `DataSet[(Double, Double)]`. In each tuple the first element
+is the true value, as was provided from the input `DataSet[LabeledVector]` and the second element
+is the predicted value. You can then use these `(truth, prediction)` tuples to evaluate
+the algorithm's performance, as sketched below.
+
+* `predict: DataSet[LabeledVector] => DataSet[(Double, Double)]`
+
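+For example, a mean squared error can be computed from these tuples as in the following
+sketch (which assumes an already fitted `mlr` and an evaluation `DataSet[LabeledVector]`):
+
+{% highlight scala %}
+val evaluationDS: DataSet[LabeledVector] = ...
+
+// (truth, prediction) pairs for every evaluation example
+val truthAndPrediction: DataSet[(Double, Double)] = mlr.predict(evaluationDS)
+
+// Sum up squared errors and counts, then divide to obtain the MSE
+val mse: DataSet[Double] = truthAndPrediction
+  .map { case (truth, prediction) => (math.pow(truth - prediction, 2), 1L) }
+  .reduce( (a, b) => (a._1 + b._1, a._2 + b._2) )
+  .map { case (sumSquaredError, count) => sumSquaredError / count }
+{% endhighlight %}
+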
+## Parameters
+
+  The multiple linear regression implementation can be controlled by the following parameters:
+
+   <table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 20%">Parameters</th>
+        <th class="text-center">Description</th>
+      </tr>
+    </thead>
+
+    <tbody>
+      <tr>
+        <td><strong>Iterations</strong></td>
+        <td>
+          <p>
+            The maximum number of iterations. (Default value: <strong>10</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Stepsize</strong></td>
+        <td>
+          <p>
+            Initial step size for the gradient descent method.
+            This value controls how far the gradient descent method moves in the opposite direction of the gradient.
+            Tuning this parameter might be crucial to make it stable and to obtain a better performance.
+            (Default value: <strong>0.1</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>ConvergenceThreshold</strong></td>
+        <td>
+          <p>
+            Threshold for relative change of the sum of squared residuals until the iteration is stopped.
+            (Default value: <strong>None</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>LearningRateMethod</strong></td>
+        <td>
+            <p>
+                Learning rate method used to calculate the effective learning rate for each iteration.
+                See the list of supported <a href="optimization.html">learning rate methods</a>.
+                (Default value: <strong>LearningRateMethod.Default</strong>)
+            </p>
+        </td>
+      </tr>
+    </tbody>
+  </table>
+
+## Examples
+
+{% highlight scala %}
+// Create multiple linear regression learner
+val mlr = MultipleLinearRegression()
+  .setIterations(10)
+  .setStepsize(0.5)
+  .setConvergenceThreshold(0.001)
+
+// Obtain training and testing data set
+val trainingDS: DataSet[LabeledVector] = ...
+val testingDS: DataSet[Vector] = ...
+
+// Fit the linear model to the provided data
+mlr.fit(trainingDS)
+
+// Calculate the predictions for the test data
+val predictions = mlr.predict(testingDS)
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/optimization.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/optimization.md b/docs/dev/libs/ml/optimization.md
new file mode 100644
index 0000000..e3e2f63
--- /dev/null
+++ b/docs/dev/libs/ml/optimization.md
@@ -0,0 +1,382 @@
+---
+mathjax: include
+title: Optimization
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* Table of contents
+{:toc}
+
+## Mathematical Formulation
+
+The optimization framework in FlinkML is a developer-oriented package that can be used to solve
+[optimization](https://en.wikipedia.org/wiki/Mathematical_optimization)
+problems common in Machine Learning (ML) tasks. In the supervised learning context, this usually
+involves finding a model, as defined by a set of parameters $\wv$, that minimizes a function $f(\wv)$
+given a set of $(\x, y)$ examples,
+where $\x$ is a feature vector and $y$ is a real number, which can represent either a real value in
+the regression case, or a class label in the classification case. In supervised learning, the
+function to be minimized is usually of the form:
+
+
+\begin{equation} \label{eq:objectiveFunc}
+    f(\wv) :=
+    \frac1n \sum_{i=1}^n L(\wv;\x_i,y_i) +
+    \lambda\, R(\wv)
+    \ .
+\end{equation}
+
+
+where $L$ is the loss function and $R(\wv)$ the regularization penalty. We use $L$ to measure how
+well the model fits the observed data, and we use $R$ in order to impose a complexity cost to the
+model, with $\lambda > 0$ being the regularization parameter.
+
+### Loss Functions
+
+In supervised learning, we use loss functions in order to measure the model fit, by
+penalizing errors in the predictions $p$ made by the model compared to the true $y$ for each
+example. Different loss functions can be used for regression (e.g. Squared Loss) and classification
+(e.g. Hinge Loss) tasks.
+
+Some common loss functions are:
+
+* Squared Loss: $ \frac{1}{2} \left(\wv^T \cdot \x - y\right)^2, \quad y \in \R $
+* Hinge Loss: $ \max \left(0, 1 - y ~ \wv^T \cdot \x\right), \quad y \in \{-1, +1\} $
+* Logistic Loss: $ \log\left(1+\exp\left( -y ~ \wv^T \cdot \x\right)\right), \quad y \in \{-1, +1\}$
+
+### Regularization Types
+
+[Regularization](https://en.wikipedia.org/wiki/Regularization_(mathematics)) in machine learning
+imposes penalties on the estimated models, in order to reduce overfitting. The most common penalties
+are the $L_1$ and $L_2$ penalties, defined as:
+
+* $L_1$: $R(\wv) = \norm{\wv}_1$
+* $L_2$: $R(\wv) = \frac{1}{2}\norm{\wv}_2^2$
+
+The $L_2$ penalty penalizes large weights, favoring solutions with more small weights rather than
+few large ones.
+The $L_1$ penalty can be used to drive a number of the solution coefficients to 0, thereby
+producing sparse solutions.
+The regularization constant $\lambda$ in $\eqref{eq:objectiveFunc}$ determines the amount of regularization applied to the model,
+and is usually determined through model cross-validation.
+A good comparison of regularization types can be found in [this](http://www.robotics.stanford.edu/~ang/papers/icml04-l1l2.pdf) paper by Andrew Ng.
+Which regularization types are supported depends on the optimization algorithm that is actually used.
+
+## Stochastic Gradient Descent
+
+In order to find a (local) minimum of a function, Gradient Descent methods take steps in the
+direction opposite to the gradient of the function $\eqref{eq:objectiveFunc}$ taken with
+respect to the current parameters (weights).
+In order to compute the exact gradient we need to perform one pass through all the points in
+a dataset, making the process computationally expensive.
+An alternative is Stochastic Gradient Descent (SGD), where at each iteration we sample one point
+from the complete dataset and update the parameters based on that point, in an online manner.
+
+In mini-batch SGD we instead sample random subsets of the dataset, and compute the gradient
+over each batch. At each iteration of the algorithm we update the weights once, based on
+the average of the gradients computed from each mini-batch.
+
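+The following self-contained sketch illustrates a single mini-batch update for the squared
+loss; it is illustrative only and not FlinkML's distributed implementation:
+
+{% highlight scala %}
+// One mini-batch SGD step: w <- w - eta * (1/|B|) * sum_i 2 * (w^T x_i - y_i) * x_i
+def sgdStep(
+    weights: Array[Double],
+    batch: Seq[(Array[Double], Double)], // (features x_i, label y_i)
+    stepSize: Double): Array[Double] = {
+  val gradient = new Array[Double](weights.length)
+  for ((x, y) <- batch) {
+    val prediction = (weights, x).zipped.map(_ * _).sum
+    val factor = 2.0 * (prediction - y) / batch.size
+    for (j <- weights.indices) gradient(j) += factor * x(j)
+  }
+  (weights, gradient).zipped.map((w, g) => w - stepSize * g)
+}
+{% endhighlight %}
+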
+An important parameter is the learning rate $\eta$, or step size, which can be determined by one of five methods, listed below. The setting of the initial step size can significantly affect the performance of the
+algorithm. For some practical tips on tuning SGD, see Léon Bottou's
+"[Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf)".
+
+The current implementation of SGD uses the whole partition, making it
+effectively a batch gradient descent. Once a sampling operator has been introduced in Flink, true
+mini-batch SGD will be performed.
+
+### Regularization
+
+FlinkML supports Stochastic Gradient Descent with L1, L2 and no regularization.
+The following list contains a mapping between the implementing classes and the regularization function.
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Class Name</th>
+      <th class="text-center">Regularization function $R(\wv)$</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td><code>SimpleGradient</code></td>
+      <td>$R(\wv) = 0$</td>
+    </tr>
+    <tr>
+      <td><code>GradientDescentL1</code></td>
+      <td>$R(\wv) = \norm{\wv}_1$</td>
+    </tr>
+    <tr>
+      <td><code>GradientDescentL2</code></td>
+      <td>$R(\wv) = \frac{1}{2}\norm{\wv}_2^2$</td>
+    </tr>
+  </tbody>
+</table>
+
+### Parameters
+
+  The stochastic gradient descent implementation can be controlled by the following parameters:
+
+   <table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 20%">Parameter</th>
+        <th class="text-center">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+      <tr>
+        <td><strong>LossFunction</strong></td>
+        <td>
+          <p>
+            The loss function to be optimized. (Default value: <strong>None</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>RegularizationConstant</strong></td>
+        <td>
+          <p>
+            The amount of regularization to apply. (Default value: <strong>0.1</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Iterations</strong></td>
+        <td>
+          <p>
+            The maximum number of iterations. (Default value: <strong>10</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>LearningRate</strong></td>
+        <td>
+          <p>
+            Initial learning rate for the gradient descent method.
+            This value controls how far the gradient descent method moves in the opposite direction
+            of the gradient.
+            (Default value: <strong>0.1</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>ConvergenceThreshold</strong></td>
+        <td>
+          <p>
+            When set, iterations stop if the relative change in the value of the objective function $\eqref{eq:objectiveFunc}$ is less than the provided threshold, $\tau$.
+            The convergence criterion is defined as follows: $\left| \frac{f(\wv)_{i-1} - f(\wv)_i}{f(\wv)_{i-1}}\right| < \tau$.
+            (Default value: <strong>None</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>LearningRateMethod</strong></td>
+        <td>
+          <p>
+            The method used to calculate the effective learning rate for each iteration.
+            See the list of supported learning rate methods below.
+            (Default value: <strong>LearningRateMethod.Default</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Decay</strong></td>
+        <td>
+          <p>
+            The decay constant $\tau$ used by some of the learning rate methods listed below.
+            (Default value: <strong>0.0</strong>)
+          </p>
+        </td>
+      </tr>
+    </tbody>
+  </table>
+
+### Loss Function
+
+The loss function to be minimized has to implement the `LossFunction` interface, which defines methods to compute the loss and its gradient.
+One either defines one's own `LossFunction` or uses the `GenericLossFunction` class, which constructs the loss function from an outer loss function and a prediction function.
+An example can be seen here:
+
+```Scala
+val lossFunction = GenericLossFunction(SquaredLoss, LinearPrediction)
+```
+
+The full list of supported outer loss functions can be found [here](#partial-loss-function-values).
+The full list of supported prediction functions can be found [here](#prediction-function-values).
+
+#### Partial Loss Function Values ##
+
+  <table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 20%">Function Name</th>
+        <th class="text-center">Description</th>
+        <th class="text-center">Loss</th>
+        <th class="text-center">Loss Derivative</th>
+      </tr>
+    </thead>
+    <tbody>
+      <tr>
+        <td><strong>SquaredLoss</strong></td>
+        <td>
+          <p>
+            Loss function most commonly used for regression tasks.
+          </p>
+        </td>
+        <td class="text-center">$\frac{1}{2} (\wv^T \cdot \x - y)^2$</td>
+        <td class="text-center">$\wv^T \cdot \x - y$</td>
+      </tr>
+    </tbody>
+  </table>
+
+#### Prediction Function Values ##
+
+  <table class="table table-bordered">
+      <thead>
+        <tr>
+          <th class="text-left" style="width: 20%">Function Name</th>
+          <th class="text-center">Description</th>
+          <th class="text-center">Prediction</th>
+          <th class="text-center">Prediction Gradient</th>
+        </tr>
+      </thead>
+      <tbody>
+        <tr>
+          <td><strong>LinearPrediction</strong></td>
+          <td>
+            <p>
+              The function most commonly used for linear models, such as linear regression and
+              linear classifiers.
+            </p>
+          </td>
+          <td class="text-center">$\x^T \cdot \wv$</td>
+          <td class="text-center">$\x$</td>
+        </tr>
+      </tbody>
+    </table>
+
+#### Effective Learning Rate ##
+
+The effective learning rate $\eta_j$ at each iteration is computed by one of the methods in the table below, where:
+
+- $j$ is the iteration number
+
+- $\eta_j$ is the step size on step $j$
+
+- $\eta_0$ is the initial step size
+
+- $\lambda$ is the regularization constant
+
+- $\tau$ is the decay constant, which causes the learning rate to be a decreasing function of $j$; that is, as iterations increase, the learning rate decreases. The exact rate of decay is function specific; see **Inverse Scaling** and **Wei Xu's Method** (which is an extension of the **Inverse Scaling** method).
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 20%">Function Name</th>
+        <th class="text-center">Description</th>
+        <th class="text-center">Function</th>
+        <th class="text-center">Called As</th>
+      </tr>
+    </thead>
+    <tbody>
+      <tr>
+        <td><strong>Default</strong></td>
+        <td>
+          <p>
+            The default method used for determining the step size. It is equivalent to the inverse scaling method with $\tau = 0.5$. This special case is kept as the default to maintain backwards compatibility.
+          </p>
+        </td>
+        <td class="text-center">$\eta_j = \eta_0/\sqrt{j}$</td>
+        <td class="text-center"><code>LearningRateMethod.Default</code></td>
+      </tr>
+      <tr>
+        <td><strong>Constant</strong></td>
+        <td>
+          <p>
+            The step size is constant throughout the learning task.
+          </p>
+        </td>
+        <td class="text-center">$\eta_j = \eta_0$</td>
+        <td class="text-center"><code>LearningRateMethod.Constant</code></td>
+      </tr>
+      <tr>
+        <td><strong>Leon Bottou's Method</strong></td>
+        <td>
+          <p>
+            This is the <code>'optimal'</code> method of sklearn.
+            The optimal initial value $t_0$ has to be provided.
+            Sklearn uses the following heuristic: $t_0 = \max(1.0, L^\prime(-\beta, 1.0) / (\alpha \cdot \beta))$
+            with $\beta = \sqrt{\frac{1}{\sqrt{\alpha}}}$ and $L^\prime(prediction, truth)$ being the derivative of the loss function.
+          </p>
+        </td>
+        <td class="text-center">$\eta_j = 1 / (\lambda \cdot (t_0 + j -1)) $</td>
+        <td class="text-center"><code>LearningRateMethod.Bottou</code></td>
+      </tr>
+      <tr>
+        <td><strong>Inverse Scaling</strong></td>
+        <td>
+          <p>
+            A very common method for determining the step size.
+          </p>
+        </td>
+        <td class="text-center">$\eta_j = \eta_0 / j^{\tau}$</td>
+        <td class="text-center"><code>LearningRateMethod.InvScaling</code></td>
+      </tr>
+      <tr>
+        <td><strong>Wei Xu's Method</strong></td>
+        <td>
+          <p>
+            Method proposed by Wei Xu in <a href="http://arxiv.org/pdf/1107.2490.pdf">Towards Optimal One Pass Large Scale Learning with
+            Averaged Stochastic Gradient Descent</a>
+          </p>
+        </td>
+        <td class="text-center">$\eta_j = \eta_0 \cdot (1+ \lambda \cdot \eta_0 \cdot j)^{-\tau} $</td>
+        <td class="text-center"><code>LearningRateMethod.Xu</code></td>
+      </tr>
+    </tbody>
+  </table>
+
+### Examples
+
+In the Flink implementation of SGD, given a set of examples in a `DataSet[LabeledVector]` and
+optionally some initial weights, we can use `GradientDescentL1.optimize()` in order to optimize
+the weights for the given data.
+
+The user can provide an initial `DataSet[WeightVector]`,
+which contains one `WeightVector` element, or use the default weights which are all set to 0.
+A `WeightVector` is a container class for the weights, which separates the intercept from the
+weight vector. This allows us to avoid applying regularization to the intercept.
+
+
+
+{% highlight scala %}
+// Create stochastic gradient descent solver
+val sgd = GradientDescentL1()
+  .setLossFunction(SquaredLoss())
+  .setRegularizationConstant(0.2)
+  .setIterations(100)
+  .setLearningRate(0.01)
+  .setLearningRateMethod(LearningRateMethod.Xu(-0.75))
+
+
+// Obtain data
+val trainingDS: DataSet[LabeledVector] = ...
+
+// Optimize the weights, according to the provided data
+val weightDS = sgd.optimize(trainingDS)
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/pipelines.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/pipelines.md b/docs/dev/libs/ml/pipelines.md
new file mode 100644
index 0000000..e0f7d82
--- /dev/null
+++ b/docs/dev/libs/ml/pipelines.md
@@ -0,0 +1,441 @@
+---
+mathjax: include
+title: Looking under the hood of pipelines
+nav-title: Pipelines
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Introduction
+
+The ability to chain together different transformers and predictors is an important feature for
+any Machine Learning (ML) library. In FlinkML we wanted to provide an intuitive API,
+and at the same
+time utilize the capabilities of the Scala language to provide
+type-safe implementations of our pipelines. What we hope to achieve is an easy-to-use API
+that protects users from type errors at pre-flight time (before the job is launched), thereby
+eliminating cases where long-running jobs are submitted to the cluster only to fail due to some
+error in the series of data transformations that commonly happen in an ML pipeline.
+
+In this guide we will describe the choices we made during the implementation of chainable
+transformers and predictors in FlinkML, and provide guidelines on how developers can create their
+own algorithms that make use of these capabilities.
+
+## The what and the why
+
+So what do we mean by "ML pipelines"? Pipelines in the ML context can be thought of as chains of
+operations that take some data as input, perform a number of transformations on that data, and
+then output the transformed data, either to be used as the input (features) of a predictor
+function, such as a learning model, or simply as the result, to be used in
+some other task. The end learner can of course be a part of the pipeline as well.
+ML pipelines can often be complicated sets of operations ([in-depth explanation](http://research.google.com/pubs/pub43146.html)) and
+can become sources of errors for end-to-end learning systems.
+
+The purpose of ML pipelines is then to create a
+framework that can be used to manage the complexity introduced by these chains of operations.
+Pipelines should make it easy for developers to define chained transformations that can be
+applied to the
+training data, in order to create the end features that will be used to train a
+learning model, and then perform the same set of transformations just as easily to unlabeled
+(test) data. Pipelines should also simplify cross-validation and model selection on
+these chains of operations.
+
+Finally, by ensuring that the consecutive links in the pipeline chain "fit together" we also
+avoid costly type errors. Since each step in a pipeline can be a computationally-heavy operation,
+we want to avoid running a pipelined job, unless we are sure that all the input/output pairs in a
+pipeline "fit".
+
+## Pipelines in FlinkML
+
+The building blocks for pipelines in FlinkML can be found in the `ml.pipeline` package.
+FlinkML follows an API inspired by [sklearn](http://scikit-learn.org) which means that we have
+`Estimator`, `Transformer` and `Predictor` interfaces. For an in-depth look at the design of the
+sklearn API the interested reader is referred to [this](http://arxiv.org/abs/1309.0238) paper.
+In short, the `Estimator` is the base class from which `Transformer` and `Predictor` inherit.
+`Estimator` defines a `fit` method; in addition, `Transformer` defines a `transform` method and
+`Predictor` defines a `predict` method.
+
+The `fit` method of the `Estimator` performs the actual training of the model, for example
+finding the correct weights in a linear regression task, or the mean and standard deviation of
+the data in a feature scaler.
+As the naming suggests, classes that implement
+`Transformer` are transform operations like [scaling the input](standard_scaler.html) and
+`Predictor` implementations are learning algorithms such as [Multiple Linear Regression]({{site.baseurl}}/dev/libs/ml/multiple_linear_regression.html).
+Pipelines can be created by chaining together a number of Transformers, and the final link in a pipeline can be a Predictor or another Transformer.
+Pipelines that end with Predictor cannot be chained any further.
+Below is an example of how a pipeline can be formed:
+
+{% highlight scala %}
+// Training data
+val input: DataSet[LabeledVector] = ...
+// Test data
+val unlabeled: DataSet[Vector] = ...
+
+val scaler = StandardScaler()
+val polyFeatures = PolynomialFeatures()
+val mlr = MultipleLinearRegression()
+
+// Construct the pipeline
+val pipeline = scaler
+  .chainTransformer(polyFeatures)
+  .chainPredictor(mlr)
+
+// Train the pipeline (scaler and multiple linear regression)
+pipeline.fit(input)
+
+// Calculate predictions for the testing data
+val predictions: DataSet[LabeledVector] = pipeline.predict(unlabeled)
+
+{% endhighlight %}
+
+As we mentioned, FlinkML pipelines are type-safe.
+If we tried to chain a transformer with output of type `A` to another with input of type `B` we
+would get an error at pre-flight time if `A` != `B`. FlinkML achieves this kind of type-safety
+through the use of Scala's implicits.
+
+### Scala implicits
+
+If you are not familiar with Scala's implicits we can recommend [this excerpt](https://www.artima.com/pins1ed/implicit-conversions-and-parameters.html)
+from Martin Odersky's "Programming in Scala". In short, implicit conversions allow for ad-hoc
+polymorphism in Scala by providing conversions from one type to another, and implicit values
+provide the compiler with default values that can be supplied to function calls through implicit parameters.
+The combination of implicit conversions and implicit parameters is what allows us to chain transform
+and predict operations together in a type-safe manner.
+
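+As a toy sketch of the mechanism (the names here are illustrative, not FlinkML API), an
+implicit parameter lets the compiler pick an operation based on the types involved:
+
+{% highlight scala %}
+trait Op[T] { def describe(value: T): String }
+
+object Op {
+  // Found automatically whenever an Op[Int] is required
+  implicit val intOp: Op[Int] = new Op[Int] {
+    def describe(value: Int): String = s"int: $value"
+  }
+}
+
+def describe[T](value: T)(implicit op: Op[T]): String = op.describe(value)
+
+describe(42)   // compiles: an implicit Op[Int] is in scope
+// describe("x") would fail at compile time: no implicit Op[String] exists
+{% endhighlight %}
+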
+### Operations
+
+As we mentioned, the trait (abstract class) `Estimator` defines a `fit` method. The method has two
+parameter lists
+(i.e. is a [curried function](http://docs.scala-lang.org/tutorials/tour/currying.html)). The
+first parameter list
+takes the input (training) `DataSet` and the parameters for the estimator. The second parameter
+list takes one `implicit` parameter, of type `FitOperation`. `FitOperation` is a class that also
+defines a `fit` method, and this is where the actual logic of training the concrete Estimators
+should be implemented. The `fit` method of `Estimator` is essentially a wrapper around the `fit`
+method of `FitOperation`. The `predict` method of `Predictor` and the `transform` method of
+`Transformer` are designed in a similar manner, with a respective operation class.
+
+In these methods the operation object is provided as an implicit parameter.
+Scala will [look for implicits](http://docs.scala-lang.org/tutorials/FAQ/finding-implicits.html)
+in the companion object of a type, so classes that implement these interfaces should provide these
+objects as implicit objects inside the companion object.
+
+As an example we can look at the `StandardScaler` class. `StandardScaler` extends `Transformer`, so it has access to its `fit` and `transform` functions.
+These two functions expect objects of `FitOperation` and `TransformOperation` as implicit parameters,
+for the `fit` and `transform` methods respectively, which `StandardScaler` provides in its companion
+object, through `transformVectors` and `fitVectorStandardScaler`:
+
+{% highlight scala %}
+class StandardScaler extends Transformer[StandardScaler] {
+  ...
+}
+
+object StandardScaler {
+
+  ...
+
+  implicit def fitVectorStandardScaler[T <: Vector] = new FitOperation[StandardScaler, T] {
+    override def fit(instance: StandardScaler, fitParameters: ParameterMap, input: DataSet[T])
+      : Unit = {
+        ...
+      }
+  }
+
+  implicit def transformVectors[T <: Vector: VectorConverter: TypeInformation: ClassTag] = {
+    new TransformOperation[StandardScaler, T, T] {
+      override def transform(
+          instance: StandardScaler,
+          transformParameters: ParameterMap,
+          input: DataSet[T])
+        : DataSet[T] = {
+          ...
+        }
+    }
+  }
+}
+
+{% endhighlight %}
+
+Note that `StandardScaler` does **not** override the `fit` method of `Estimator` or the `transform`
+method of `Transformer`. Rather, its implementations of `FitOperation` and `TransformOperation`
+override their respective `fit` and `transform` methods, which are then called by the `fit` and
+`transform` methods of `Estimator` and `Transformer`.  Similarly, a class that implements
+`Predictor` should define an implicit `PredictOperation` object inside its companion object.
+
+#### Types and type safety
+
+Apart from the `fit` and `transform` operations that we listed above, the `StandardScaler` also
+provides `fit` and `transform` operations for input of type `LabeledVector`.
+This allows us to use the algorithm for input that is labeled or unlabeled, and this happens
+automatically, depending on the type of the input that we give to the fit and transform
+operations. The correct implicit operation is chosen by the compiler, depending on the input type.
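+
+For example, both of the following calls compile, and the compiler selects a different
+implicit operation for each, based on the element type of the input (a sketch, using the
+`...` placeholder convention of the other snippets):
+
+{% highlight scala %}
+val scaler = StandardScaler()
+
+val vectors: DataSet[DenseVector] = ...
+val labeled: DataSet[LabeledVector] = ...
+
+scaler.fit(vectors)  // resolves FitOperation[StandardScaler, DenseVector]
+scaler.fit(labeled)  // resolves FitOperation[StandardScaler, LabeledVector]
+{% endhighlight %}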
+
+If we try to call the `fit` or `transform` methods with types that are not supported, we will get
+a runtime error before the job is launched.
+While it would be possible to catch these kinds of errors at compile time as well, the error
+messages that we are able to provide the user would be much less informative, which is why we chose
+to throw runtime exceptions instead.
+
+### Chaining
+
+Chaining is achieved by calling `chainTransformer` or `chainPredictor` on an object
+of a class that implements `Transformer`. These methods return a `ChainedTransformer` or
+`ChainedPredictor` object respectively. As we mentioned, `ChainedTransformer` objects can be
+chained further, while `ChainedPredictor` objects cannot. These classes take care of applying
+fit, transform, and predict operations for a pair of successive transformers or
+a transformer and a predictor. They also act recursively if the length of the
+chain is larger than two, since every `ChainedTransformer` defines a `transform` and `fit`
+operation that can be further chained with more transformers or a predictor.
+
+It is important to note that developers and users do not need to worry about chaining when
+implementing their algorithms; all of this is handled automatically by FlinkML.
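+
+To make the recursion concrete, here is a simplified, self-contained sketch of the idea; the
+actual FlinkML classes dispatch through the implicit operation objects instead:
+
+{% highlight scala %}
+trait SimpleTransformer[I, O] { self =>
+  def fit(data: Seq[I]): Unit = ()   // default: nothing to learn
+  def transform(data: Seq[I]): Seq[O]
+
+  // Chaining yields another transformer, so chains of any length compose.
+  def chain[O2](next: SimpleTransformer[O, O2]): SimpleTransformer[I, O2] =
+    new SimpleTransformer[I, O2] {
+      // Fit the first stage, then fit the second stage on the transformed
+      // data, mirroring how a pipeline is trained stage by stage.
+      override def fit(data: Seq[I]): Unit = {
+        self.fit(data)
+        next.fit(self.transform(data))
+      }
+      override def transform(data: Seq[I]): Seq[O2] =
+        next.transform(self.transform(data))
+    }
+}
+{% endhighlight %}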
+
+### How to Implement a Pipeline Operator
+
+In order to support FlinkML's pipelining, algorithms have to adhere to a certain design pattern, which we will describe in this section.
+Let's assume that we want to implement a pipeline operator which changes the mean of your data.
+Since centering data is a common pre-processing step in many analysis pipelines, we will implement it as a `Transformer`.
+Therefore, we first create a `MeanTransformer` class which inherits from `Transformer`:
+
+{% highlight scala %}
+class MeanTransformer extends Transformer[MeanTransformer] {}
+{% endhighlight %}
+
+Since we want to be able to configure the mean of the resulting data, we have to add a configuration parameter.
+
+{% highlight scala %}
+class MeanTransformer extends Transformer[MeanTransformer] {
+  def setMean(mean: Double): this.type = {
+    parameters.add(MeanTransformer.Mean, mean)
+    this
+  }
+}
+
+object MeanTransformer {
+  case object Mean extends Parameter[Double] {
+    override val defaultValue: Option[Double] = Some(0.0)
+  }
+
+  def apply(): MeanTransformer = new MeanTransformer
+}
+{% endhighlight %}
+
+Parameters are defined in the companion object of the transformer class and extend the `Parameter` class.
+Since the parameter instances are supposed to act as immutable keys for a parameter map, they should be implemented as `case objects`.
+The default value will be used if no other value has been set by the user of this component.
+If no default value has been specified, meaning that `defaultValue = None`, then the algorithm has to handle this situation accordingly.
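+
+A small usage sketch (assuming, as in the snippets here, that the `parameters` map of the
+component is accessible and that its lookup falls back to the declared default):
+
+{% highlight scala %}
+val transformer = MeanTransformer()
+
+// Nothing has been set yet: the lookup falls back to the default value 0.0.
+val defaultMean = transformer.parameters(MeanTransformer.Mean)
+
+// After setting the parameter, the configured value 2.0 is returned.
+val configuredMean = transformer.setMean(2.0).parameters(MeanTransformer.Mean)
+{% endhighlight %}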
+
+We can now instantiate a `MeanTransformer` object and set the mean value of the transformed data.
+But we still have to implement how the transformation works.
+The workflow can be separated into two phases.
+Within the first phase, the transformer learns the mean of the given training data.
+This knowledge can then be used in the second phase to transform the provided data with respect to the configured resulting mean value.
+
+The learning of the mean can be implemented within the `fit` operation of our `Transformer`, which it inherited from `Estimator`.
+Within the `fit` operation, a pipeline component is trained with respect to the given training data.
+The algorithm is, however, **not** implemented by overriding the `fit` method but by providing an implementation of a corresponding `FitOperation` for the correct type.
+Taking a look at the definition of the `fit` method in `Estimator`, which is the parent class of `Transformer`, reveals why this is the case.
+
+{% highlight scala %}
+trait Estimator[Self] extends WithParameters with Serializable {
+  that: Self =>
+
+  def fit[Training](
+      training: DataSet[Training],
+      fitParameters: ParameterMap = ParameterMap.Empty)
+      (implicit fitOperation: FitOperation[Self, Training]): Unit = {
+    FlinkMLTools.registerFlinkMLTypes(training.getExecutionEnvironment)
+    fitOperation.fit(this, fitParameters, training)
+  }
+}
+{% endhighlight %}
+
+We see that the `fit` method is called with an input data set of type `Training`, an optional parameter map and, in the second parameter list, an implicit parameter of type `FitOperation`.
+Within the body of the function, first some machine learning types are registered and then the `fit` method of the `FitOperation` parameter is called.
+The instance passes itself, the parameter map and the training data set as parameters to the method.
+Thus, all the program logic takes place within the `FitOperation`.
+
+The `FitOperation` has two type parameters.
+The first defines the pipeline operator type for which this `FitOperation` shall work and the second type parameter defines the type of the data set elements.
+If we first wanted to implement the `MeanTransformer` to work on `DenseVector`, we would, thus, have to provide an implementation for `FitOperation[MeanTransformer, DenseVector]`.
+
+{% highlight scala %}
+val denseVectorMeanFitOperation = new FitOperation[MeanTransformer, DenseVector] {
+  override def fit(instance: MeanTransformer, fitParameters: ParameterMap, input: DataSet[DenseVector]) : Unit = {
+    import org.apache.flink.ml.math.Breeze._
+    val meanTrainingData: DataSet[DenseVector] = input
+      .map{ x => (x.asBreeze, 1) }
+      .reduce{
+        (left, right) =>
+          (left._1 + right._1, left._2 + right._2)
+      }
+      .map{ p => (p._1/p._2).fromBreeze }
+  }
+}
+{% endhighlight %}
+
+A `FitOperation[T, I]` has a `fit` method which is called with an instance of type `T`, a parameter map and an input `DataSet[I]`.
+In our case `T=MeanTransformer` and `I=DenseVector`.
+The parameter map is necessary if our fit step depends on some parameter values which were not given directly at creation time of the `Transformer`.
+The `FitOperation` of the `MeanTransformer` sums up the `DenseVector` instances of the given input data set and divides the result by the total number of vectors.
+That way, we obtain a `DataSet[DenseVector]` with a single element which is the mean value.
+
+But if we look closely at the implementation, we see that the result of the mean computation is never stored anywhere.
+If we want to use this knowledge in a later step to adjust the mean of some other input, we have to keep it around.
+And here is where the parameter of type `MeanTransformer`, which is given to the `fit` method, comes into play.
+We can use this instance to store state which is used by a subsequent `transform` operation working on the same object.
+But first we have to add a member field to `MeanTransformer` and then adjust the `FitOperation` implementation.
+
+{% highlight scala %}
+class MeanTransformer extends Transformer[MeanTransformer] {
+  var meanOption: Option[DataSet[DenseVector]] = None
+
+  def setMean(mean: Double): this.type = {
+    parameters.add(MeanTransformer.Mean, mean)
+    this
+  }
+}
+
+val denseVectorMeanFitOperation = new FitOperation[MeanTransformer, DenseVector] {
+  override def fit(instance: MeanTransformer, fitParameters: ParameterMap, input: DataSet[DenseVector]) : Unit = {
+    import org.apache.flink.ml.math.Breeze._
+
+    instance.meanOption = Some(input
+      .map{ x => (x.asBreeze, 1) }
+      .reduce{
+        (left, right) =>
+          (left._1 + right._1, left._2 + right._2)
+      }
+      .map{ p => (p._1/p._2).fromBreeze })
+  }
+}
+{% endhighlight %}
+
+If we look at the `transform` method in `Transformer`, we will see that we also need an implementation of `TransformOperation`.
+A possible mean-transforming implementation could look like the following.
+
+{% highlight scala %}
+
+val denseVectorMeanTransformOperation = new TransformOperation[MeanTransformer, DenseVector, DenseVector] {
+  override def transform(
+      instance: MeanTransformer,
+      transformParameters: ParameterMap,
+      input: DataSet[DenseVector])
+    : DataSet[DenseVector] = {
+    val resultingParameters = instance.parameters ++ transformParameters
+
+    val resultingMean = resultingParameters(MeanTransformer.Mean)
+
+    instance.meanOption match {
+      case Some(trainingMean) => {
+        input.map{ new MeanTransformMapper(resultingMean) }.withBroadcastSet(trainingMean, "trainingMean")
+      }
+      case None => throw new RuntimeException("MeanTransformer has not been fitted to data.")
+    }
+  }
+}
+
+class MeanTransformMapper(resultingMean: Double) extends RichMapFunction[DenseVector, DenseVector] {
+  var trainingMean: DenseVector = null
+
+  override def open(parameters: Configuration): Unit = {
+    trainingMean = getRuntimeContext().getBroadcastVariable[DenseVector]("trainingMean").get(0)
+  }
+
+  override def map(vector: DenseVector): DenseVector = {
+    import org.apache.flink.ml.math.Breeze._
+
+    val result = vector.asBreeze - trainingMean.asBreeze + resultingMean
+
+    result.fromBreeze
+  }
+}
+{% endhighlight %}
+
+Now we have everything implemented to fit our `MeanTransformer` to a training data set of `DenseVector` instances and to transform them.
+However, when we execute the `fit` operation
+
+{% highlight scala %}
+val trainingData: DataSet[DenseVector] = ...
+val meanTransformer = MeanTransformer()
+
+meanTransformer.fit(trainingData)
+{% endhighlight %}
+
+we receive the following error at runtime: `"There is no FitOperation defined for class MeanTransformer which trains on a DataSet[org.apache.flink.ml.math.DenseVector]"`.
+The reason is that the Scala compiler could not find a fitting `FitOperation` value with the right type parameters for the implicit parameter of the `fit` method.
+Therefore, it chose a fallback implicit value which gives you this error message at runtime.
+In order to make the compiler aware of our implementation, we have to define it as an implicit value and put it in the scope of `MeanTransformer`'s companion object.
+
+{% highlight scala %}
+object MeanTransformer {
+  implicit val denseVectorMeanFitOperation = new FitOperation[MeanTransformer, DenseVector] ...
+
+  implicit val denseVectorMeanTransformOperation = new TransformOperation[MeanTransformer, DenseVector, DenseVector] ...
+}
+{% endhighlight %}
+
+Now we can call `fit` and `transform` of our `MeanTransformer` with `DataSet[DenseVector]` as input.
+Furthermore, we can now use this transformer as part of an analysis pipeline where we have a `DenseVector` as input and expected output.
+
+{% highlight scala %}
+val trainingData: DataSet[DenseVector] = ...
+
+val mean = MeanTransformer().setMean(1.0)
+val polyFeatures = PolynomialFeatures().setDegree(3)
+
+val pipeline = mean.chainTransformer(polyFeatures)
+
+pipeline.fit(trainingData)
+{% endhighlight %}
+
+It is noteworthy that there is no additional code needed to enable chaining.
+The system automatically constructs the pipeline logic using the operations of the individual components.
+
+So far everything works fine with `DenseVector`.
+But what happens if we call our transformer with `LabeledVector` instead?
+{% highlight scala %}
+val trainingData: DataSet[LabeledVector] = ...
+
+val mean = MeanTransformer()
+
+mean.fit(trainingData)
+{% endhighlight %}
+
+As before we see the following exception upon execution of the program: `"There is no FitOperation defined for class MeanTransformer which trains on a DataSet[org.apache.flink.ml.common.LabeledVector]"`.
+It is noteworthy that this exception is thrown in the pre-flight phase, which means that the job has not been submitted to the runtime system.
+This has the advantage that you won't see a job which runs for a couple of days and then fails because of an incompatible pipeline component.
+Type compatibility is, thus, checked at the very beginning for the complete job.
+
+In order to make the `MeanTransformer` work on `LabeledVector` as well, we have to provide the corresponding operations.
+Consequently, we have to define a `FitOperation[MeanTransformer, LabeledVector]` and `TransformOperation[MeanTransformer, LabeledVector, LabeledVector]` as implicit values in the scope of `MeanTransformer`'s companion object.
+
+{% highlight scala %}
+object MeanTransformer {
+  implicit val labeledVectorFitOperation = new FitOperation[MeanTransformer, LabeledVector] ...
+
+  implicit val labeledVectorTransformOperation = new TransformOperation[MeanTransformer, LabeledVector, LabeledVector] ...
+}
+{% endhighlight %}
+
+If we wanted to implement a `Predictor` instead of a `Transformer`, then we would have to provide a `FitOperation`, too.
+Moreover, a `Predictor` requires a `PredictOperation` which implements how predictions are calculated from testing data.  

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/polynomial_features.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/polynomial_features.md b/docs/dev/libs/ml/polynomial_features.md
new file mode 100644
index 0000000..676c132
--- /dev/null
+++ b/docs/dev/libs/ml/polynomial_features.md
@@ -0,0 +1,108 @@
+---
+mathjax: include
+title: Polynomial Features
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+The polynomial features transformer maps a vector into the polynomial feature space of degree $d$.
+The dimension of the input vector determines the number of polynomial factors whose values are the respective vector entries.
+Given a vector $(x, y, z, \ldots)^T$ the resulting feature vector looks like:
+
+$$\left(x, y, z, x^2, xy, y^2, yz, z^2, x^3, x^2y, x^2z, xy^2, xyz, xz^2, y^3, \ldots\right)^T$$
+
+Flink's implementation orders the polynomials in decreasing order of their degree.
+
+Given the vector $\left(3,2\right)^T$, the polynomial features vector of degree 3 would look like
+
+ $$\left(3^3, 3^2\cdot2, 3\cdot2^2, 2^3, 3^2, 3\cdot2, 2^2, 3, 2\right)^T$$
+
+This transformer can be prepended to all `Transformer` and `Predictor` implementations which expect an input of type `LabeledVector` or any sub-type of `Vector`.
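+
+To illustrate the ordering, here is a small self-contained sketch (plain Scala, not the
+actual FlinkML implementation) that computes the polynomial features of a vector in exactly
+this decreasing-degree order:
+
+{% highlight scala %}
+// Computes all monomials up to the given degree, highest degree first.
+def polynomialFeatures(v: Seq[Double], degree: Int): Seq[Double] = {
+  def monomials(elems: Seq[Double], d: Int): Seq[Double] =
+    if (d == 1) elems
+    else elems.zipWithIndex.flatMap { case (x, i) =>
+      monomials(elems.drop(i), d - 1).map(x * _)
+    }
+  (degree to 1 by -1).flatMap(d => monomials(v, d))
+}
+
+polynomialFeatures(Seq(3.0, 2.0), 3)
+// => Seq(27.0, 18.0, 12.0, 8.0, 9.0, 6.0, 4.0, 3.0, 2.0)
+{% endhighlight %}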
+
+## Operations
+
+`PolynomialFeatures` is a `Transformer`.
+As such, it supports the `fit` and `transform` operations.
+
+### Fit
+
+PolynomialFeatures is not trained on data and, thus, supports all types of input data.
+
+### Transform
+
+PolynomialFeatures transforms all subtypes of `Vector` and `LabeledVector` into their respective types:
+
+* `transform[T <: Vector]: DataSet[T] => DataSet[T]`
+* `transform: DataSet[LabeledVector] => DataSet[LabeledVector]`
+
+## Parameters
+
+The polynomial features transformer can be controlled by the following parameters:
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 20%">Parameters</th>
+        <th class="text-center">Description</th>
+      </tr>
+    </thead>
+
+    <tbody>
+      <tr>
+        <td><strong>Degree</strong></td>
+        <td>
+          <p>
+            The maximum polynomial degree.
+            (Default value: <strong>10</strong>)
+          </p>
+        </td>
+      </tr>
+    </tbody>
+  </table>
+
+## Examples
+
+{% highlight scala %}
+// Obtain the training data set
+val trainingDS: DataSet[LabeledVector] = ...
+
+// Setup polynomial feature transformer of degree 3
+val polyFeatures = PolynomialFeatures()
+  .setDegree(3)
+
+// Setup the multiple linear regression learner
+val mlr = MultipleLinearRegression()
+
+// Control the learner via the parameter map
+val parameters = ParameterMap()
+  .add(MultipleLinearRegression.Iterations, 20)
+  .add(MultipleLinearRegression.Stepsize, 0.5)
+
+// Create pipeline PolynomialFeatures -> MultipleLinearRegression
+val pipeline = polyFeatures.chainPredictor(mlr)
+
+// train the model
+pipeline.fit(trainingDS)
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/quickstart.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/quickstart.md b/docs/dev/libs/ml/quickstart.md
new file mode 100644
index 0000000..26b9275
--- /dev/null
+++ b/docs/dev/libs/ml/quickstart.md
@@ -0,0 +1,243 @@
+---
+mathjax: include
+title: Quickstart Guide
+nav-title: Quickstart
+nav-parent_id: ml
+nav-pos: 0
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Introduction
+
+FlinkML is designed to make learning from your data a straightforward process, abstracting away
+the complexities that usually come with big data learning tasks. In this
+quick-start guide we will show just how easy it is to solve a simple supervised learning problem
+using FlinkML. But first some basics; feel free to skip the next few lines if you're already
+familiar with Machine Learning (ML).
+
+As defined by Murphy [[1]](#murphy), ML deals with detecting patterns in data, and using those
+learned patterns to make predictions about the future. We can categorize most ML algorithms into
+two major categories: Supervised and Unsupervised Learning.
+
+* **Supervised Learning** deals with learning a function (mapping) from a set of inputs
+(features) to a set of outputs. The learning is done using a *training set* of (input,
+output) pairs that we use to approximate the mapping function. Supervised learning problems are
+further divided into classification and regression problems. In classification problems we try to
+predict the *class* that an example belongs to, for example whether a user is going to click on
+an ad or not. Regression problems, on the other hand, are about predicting (real) numerical
+values, often called the dependent variable, for example what the temperature will be tomorrow.
+
+* **Unsupervised Learning** deals with discovering patterns and regularities in the data. An example
+of this would be *clustering*, where we try to discover groupings of the data from the
+descriptive features. Unsupervised learning can also be used for feature selection, for example
+through [principal components analysis](https://en.wikipedia.org/wiki/Principal_component_analysis).
+
+## Linking with FlinkML
+
+In order to use FlinkML in your project, first you have to
+[set up a Flink program]({{ site.baseurl }}/dev/api_concepts.html#linking-with-flink).
+Next, you have to add the FlinkML dependency to the `pom.xml` of your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-ml{{ site.scala_version_suffix }}</artifactId>
+  <version>{{ site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+## Loading data
+
+To load data to be used with FlinkML we can use the ETL capabilities of Flink, or specialized
+functions for formatted data, such as the LibSVM format. For supervised learning problems it is
+common to use the `LabeledVector` class to represent the `(label, features)` examples. A `LabeledVector`
+object will have a FlinkML `Vector` member representing the features of the example and a `Double`
+member which represents the label, which could be the class in a classification problem, or the dependent
+variable for a regression problem.
+
+As an example, we can use Haberman's Survival Data Set, which you can
+[download from the UCI ML repository](http://archive.ics.uci.edu/ml/machine-learning-databases/haberman/haberman.data).
+This dataset *"contains cases from a study conducted on the survival of patients who had undergone
+surgery for breast cancer"*. The data comes in a comma-separated file, where the first 3 columns
+are the features and the 4th (last) column is the class label, indicating whether the patient
+survived 5 years or longer (label 1) or died within 5 years (label 2). You can check the [UCI
+page](https://archive.ics.uci.edu/ml/datasets/Haberman%27s+Survival) for more information on the data.
+
+We can load the data as a `DataSet[String]` first:
+
+{% highlight scala %}
+
+import org.apache.flink.api.scala.ExecutionEnvironment
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val survival = env.readCsvFile[(String, String, String, String)]("/path/to/haberman.data")
+
+{% endhighlight %}
+
+We can now transform the data into a `DataSet[LabeledVector]`. This will allow us to use the
+dataset with the FlinkML classification algorithms. We know that the 4th element of the dataset
+is the class label, and the rest are features, so we can build `LabeledVector` elements like this:
+
+{% highlight scala %}
+
+import org.apache.flink.ml.common.LabeledVector
+import org.apache.flink.ml.math.DenseVector
+
+val survivalLV = survival
+  .map{tuple =>
+    val list = tuple.productIterator.toList
+    val numList = list.map(_.asInstanceOf[String].toDouble)
+    LabeledVector(numList(3), DenseVector(numList.take(3).toArray))
+  }
+
+{% endhighlight %}
+
+We can then use this data to train a learner. We will however use another dataset to exemplify
+building a learner; that will allow us to show how we can import other dataset formats.
+
+**LibSVM files**
+
+A common format for ML datasets is the LibSVM format, and a number of datasets using that format can be
+found [in the LibSVM datasets website](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/). FlinkML provides utilities for loading
+datasets using the LibSVM format through the `readLibSVM` function available through the `MLUtils`
+object.
+You can also save datasets in the LibSVM format using the `writeLibSVM` function.
+Let's import the svmguide1 dataset. You can download the
+[training set here](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/svmguide1)
+and the [test set here](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/svmguide1.t).
+This is an astroparticle binary classification dataset, used by Hsu et al. [[3]](#hsu) in their
+practical Support Vector Machine (SVM) guide. It contains 4 numerical features, and the class label.
+
+We can then simply import the dataset using:
+
+{% highlight scala %}
+
+import org.apache.flink.ml.MLUtils
+
+val astroTrain: DataSet[LabeledVector] = MLUtils.readLibSVM("/path/to/svmguide1")
+val astroTest: DataSet[LabeledVector] = MLUtils.readLibSVM("/path/to/svmguide1.t")
+
+{% endhighlight %}
+
+This gives us two `DataSet[LabeledVector]` objects that we will use in the following section to
+create a classifier.
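+
+Conversely, a `DataSet[LabeledVector]` can be written back out in LibSVM format with the
+`writeLibSVM` function mentioned above; a minimal sketch, with a hypothetical output path:
+
+{% highlight scala %}
+// Write the training set back to disk in LibSVM format (hypothetical path)
+MLUtils.writeLibSVM("/path/to/output/svmguide1.out", astroTrain)
+{% endhighlight %}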
+
+## Classification
+
+Once we have imported the dataset we can train a `Predictor` such as a linear SVM classifier.
+We can set a number of parameters for the classifier. Here we set the `Blocks` parameter,
+which determines the number of blocks the input is split into for the underlying CoCoA
+algorithm [[2]](#jaggi). The regularization parameter determines the amount of $l_2$
+regularization applied, which is used to avoid overfitting. The step size determines the
+contribution of the weight vector updates to the next weight vector value; this parameter sets
+the initial step size.
+
+{% highlight scala %}
+
+import org.apache.flink.ml.classification.SVM
+
+val svm = SVM()
+  .setBlocks(env.getParallelism)
+  .setIterations(100)
+  .setRegularization(0.001)
+  .setStepsize(0.1)
+  .setSeed(42)
+
+svm.fit(astroTrain)
+
+{% endhighlight %}
+
+We can now make predictions on the test set.
+
+{% highlight scala %}
+
+val predictionPairs = svm.predict(astroTest)
+
+{% endhighlight %}
+
+Next we will see how we can pre-process our data, and use the ML pipelines capabilities of FlinkML.
+
+## Data pre-processing and pipelines
+
+A pre-processing step that is often encouraged [[3]](#hsu) when using SVM classification is scaling
+the input features to the [0, 1] range, in order to avoid features with extreme values
+dominating the rest.
+FlinkML has a number of `Transformers` such as `MinMaxScaler` that are used to pre-process data,
+and a key feature is the ability to chain `Transformers` and `Predictors` together. This allows
+us to run the same pipeline of transformations and make predictions on the train and test data in
+a straight-forward and type-safe manner. You can read more on the pipeline system of FlinkML
+[in the pipelines documentation](pipelines.html).
+
+Let us first create a normalizing transformer for the features in our dataset, and chain it to a
+new SVM classifier.
+
+{% highlight scala %}
+
+import org.apache.flink.ml.preprocessing.MinMaxScaler
+
+val scaler = MinMaxScaler()
+
+val scaledSVM = scaler.chainPredictor(svm)
+
+{% endhighlight %}
+
+We can now use our newly created pipeline to make predictions on the test set.
+First we call `fit` again, to train the scaler and the SVM classifier.
+The data of the test set will then be automatically scaled before being passed on to the SVM to
+make predictions.
+
+{% highlight scala %}
+
+scaledSVM.fit(astroTrain)
+
+val predictionPairsScaled: DataSet[(Double, Double)] = scaledSVM.predict(astroTest)
+
+{% endhighlight %}
+
+The scaled inputs should give us better prediction performance.
+The result of the prediction on `LabeledVector`s is a data set of tuples where the first entry denotes the true label value and the second entry is the predicted label value.
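+
+For example, a short sketch that derives the misclassification rate from these tuples
+(assuming, as in this dataset, that the labels are exact class values):
+
+{% highlight scala %}
+import org.apache.flink.api.scala._
+
+val errorRate = predictionPairsScaled
+  .map { pair => (if (pair._1 == pair._2) 0.0 else 1.0, 1L) }
+  .reduce { (a, b) => (a._1 + b._1, a._2 + b._2) }
+  .map { sums => sums._1 / sums._2 }  // fraction of misclassified examples
+
+errorRate.print()
+{% endhighlight %}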
+
+## Where to go from here
+
+This quickstart guide can act as an introduction to the basic concepts of FlinkML, but there's a lot
+more you can do.
+We recommend going through the [FlinkML documentation]({{ site.baseurl }}/dev/libs/ml/index.html), and trying out the different
+algorithms.
+A very good way to get started is to play around with interesting datasets from the UCI ML
+repository and the LibSVM datasets.
+Tackling an interesting problem from a website like [Kaggle](https://www.kaggle.com) or
+[DrivenData](http://www.drivendata.org/) is also a great way to learn by competing with other
+data scientists.
+If you would like to contribute some new algorithms take a look at our
+[contribution guide](contribution_guide.html).
+
+**References**
+
+<a name="murphy"></a>[1] Murphy, Kevin P. *Machine learning: a probabilistic perspective.* MIT
+press, 2012.
+
+<a name="jaggi"></a>[2] Jaggi, Martin, et al. *Communication-efficient distributed dual
+coordinate ascent.* Advances in Neural Information Processing Systems. 2014.
+
+<a name="hsu"></a>[3] Hsu, Chih-Wei, Chih-Chung Chang, and Chih-Jen Lin.
+ *A practical guide to support vector classification.* 2003.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/standard_scaler.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/standard_scaler.md b/docs/dev/libs/ml/standard_scaler.md
new file mode 100644
index 0000000..5104d3c
--- /dev/null
+++ b/docs/dev/libs/ml/standard_scaler.md
@@ -0,0 +1,113 @@
+---
+mathjax: include
+title: Standard Scaler
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+The standard scaler scales the given data set so that all features will have a user-specified mean and standard deviation.
+In case the user does not provide a specific mean and standard deviation, the standard scaler transforms the features of the input data set to have mean equal to 0 and standard deviation equal to 1.
+Given a set of input data $x_1, x_2, \ldots, x_n$, with mean:
+
+ $$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_{i}$$
+
+ and standard deviation:
+
+ $$\sigma_{x}=\sqrt{ \frac{1}{n} \sum_{i=1}^{n}(x_{i}-\bar{x})^{2}}$$
+
+The scaled data set $z_1, z_2,...,z_n$ will be:
+
+ $$z_{i} = \textit{std} \cdot \frac{x_{i} - \bar{x}}{\sigma_{x}} + \textit{mean}$$
+
+where $\textit{std}$ and $\textit{mean}$ are the user-specified values for the standard deviation and mean.
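+
+As a quick check of the formula, here is a plain-Scala sketch (not the FlinkML
+implementation) applying it to a small sample:
+
+{% highlight scala %}
+// z_i = std * (x_i - xBar) / sigma + mean
+def scale(xs: Seq[Double], mean: Double, std: Double): Seq[Double] = {
+  val n = xs.size
+  val xBar = xs.sum / n
+  val sigma = math.sqrt(xs.map(x => (x - xBar) * (x - xBar)).sum / n)
+  xs.map(x => std * (x - xBar) / sigma + mean)
+}
+
+scale(Seq(1.0, 2.0, 3.0), mean = 0.0, std = 1.0)
+// => approximately Seq(-1.2247, 0.0, 1.2247)
+{% endhighlight %}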
+
+## Operations
+
+`StandardScaler` is a `Transformer`.
+As such, it supports the `fit` and `transform` operations.
+
+### Fit
+
+StandardScaler is trained on all subtypes of `Vector` or `LabeledVector`:
+
+* `fit[T <: Vector]: DataSet[T] => Unit`
+* `fit: DataSet[LabeledVector] => Unit`
+
+### Transform
+
+StandardScaler transforms all subtypes of `Vector` or `LabeledVector` into the respective type:
+
+* `transform[T <: Vector]: DataSet[T] => DataSet[T]`
+* `transform: DataSet[LabeledVector] => DataSet[LabeledVector]`
+
+## Parameters
+
+The standard scaler implementation can be controlled by the following two parameters:
+
+ <table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Parameters</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>Mean</strong></td>
+      <td>
+        <p>
+          The mean of the scaled data set. (Default value: <strong>0.0</strong>)
+        </p>
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Std</strong></td>
+      <td>
+        <p>
+          The standard deviation of the scaled data set. (Default value: <strong>1.0</strong>)
+        </p>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+## Examples
+
+{% highlight scala %}
+// Create standard scaler transformer
+val scaler = StandardScaler()
+  .setMean(10.0)
+  .setStd(2.0)
+
+// Obtain data set to be scaled
+val dataSet: DataSet[Vector] = ...
+
+// Learn the mean and standard deviation of the training data
+scaler.fit(dataSet)
+
+// Scale the provided data set to have mean=10.0 and std=2.0
+val scaledDS = scaler.transform(dataSet)
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/svm.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/svm.md b/docs/dev/libs/ml/svm.md
new file mode 100644
index 0000000..34fa1ec
--- /dev/null
+++ b/docs/dev/libs/ml/svm.md
@@ -0,0 +1,220 @@
+---
+mathjax: include
+title: SVM using CoCoA
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+Implements a soft-margin SVM using the communication-efficient distributed dual coordinate
+ascent algorithm with a hinge-loss function.
+The algorithm solves the following minimization problem:
+
+$$\min_{\mathbf{w} \in \mathbb{R}^d} \frac{\lambda}{2} \left\lVert \mathbf{w} \right\rVert^2 + \frac{1}{n} \sum_{i=1}^n l_{i}\left(\mathbf{w}^T\mathbf{x}_i\right)$$
+
+with $\mathbf{w}$ being the weight vector, $\lambda$ being the regularization constant,
+$$\mathbf{x}_i \in \mathbb{R}^d$$ being the data points and $$l_{i}$$ being the convex loss
+functions, which can also depend on the labels $$y_{i} \in \mathbb{R}$$.
+In the current implementation the regularizer is the $\ell_2$-norm and the loss functions are the hinge-loss functions:
+
+  $$l_{i} = \max\left(0, 1 - y_{i} \mathbf{w}^T\mathbf{x}_i \right)$$
+
+With these choices, the problem definition is equivalent to an SVM with soft margin.
+Thus, the algorithm allows us to train a soft-margin SVM.
+
+The minimization problem is solved by applying stochastic dual coordinate ascent (SDCA).
+In order to make the algorithm efficient in a distributed setting, the CoCoA algorithm calculates
+several iterations of SDCA locally on a data block before merging the local updates into a
+valid global state.
+This state is redistributed to the different data partitions where the next round of local SDCA
+iterations is then executed.
+The number of outer iterations and local SDCA iterations control the overall network costs,
+because network communication is required only once per outer iteration.
+The local SDCA iterations are embarrassingly parallel once the individual data partitions have been
+distributed across the cluster.
+
+The implementation of this algorithm is based on the work of
+[Jaggi et al.](http://arxiv.org/abs/1409.1458).
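+
+The communication pattern can be summarized in a structural sketch (plain Scala; the local
+SDCA solver and the merge step are left abstract, so this is not the FlinkML implementation):
+
+{% highlight scala %}
+def cocoa[W](
+    blocks: Seq[Seq[(Array[Double], Double)]],  // partitioned (features, label) pairs
+    init: W,                                    // initial global state
+    outerIterations: Int)(
+    localSDCA: (Seq[(Array[Double], Double)], W) => W,  // local iterations on a block
+    merge: Seq[W] => W): W = {                          // combine local updates
+  var w = init
+  for (_ <- 0 until outerIterations) {
+    // Local work is embarrassingly parallel in Flink; sequential in this sketch.
+    val localStates = blocks.map(block => localSDCA(block, w))
+    // The only network communication happens at this merge point.
+    w = merge(localStates)
+  }
+  w
+}
+{% endhighlight %}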
+
+## Operations
+
+`SVM` is a `Predictor`.
+As such, it supports the `fit` and `predict` operations.
+
+### Fit
+
+SVM is trained given a set of `LabeledVector`:
+
+* `fit: DataSet[LabeledVector] => Unit`
+
+### Predict
+
+For all subtypes of FlinkML's `Vector`, SVM predicts the corresponding class label:
+
+* `predict[T <: Vector]: DataSet[T] => DataSet[(T, Double)]`, where the `(T, Double)` tuple
+  corresponds to (original_features, label)
+
+If we call `predict` with a `DataSet[(Vector, Double)]`, we make a prediction on the class label
+for each example, and return a `DataSet[(Double, Double)]`. In each tuple the first element
+is the true value, as provided by the input `DataSet[(Vector, Double)]`, and the second element
+is the predicted value. You can then use these `(truth, prediction)` tuples to evaluate
+the algorithm's performance, as sketched after the signature below.
+
+* `predict: DataSet[(Vector, Double)] => DataSet[(Double, Double)]`
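+
+A short evaluation sketch (the `svm` learner and the `testData` value are illustrative
+placeholders):
+
+{% highlight scala %}
+val testData: DataSet[(Vector, Double)] = ...
+
+// (truth, prediction) tuples, as described above
+val evaluation: DataSet[(Double, Double)] = svm.predict(testData)
+
+// Aggregate the tuples into a single accuracy figure
+val accuracy = evaluation
+  .map { pair => (if (pair._1 == pair._2) 1.0 else 0.0, 1L) }
+  .reduce { (a, b) => (a._1 + b._1, a._2 + b._2) }
+  .map { c => c._1 / c._2 }
+{% endhighlight %}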
+
+## Parameters
+
+The SVM implementation can be controlled by the following parameters:
+
+<table class="table table-bordered">
+<thead>
+  <tr>
+    <th class="text-left" style="width: 20%">Parameters</th>
+    <th class="text-center">Description</th>
+  </tr>
+</thead>
+
+<tbody>
+  <tr>
+    <td><strong>Blocks</strong></td>
+    <td>
+      <p>
+        Sets the number of blocks into which the input data will be split.
+        On each block the local stochastic dual coordinate ascent method is executed.
+        This number should be set at least to the degree of parallelism.
+        If no value is specified, then the parallelism of the input DataSet is used as the number of blocks.
+        (Default value: <strong>None</strong>)
+      </p>
+    </td>
+  </tr>
+  <tr>
+    <td><strong>Iterations</strong></td>
+    <td>
+      <p>
+        Defines the maximum number of iterations of the outer loop method.
+        In other words, it defines how often the SDCA method is applied to the blocked data.
+        After each iteration, the locally computed weight vector updates have to be reduced to update the global weight vector value.
+        The new weight vector is broadcast to all SDCA tasks at the beginning of each iteration.
+        (Default value: <strong>10</strong>)
+      </p>
+    </td>
+  </tr>
+  <tr>
+    <td><strong>LocalIterations</strong></td>
+    <td>
+      <p>
+        Defines the maximum number of SDCA iterations.
+        In other words, it defines how many data points are drawn from each local data block to calculate the stochastic dual coordinate ascent.
+        (Default value: <strong>10</strong>)
+      </p>
+    </td>
+  </tr>
+  <tr>
+    <td><strong>Regularization</strong></td>
+    <td>
+      <p>
+        Defines the regularization constant of the SVM algorithm.
+        The higher the value, the smaller the 2-norm of the weight vector will be.
+        In case of an SVM with hinge loss, this means that the SVM margin will be wider even though it might contain some false classifications.
+        (Default value: <strong>1.0</strong>)
+      </p>
+    </td>
+  </tr>
+  <tr>
+    <td><strong>Stepsize</strong></td>
+    <td>
+      <p>
+        Defines the initial step size for the updates of the weight vector.
+        The larger the step size is, the larger will be the contribution of the weight vector updates to the next weight vector value.
+        The effective scaling of the updates is $\frac{stepsize}{blocks}$.
+        This value has to be tuned in case the algorithm becomes unstable.
+        (Default value: <strong>1.0</strong>)
+      </p>
+    </td>
+  </tr>
+  <tr>
+    <td><strong>ThresholdValue</strong></td>
+    <td>
+      <p>
+        Defines the limiting value for the decision function above which examples are labeled as
+        positive (+1.0). Examples with a decision function value below this value are classified
+        as negative (-1.0). In order to get the raw decision function values, enable them via
+        the OutputDecisionFunction parameter. (Default value: <strong>0.0</strong>)
+      </p>
+    </td>
+  </tr>
+  <tr>
+    <td><strong>OutputDecisionFunction</strong></td>
+    <td>
+      <p>
+        Determines whether the predict and evaluate functions of the SVM should return the distance
+        to the separating hyperplane, or binary class labels. Setting this to true will
+        return the raw distance to the hyperplane for each example. Setting it to false will
+        return the binary class label (+1.0, -1.0) (Default value: <strong>false</strong>)
+      </p>
+    </td>
+  </tr>
+  <tr>
+  <td><strong>Seed</strong></td>
+  <td>
+    <p>
+      Defines the seed to initialize the random number generator.
+      The seed directly controls which data points are chosen for the SDCA method.
+      (Default value: <strong>Random Long Integer</strong>)
+    </p>
+  </td>
+</tr>
+</tbody>
+</table>
+
+## Examples
+
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.ml.math.Vector
+import org.apache.flink.ml.common.LabeledVector
+import org.apache.flink.ml.classification.SVM
+import org.apache.flink.ml.RichExecutionEnvironment
+
+val pathToTrainingFile: String = ???
+val pathToTestingFile: String = ???
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// Read the training data set, from a LibSVM formatted file
+val trainingDS: DataSet[LabeledVector] = env.readLibSVM(pathToTrainingFile)
+
+// Create the SVM learner
+val svm = SVM()
+  .setBlocks(10)
+
+// Learn the SVM model
+svm.fit(trainingDS)
+
+// Read the testing data set
+val testingDS: DataSet[Vector] = env.readLibSVM(pathToTestingFile).map(_.vector)
+
+// Calculate the predictions for the testing data set
+val predictionDS: DataSet[(Vector, Double)] = svm.predict(testingDS)
+
+{% endhighlight %}


[39/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fig/stream_watermark_out_of_order.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fig/stream_watermark_out_of_order.svg b/docs/apis/streaming/fig/stream_watermark_out_of_order.svg
deleted file mode 100644
index e8f80a0..0000000
--- a/docs/apis/streaming/fig/stream_watermark_out_of_order.svg
+++ /dev/null
@@ -1,314 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="534.41998"
-   height="157.25"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(-276.19474,-304.69235)"
-     id="layer1">
-    <g
-       transform="translate(234.9412,-56.421315)"
-       id="g3282">
-      <path
-         d="m 81.029039,395.76901 0,40.78642 454.276431,0 0,-40.78642 -454.276431,0 z"
-         id="path3284"
-         style="fill:#f2f2f2;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 41.967829,408.78316 15.714496,0 0,-7.85725 15.695743,15.69574 -15.695743,15.7145 0,-7.85725 -15.714496,0 z"
-         id="path3286"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 542.9752,408.78316 15.69574,0 0,-7.85725 15.7145,15.69574 -15.7145,15.7145 0,-7.85725 -15.69574,0 z"
-         id="path3288"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="232.96123"
-         y="372.95453"
-         id="text3290"
-         xml:space="preserve"
-         style="font-size:13.80175209px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Stream </text>
-      <text
-         x="288.46829"
-         y="372.95453"
-         id="text3292"
-         xml:space="preserve"
-         style="font-size:13.80175209px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(out of order)</text>
-      <path
-         d="m 371.54093,395.76901 0,7.50095 -1.87524,0 0,-7.50095 1.87524,0 z m 0,13.12666 0,7.50095 -1.87524,0 0,-7.50095 1.87524,0 z m 0,13.12667 0,7.50095 -1.87524,0 0,-7.50095 1.87524,0 z m 0,13.12667 0,2.66283 -1.87524,0 0,-2.66283 1.87524,0 z"
-         id="path3294"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 503.5952,407.02043 0,20.15881 23.74051,0 0,-20.15881 -23.74051,0 z"
-         id="path3296"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 503.5952,407.02043 23.74051,0 0,20.15881 -23.74051,0 z"
-         id="path3298"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="511.93024"
-         y="421.50052"
-         id="text3300"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">7</text>
-      <path
-         d="m 161.98307,395.76901 0,7.50095 -1.87524,0 0,-7.50095 1.87524,0 z m 0,13.12666 0,7.50095 -1.87524,0 0,-7.50095 1.87524,0 z m 0,13.12667 0,7.50095 -1.87524,0 0,-7.50095 1.87524,0 z m 0,13.12667 0,2.66283 -1.87524,0 0,-2.66283 1.87524,0 z"
-         id="path3302"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="356.53876"
-         y="455.68298"
-         id="text3304"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">W(11)</text>
-      <text
-         x="145.13034"
-         y="455.68298"
-         id="text3306"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">W(17)</text>
-      <path
-         d="m 476.40424,407.02043 0,20.15881 23.5905,0 0,-20.15881 -23.5905,0 z"
-         id="path3308"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 476.40424,407.02043 23.5905,0 0,20.15881 -23.5905,0 z"
-         id="path3310"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="481.08948"
-         y="421.50052"
-         id="text3312"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">11</text>
-      <path
-         d="m 434.51142,407.02043 0,20.15881 23.75927,0 0,-20.15881 -23.75927,0 z"
-         id="path3314"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 434.51142,407.02043 23.75927,0 0,20.15881 -23.75927,0 z"
-         id="path3316"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="439.34079"
-         y="421.50052"
-         id="text3318"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">15</text>
-      <path
-         d="m 409.19571,407.02043 0,20.15881 23.60925,0 0,-20.15881 -23.60925,0 z"
-         id="path3320"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 409.19571,407.02043 23.60925,0 0,20.15881 -23.60925,0 z"
-         id="path3322"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="417.47748"
-         y="421.50052"
-         id="text3324"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">9</text>
-      <path
-         d="m 374.50381,407.02043 0,20.15881 23.60924,0 0,-20.15881 -23.60924,0 z"
-         id="path3326"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 374.50381,407.02043 23.60924,0 0,20.15881 -23.60924,0 z"
-         id="path3328"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="379.23441"
-         y="421.50052"
-         id="text3330"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">12</text>
-      <path
-         d="m 340.13069,407.02043 0,20.15881 23.5905,0 0,-20.15881 -23.5905,0 z"
-         id="path3332"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 340.13069,407.02043 23.5905,0 0,20.15881 -23.5905,0 z"
-         id="path3334"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="344.81403"
-         y="421.50052"
-         id="text3336"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
-      <path
-         d="m 306.99523,407.02043 0,20.15881 23.60925,0 0,-20.15881 -23.60925,0 z"
-         id="path3338"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 306.99523,407.02043 23.60925,0 0,20.15881 -23.60925,0 z"
-         id="path3340"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="311.71414"
-         y="421.50052"
-         id="text3342"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">17</text>
-      <path
-         d="m 279.65426,407.02043 0,20.15881 23.75927,0 0,-20.15881 -23.75927,0 z"
-         id="path3344"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 279.65426,407.02043 23.75927,0 0,20.15881 -23.75927,0 z"
-         id="path3346"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="284.41156"
-         y="421.50052"
-         id="text3348"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">12</text>
-      <path
-         d="m 253.08214,407.02043 0,20.15881 23.75926,0 0,-20.15881 -23.75926,0 z"
-         id="path3350"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 253.08214,407.02043 23.75926,0 0,20.15881 -23.75926,0 z"
-         id="path3352"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="257.91165"
-         y="421.50052"
-         id="text3354"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">22</text>
-      <path
-         d="m 178.86021,407.02043 0,20.15881 23.75927,0 0,-20.15881 -23.75927,0 z"
-         id="path3356"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 178.86021,407.02043 23.75927,0 0,20.15881 -23.75927,0 z"
-         id="path3358"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="183.66698"
-         y="421.50052"
-         id="text3360"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">20</text>
-      <path
-         d="m 209.18281,407.02043 0,20.15881 23.5905,0 0,-20.15881 -23.5905,0 z"
-         id="path3362"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 209.18281,407.02043 23.5905,0 0,20.15881 -23.5905,0 z"
-         id="path3364"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="213.86813"
-         y="421.50052"
-         id="text3366"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">17</text>
-      <path
-         d="m 129.78523,407.02043 0,20.15881 23.75927,0 0,-20.15881 -23.75927,0 z"
-         id="path3368"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 129.78523,407.02043 23.75927,0 0,20.15881 -23.75927,0 z"
-         id="path3370"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="134.54695"
-         y="421.50052"
-         id="text3372"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">19</text>
-      <path
-         d="m 93.386858,407.02043 0,20.15881 23.590492,0 0,-20.15881 -23.590492,0 z"
-         id="path3374"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 93.386858,407.02043 23.590492,0 0,20.15881 -23.590492,0 z"
-         id="path3376"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="98.096954"
-         y="421.50052"
-         id="text3378"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">21</text>
-      <text
-         x="207.48544"
-         y="493.10297"
-         id="text3380"
-         xml:space="preserve"
-         style="font-size:12.451581px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Watermark</text>
-      <path
-         d="m 209.22032,476.64803 -26.92842,-13.59548 0.56257,-1.10639 26.92842,13.57672 -0.56257,1.12515 z m -27.22846,-10.2388 -5.00689,-6.73211 8.38232,0.0375 -3.37543,6.6946 z"
-         id="path3382"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 280.91067,479.77967 65.22078,-18.28357 -0.33754,-1.20015 -65.22078,18.28357 0.33754,1.20015 z m 64.86449,-14.92689 6.20704,-5.64447 -8.2323,-1.5752 2.02526,7.21967 z"
-         id="path3384"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="497.42996"
-         y="481.36301"
-         id="text3386"
-         xml:space="preserve"
-         style="font-size:12.451581px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event</text>
-      <text
-         x="396.06403"
-         y="515.14447"
-         id="text3388"
-         xml:space="preserve"
-         style="font-size:12.451581px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event timestamp</text>
-      <path
-         d="m 516.90939,464.79652 -1.72522,-31.03519 1.25641,-0.075 1.72522,31.05394 -1.25641,0.0563 z m -4.76311,-29.61001 3.31917,-7.70723 4.16303,7.29468 -7.4822,0.41255 z"
-         id="path3390"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 450.01964,498.56956 56.76346,-72.94676 -0.99387,-0.76885 -56.76346,72.94676 0.99387,0.76885 z m 58.45118,-70.04014 1.65021,-8.2323 -7.55721,3.61921 5.907,4.61309 z"
-         id="path3392"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-    </g>
-  </g>
-</svg>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fig/times_clocks.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fig/times_clocks.svg b/docs/apis/streaming/fig/times_clocks.svg
deleted file mode 100644
index 2dede77..0000000
--- a/docs/apis/streaming/fig/times_clocks.svg
+++ /dev/null
@@ -1,368 +0,0 @@
    [elided: the 368-line SVG body of the deleted times_clocks.svg figure;
     the recoverable text labels an "Event Producer" feeding a "Message
     Queue" with "partition 1"/"partition 2", a "Flink Data Source", and a
     "Flink Window Operator", with clocks marking "Event Time", "Ingestion
     Time", and "Window Processing Time"]


[72/89] [abbrv] flink git commit: [FLINK-4346] [rpc] Add new RPC abstraction

Posted by se...@apache.org.
[FLINK-4346] [rpc] Add new RPC abstraction


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/b273afad
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/b273afad
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/b273afad

Branch: refs/heads/flip-6
Commit: b273afad3423b9ca403403c0fa92f1fcf3ec2cf9
Parents: 4810910
Author: Till Rohrmann <tr...@apache.org>
Authored: Wed Aug 3 19:31:34 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:02 2016 +0200

----------------------------------------------------------------------
 flink-runtime/pom.xml                           |   5 +
 .../flink/runtime/rpc/MainThreadExecutor.java   |  54 +++
 .../apache/flink/runtime/rpc/RpcEndpoint.java   | 182 +++++++++++
 .../apache/flink/runtime/rpc/RpcGateway.java    |  25 ++
 .../org/apache/flink/runtime/rpc/RpcMethod.java |  35 ++
 .../apache/flink/runtime/rpc/RpcService.java    |  74 +++++
 .../apache/flink/runtime/rpc/RpcTimeout.java    |  34 ++
 .../flink/runtime/rpc/akka/AkkaGateway.java     |  29 ++
 .../flink/runtime/rpc/akka/AkkaRpcService.java  | 145 ++++++++
 .../flink/runtime/rpc/akka/BaseAkkaActor.java   |  50 +++
 .../flink/runtime/rpc/akka/BaseAkkaGateway.java |  41 +++
 .../rpc/akka/jobmaster/JobMasterAkkaActor.java  |  58 ++++
 .../akka/jobmaster/JobMasterAkkaGateway.java    |  57 ++++
 .../rpc/akka/messages/CallableMessage.java      |  33 ++
 .../runtime/rpc/akka/messages/CancelTask.java   |  36 ++
 .../runtime/rpc/akka/messages/ExecuteTask.java  |  36 ++
 .../messages/RegisterAtResourceManager.java     |  36 ++
 .../rpc/akka/messages/RegisterJobMaster.java    |  36 ++
 .../runtime/rpc/akka/messages/RequestSlot.java  |  37 +++
 .../rpc/akka/messages/RunnableMessage.java      |  31 ++
 .../akka/messages/UpdateTaskExecutionState.java |  37 +++
 .../ResourceManagerAkkaActor.java               |  65 ++++
 .../ResourceManagerAkkaGateway.java             |  67 ++++
 .../taskexecutor/TaskExecutorAkkaActor.java     |  77 +++++
 .../taskexecutor/TaskExecutorAkkaGateway.java   |  59 ++++
 .../flink/runtime/rpc/jobmaster/JobMaster.java  | 249 ++++++++++++++
 .../runtime/rpc/jobmaster/JobMasterGateway.java |  45 +++
 .../resourcemanager/JobMasterRegistration.java  |  35 ++
 .../resourcemanager/RegistrationResponse.java   |  43 +++
 .../rpc/resourcemanager/ResourceManager.java    |  94 ++++++
 .../resourcemanager/ResourceManagerGateway.java |  58 ++++
 .../rpc/resourcemanager/SlotAssignment.java     |  25 ++
 .../rpc/resourcemanager/SlotRequest.java        |  25 ++
 .../runtime/rpc/taskexecutor/TaskExecutor.java  |  82 +++++
 .../rpc/taskexecutor/TaskExecutorGateway.java   |  48 +++
 .../resourcemanager/ResourceManagerITCase.java  |   1 -
 .../flink/runtime/rpc/RpcCompletenessTest.java  | 327 +++++++++++++++++++
 .../runtime/rpc/akka/AkkaRpcServiceTest.java    |  81 +++++
 .../rpc/taskexecutor/TaskExecutorTest.java      |  92 ++++++
 .../runtime/util/DirectExecutorService.java     | 234 +++++++++++++
 flink-tests/pom.xml                             |   1 -
 pom.xml                                         |   7 +
 42 files changed, 2784 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/pom.xml
----------------------------------------------------------------------
diff --git a/flink-runtime/pom.xml b/flink-runtime/pom.xml
index 5fea8fb..09c6fd0 100644
--- a/flink-runtime/pom.xml
+++ b/flink-runtime/pom.xml
@@ -189,6 +189,11 @@ under the License.
 			<artifactId>akka-testkit_${scala.binary.version}</artifactId>
 		</dependency>
 
+		<dependency>
+			<groupId>org.reflections</groupId>
+			<artifactId>reflections</artifactId>
+		</dependency>
+
 	</dependencies>
 
 	<build>

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
new file mode 100644
index 0000000..e06711e
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import akka.util.Timeout;
+import scala.concurrent.Future;
+
+import java.util.concurrent.Callable;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Interface to execute {@link Runnable} and {@link Callable} in the main thread of the underlying
+ * rpc server.
+ *
+ * This interface is intended to be implemented by the self gateway in a {@link RpcEndpoint}
+ * implementation, allowing local procedures to be dispatched to the main thread of the
+ * underlying rpc server.
+ */
+public interface MainThreadExecutor {
+	/**
+	 * Execute the runnable in the main thread of the underlying rpc server.
+	 *
+	 * @param runnable Runnable to be executed
+	 */
+	void runAsync(Runnable runnable);
+
+	/**
+	 * Execute the callable in the main thread of the underlying rpc server and return a future for
+	 * the callable result. If the callable does not complete within the given timeout, the returned
+	 * future will be failed with a {@link TimeoutException}.
+	 *
+	 * @param callable Callable to be executed
+	 * @param timeout Timeout for the future to complete
+	 * @param <V> Return value of the callable
+	 * @return Future of the callable result
+	 */
+	<V> Future<V> callAsync(Callable<V> callable, Timeout timeout);
+}
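
As a rough usage sketch of this contract (not part of the commit; the
MainThreadExecutorSketch class, the value 42, and the 10 second timeout are
invented for illustration), a caller holding a MainThreadExecutor might do:

    import akka.util.Timeout;
    import org.apache.flink.runtime.rpc.MainThreadExecutor;
    import scala.concurrent.Future;

    import java.util.concurrent.Callable;
    import java.util.concurrent.TimeUnit;

    class MainThreadExecutorSketch {

        static Future<Integer> demo(final MainThreadExecutor executor) {
            // fire-and-forget: run state-changing code in the main thread
            // of the rpc server, where no locking is required
            executor.runAsync(new Runnable() {
                @Override
                public void run() {
                    // endpoint state may be touched safely here
                }
            });

            // request/response: the returned future is failed with a
            // TimeoutException if the callable has not completed in 10 seconds
            return executor.callAsync(new Callable<Integer>() {
                @Override
                public Integer call() {
                    return 42;
                }
            }, new Timeout(10, TimeUnit.SECONDS));
        }
    }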

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
new file mode 100644
index 0000000..3d8757f
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import akka.util.Timeout;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import scala.concurrent.ExecutionContext;
+import scala.concurrent.Future;
+
+import java.util.concurrent.Callable;
+
+/**
+ * Base class for rpc endpoints. Distributed components which offer remote procedure calls have to
+ * extend the rpc endpoint base class.
+ *
+ * The main idea is that a rpc endpoint is backed by a rpc server which has a single thread
+ * processing the rpc calls. Thus, by executing all state changing operations within the main
+ * thread, we don't have to reason about concurrent accesses. The rpc endpoint provides
+ * {@link #runAsync(Runnable)}, {@link #callAsync(Callable, Timeout)} and
+ * {@link #getMainThreadExecutionContext()} to execute code in the rpc server's main thread.
+ *
+ * @param <C> Rpc gateway counterpart for the implementing rpc endpoint
+ */
+public abstract class RpcEndpoint<C extends RpcGateway> {
+
+	protected final Logger log = LoggerFactory.getLogger(getClass());
+
+	/** Rpc service to be used to start the rpc server and to obtain rpc gateways */
+	private final RpcService rpcService;
+
+	/** Self gateway which can be used to schedule asynchronous calls on this endpoint */
+	private C self;
+
+	/**
+	 * The main thread execution context to be used to execute future callbacks in the main thread
+	 * of the executing rpc server.
+	 *
+	 * IMPORTANT: The main thread context is only available after the rpc server has been started.
+	 */
+	private MainThreadExecutionContext mainThreadExecutionContext;
+
+	public RpcEndpoint(RpcService rpcService) {
+		this.rpcService = rpcService;
+	}
+
+	/**
+	 * Gets the self-gateway, which should be used to run asynchronous rpc calls on this endpoint.
+	 *
+	 * IMPORTANT: Always issue local method calls via the self-gateway if the current thread
+	 * is not the main thread of the underlying rpc server, e.g. from within a future callback.
+	 *
+	 * @return Self gateway
+	 */
+	public C getSelf() {
+		return self;
+	}
+
+	/**
+	 * Execute the runnable in the main thread of the underlying rpc server.
+	 *
+	 * @param runnable Runnable to be executed in the main thread of the underlying rpc server
+	 */
+	public void runAsync(Runnable runnable) {
+		((MainThreadExecutor) self).runAsync(runnable);
+	}
+
+	/**
+	 * Execute the callable in the main thread of the underlying rpc server returning a future for
+	 * the result of the callable. If the callable is not completed within the given timeout, then
+	 * the future will be failed with a {@link java.util.concurrent.TimeoutException}.
+	 *
+	 * @param callable Callable to be executed in the main thread of the underlying rpc server
+	 * @param timeout Timeout for the callable to be completed
+	 * @param <V> Return type of the callable
+	 * @return Future for the result of the callable.
+	 */
+	public <V> Future<V> callAsync(Callable<V> callable, Timeout timeout) {
+		return ((MainThreadExecutor) self).callAsync(callable, timeout);
+	}
+
+	/**
+	 * Gets the main thread execution context. The main thread execution context can be used to
+	 * execute tasks in the main thread of the underlying rpc server.
+	 *
+	 * @return Main thread execution context
+	 */
+	public ExecutionContext getMainThreadExecutionContext() {
+		return mainThreadExecutionContext;
+	}
+
+	/**
+	 * Gets the used rpc service.
+	 *
+	 * @return Rpc service
+	 */
+	public RpcService getRpcService() {
+		return rpcService;
+	}
+
+	/**
+	 * Starts the underlying rpc server via the rpc service and creates the main thread execution
+	 * context. This makes the rpc endpoint effectively reachable from the outside.
+	 *
+	 * Can be overridden to add rpc endpoint specific start up code. Should always call the parent
+	 * start method.
+	 */
+	public void start() {
+		self = rpcService.startServer(this);
+		mainThreadExecutionContext = new MainThreadExecutionContext((MainThreadExecutor) self);
+	}
+
+
+	/**
+	 * Shuts down the underlying rpc server via the rpc service.
+	 *
+	 * Can be overridden to add rpc endpoint specific shut down code. Should always call the parent
+	 * shut down method.
+	 */
+	public void shutDown() {
+		rpcService.stopServer(self);
+	}
+
+	/**
+	 * Gets the address of the underlying rpc server. The address should be fully qualified so that
+	 * a remote system can connect to this rpc server via this address.
+	 *
+	 * @return Fully qualified address of the underlying rpc server
+	 */
+	public String getAddress() {
+		return rpcService.getAddress(self);
+	}
+
+	/**
+	 * Execution context which executes runnables in the main thread context. A reported failure
+	 * will cause the underlying rpc server to shut down.
+	 */
+	private class MainThreadExecutionContext implements ExecutionContext {
+		private final MainThreadExecutor gateway;
+
+		MainThreadExecutionContext(MainThreadExecutor gateway) {
+			this.gateway = gateway;
+		}
+
+		@Override
+		public void execute(Runnable runnable) {
+			gateway.runAsync(runnable);
+		}
+
+		@Override
+		public void reportFailure(final Throwable t) {
+			gateway.runAsync(new Runnable() {
+				@Override
+				public void run() {
+					log.error("Encountered failure in the main thread execution context.", t);
+					shutDown();
+				}
+			});
+		}
+
+		@Override
+		public ExecutionContext prepare() {
+			return this;
+		}
+	}
+}
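
A hedged sketch of how a concrete endpoint is meant to look and how the
future-callback rule above plays out; LookupGateway, LookupEndpoint, and
acceptResult are invented for illustration, not part of this commit:

    import akka.dispatch.OnSuccess;
    import org.apache.flink.runtime.rpc.RpcEndpoint;
    import org.apache.flink.runtime.rpc.RpcGateway;
    import org.apache.flink.runtime.rpc.RpcMethod;
    import org.apache.flink.runtime.rpc.RpcService;
    import scala.concurrent.Future;

    interface LookupGateway extends RpcGateway {
        void acceptResult(String value);
    }

    class LookupEndpoint extends RpcEndpoint<LookupGateway> {

        private String lastResult; // confined to the rpc server's main thread

        LookupEndpoint(RpcService rpcService) {
            super(rpcService);
        }

        @RpcMethod
        public void acceptResult(String value) {
            lastResult = value; // runs in the main thread, no synchronization
        }

        // future callbacks run outside the main thread, so the state change
        // is routed through the self-gateway, as the getSelf() javadoc demands
        void whenLookupCompletes(Future<String> lookup) {
            lookup.onSuccess(new OnSuccess<String>() {
                @Override
                public void onSuccess(String value) {
                    getSelf().acceptResult(value);
                }
            }, getMainThreadExecutionContext());
        }
    }

Calling new LookupEndpoint(rpcService).start() then makes the endpoint
reachable under getAddress().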

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java
new file mode 100644
index 0000000..e3a16b4
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+/**
+ * Base interface which all rpc gateways have to implement.
+ */
+public interface RpcGateway {
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java
new file mode 100644
index 0000000..875e557
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Annotation for rpc methods in a {@link RpcEndpoint} implementation. Every rpc method must have a
+ * respective counterpart in the {@link RpcGateway} implementation for this rpc server. The
+ * RpcCompletenessTest makes sure that the set of rpc methods in a rpc server and the set of
+ * gateway methods in the corresponding gateway implementation are identical.
+ */
+@Target(ElementType.METHOD)
+@Retention(RetentionPolicy.RUNTIME)
+public @interface RpcMethod {
+}
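
For illustration, this is how the pairing looks for the JobMaster added later in this
commit: the endpoint method returns its result directly, while the gateway counterpart
exposes it as a future.

    // In the rpc endpoint (JobMaster):
    @RpcMethod
    public Acknowledge updateTaskExecutionState(TaskExecutionState taskExecutionState) {
        // ... handle the state update ...
        return Acknowledge.get();
    }

    // Matching counterpart in the rpc gateway (JobMasterGateway):
    Future<Acknowledge> updateTaskExecutionState(TaskExecutionState taskExecutionState);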

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
new file mode 100644
index 0000000..90ff7b6
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import scala.concurrent.Future;
+
+/**
+ * Interface for rpc services. An rpc service is used to start and connect to a {@link RpcEndpoint}.
+ * Connecting to a rpc server will return a {@link RpcGateway} which can be used to call remote
+ * procedures.
+ */
+public interface RpcService {
+
+	/**
+	 * Connect to a remote rpc server at the provided address. Returns a rpc gateway which can
+	 * be used to communicate with the rpc server.
+	 *
+	 * @param address Address of the remote rpc server
+	 * @param clazz Class of the rpc gateway to return
+	 * @param <C> Type of the rpc gateway to return
+	 * @return Future containing the rpc gateway
+	 */
+	<C extends RpcGateway> Future<C> connect(String address, Class<C> clazz);
+
+	/**
+	 * Start a rpc server which forwards the remote procedure calls to the provided rpc endpoint.
+	 *
+	 * @param rpcEndpoint Rpc endpoint to dispatch the rpcs to
+	 * @param <S> Type of the rpc endpoint
+	 * @param <C> Type of the self rpc gateway associated with the rpc server
+	 * @return Self gateway to dispatch remote procedure calls to oneself
+	 */
+	<S extends RpcEndpoint, C extends RpcGateway> C startServer(S rpcEndpoint);
+
+	/**
+	 * Stop the underlying rpc server of the provided self gateway.
+	 *
+	 * @param selfGateway Self gateway describing the underlying rpc server
+	 * @param <C> Type of the rpc gateway
+	 */
+	<C extends RpcGateway> void stopServer(C selfGateway);
+
+	/**
+	 * Stop the rpc service, shutting down all started rpc servers.
+	 */
+	void stopService();
+
+	/**
+	 * Get the fully qualified address of the underlying rpc server represented by the self gateway.
+	 * It must be possible to connect from a remote host to the rpc server via the returned fully
+	 * qualified address.
+	 *
+	 * @param selfGateway Self gateway associated with the underlying rpc server
+	 * @param <C> Type of the rpc gateway
+	 * @return Fully qualified address
+	 */
+	<C extends RpcGateway> String getAddress(C selfGateway);
+}
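
A short sketch of connecting to a remote endpoint through this interface; the address
string is a made-up example, not part of this patch:

    Future<JobMasterGateway> gatewayFuture =
        rpcService.connect("akka.tcp://flink@host:port/user/jobmaster", JobMasterGateway.class);
    // once the future completes, the returned gateway can be used to issue
    // remote procedure calls against the job master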

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcTimeout.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcTimeout.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcTimeout.java
new file mode 100644
index 0000000..3d36d47
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcTimeout.java
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Annotation for {@link RpcGateway} methods to specify an additional timeout parameter for the
+ * returned future to be completed. The rest of the provided parameters are passed to the remote rpc
+ * server for the rpc.
+ */
+@Target(ElementType.PARAMETER)
+@Retention(RetentionPolicy.RUNTIME)
+public @interface RpcTimeout {
+}
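
A sketch of how a gateway method might carry the annotation, modeled on
ResourceManagerGateway.registerJobMaster further down; whether that interface actually
uses the annotation is not shown in this excerpt:

    Future<RegistrationResponse> registerJobMaster(
        JobMasterRegistration jobMasterRegistration,
        @RpcTimeout FiniteDuration timeout);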

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
new file mode 100644
index 0000000..a96a600
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.ActorRef;
+
+/**
+ * Interface for Akka-based rpc gateways.
+ */
+public interface AkkaGateway {
+
+	ActorRef getActorRef();
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
new file mode 100644
index 0000000..d55bd13
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.ActorIdentity;
+import akka.actor.ActorRef;
+import akka.actor.ActorSelection;
+import akka.actor.ActorSystem;
+import akka.actor.Identify;
+import akka.actor.PoisonPill;
+import akka.actor.Props;
+import akka.dispatch.Mapper;
+import akka.pattern.AskableActorSelection;
+import akka.util.Timeout;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
+import org.apache.flink.runtime.rpc.jobmaster.JobMasterGateway;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManager;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.akka.jobmaster.JobMasterAkkaActor;
+import org.apache.flink.runtime.rpc.akka.jobmaster.JobMasterAkkaGateway;
+import org.apache.flink.runtime.rpc.akka.resourcemanager.ResourceManagerAkkaActor;
+import org.apache.flink.runtime.rpc.akka.resourcemanager.ResourceManagerAkkaGateway;
+import org.apache.flink.runtime.rpc.akka.taskexecutor.TaskExecutorAkkaActor;
+import org.apache.flink.runtime.rpc.akka.taskexecutor.TaskExecutorAkkaGateway;
+import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorGateway;
+import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutor;
+import scala.concurrent.Future;
+
+import java.util.HashSet;
+import java.util.Set;
+
+public class AkkaRpcService implements RpcService {
+	private final ActorSystem actorSystem;
+	private final Timeout timeout;
+	private final Set<ActorRef> actors = new HashSet<>();
+
+	public AkkaRpcService(ActorSystem actorSystem, Timeout timeout) {
+		this.actorSystem = actorSystem;
+		this.timeout = timeout;
+	}
+
+	@Override
+	public <C extends RpcGateway> Future<C> connect(String address, final Class<C> clazz) {
+		ActorSelection actorSel = actorSystem.actorSelection(address);
+
+		AskableActorSelection asker = new AskableActorSelection(actorSel);
+
+		Future<Object> identify = asker.ask(new Identify(42), timeout);
+
+		return identify.map(new Mapper<Object, C>() {
+			public C apply(Object obj) {
+				ActorRef actorRef = ((ActorIdentity) obj).getRef();
+
+				if (clazz == TaskExecutorGateway.class) {
+					return (C) new TaskExecutorAkkaGateway(actorRef, timeout);
+				} else if (clazz == ResourceManagerGateway.class) {
+					return (C) new ResourceManagerAkkaGateway(actorRef, timeout);
+				} else if (clazz == JobMasterGateway.class) {
+					return (C) new JobMasterAkkaGateway(actorRef, timeout);
+				} else {
+					throw new RuntimeException("Could not find remote endpoint " + clazz);
+				}
+			}
+		}, actorSystem.dispatcher());
+	}
+
+	@Override
+	public <S extends RpcEndpoint, C extends RpcGateway> C startServer(S rpcEndpoint) {
+		ActorRef ref;
+		C self;
+		if (rpcEndpoint instanceof TaskExecutor) {
+			ref = actorSystem.actorOf(
+				Props.create(TaskExecutorAkkaActor.class, rpcEndpoint)
+			);
+
+			self = (C) new TaskExecutorAkkaGateway(ref, timeout);
+		} else if (rpcEndpoint instanceof ResourceManager) {
+			ref = actorSystem.actorOf(
+				Props.create(ResourceManagerAkkaActor.class, rpcEndpoint)
+			);
+
+			self = (C) new ResourceManagerAkkaGateway(ref, timeout);
+		} else if (rpcEndpoint instanceof JobMaster) {
+			ref = actorSystem.actorOf(
+				Props.create(JobMasterAkkaActor.class, rpcEndpoint)
+			);
+
+			self = (C) new JobMasterAkkaGateway(ref, timeout);
+		} else {
+			throw new RuntimeException("Could not start RPC server for class " + rpcEndpoint.getClass());
+		}
+
+		actors.add(ref);
+
+		return self;
+	}
+
+	@Override
+	public <C extends RpcGateway> void stopServer(C selfGateway) {
+		if (selfGateway instanceof AkkaGateway) {
+			AkkaGateway akkaClient = (AkkaGateway) selfGateway;
+
+			if (actors.contains(akkaClient.getActorRef())) {
+				akkaClient.getActorRef().tell(PoisonPill.getInstance(), ActorRef.noSender());
+			} else {
+				// don't stop this actor since it was not started by this RPC service
+			}
+		}
+	}
+
+	@Override
+	public void stopService() {
+		actorSystem.shutdown();
+		actorSystem.awaitTermination();
+	}
+
+	@Override
+	public <C extends RpcGateway> String getAddress(C selfGateway) {
+		if (selfGateway instanceof AkkaGateway) {
+			return AkkaUtils.getAkkaURL(actorSystem, ((AkkaGateway) selfGateway).getActorRef());
+		} else {
+			throw new RuntimeException("Cannot get address for non " + AkkaGateway.class.getName() + ".");
+		}
+	}
+}
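
Wiring the service up could look like the following sketch; the actor system factory
call and the timeout value are assumptions, not part of this patch:

    ActorSystem actorSystem = AkkaUtils.createDefaultActorSystem(); // assumed factory
    Timeout timeout = new Timeout(10, TimeUnit.SECONDS);
    RpcService rpcService = new AkkaRpcService(actorSystem, timeout);
    // ... startServer(...) / connect(...) ...
    rpcService.stopService(); // shuts the actor system down and awaits termination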

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaActor.java
new file mode 100644
index 0000000..3cb499c
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaActor.java
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.Status;
+import akka.actor.UntypedActor;
+import org.apache.flink.runtime.rpc.akka.messages.CallableMessage;
+import org.apache.flink.runtime.rpc.akka.messages.RunnableMessage;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class BaseAkkaActor extends UntypedActor {
+	private static final Logger LOG = LoggerFactory.getLogger(BaseAkkaActor.class);
+
+	@Override
+	public void onReceive(Object message) throws Exception {
+		if (message instanceof RunnableMessage) {
+			try {
+				((RunnableMessage) message).getRunnable().run();
+			} catch (Exception e) {
+				LOG.error("Encountered error while executing runnable.", e);
+			}
+		} else if (message instanceof CallableMessage<?>) {
+			try {
+				Object result = ((CallableMessage<?>) message).getCallable().call();
+				sender().tell(new Status.Success(result), getSelf());
+			} catch (Exception e) {
+				sender().tell(new Status.Failure(e), getSelf());
+			}
+		} else {
+			throw new RuntimeException("Unknown message " + message);
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaGateway.java
new file mode 100644
index 0000000..512790d
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/BaseAkkaGateway.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.ActorRef;
+import akka.pattern.Patterns;
+import akka.util.Timeout;
+import org.apache.flink.runtime.rpc.MainThreadExecutor;
+import org.apache.flink.runtime.rpc.akka.messages.CallableMessage;
+import org.apache.flink.runtime.rpc.akka.messages.RunnableMessage;
+import scala.concurrent.Future;
+
+import java.util.concurrent.Callable;
+
+public abstract class BaseAkkaGateway implements MainThreadExecutor, AkkaGateway {
+	@Override
+	public void runAsync(Runnable runnable) {
+		getActorRef().tell(new RunnableMessage(runnable), ActorRef.noSender());
+	}
+
+	@Override
+	public <V> Future<V> callAsync(Callable<V> callable, Timeout timeout) {
+		return (Future<V>) Patterns.ask(getActorRef(), new CallableMessage<>(callable), timeout);
+	}
+}
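
Both calls are dispatched through the underlying actor, so the passed code runs in the
rpc server's single main thread. A sketch, assuming a started self gateway:

    selfGateway.runAsync(new Runnable() {
        @Override
        public void run() {
            // safe to touch endpoint state here: executed in the main thread
        }
    });

    Future<Integer> result = selfGateway.callAsync(new Callable<Integer>() {
        @Override
        public Integer call() {
            return 42; // computed in the main thread, returned as a future
        }
    }, new Timeout(10, TimeUnit.SECONDS));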

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaActor.java
new file mode 100644
index 0000000..9e04ea9
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaActor.java
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.jobmaster;
+
+import akka.actor.ActorRef;
+import akka.actor.Status;
+import org.apache.flink.runtime.rpc.akka.BaseAkkaActor;
+import org.apache.flink.runtime.rpc.akka.messages.RegisterAtResourceManager;
+import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.akka.messages.UpdateTaskExecutionState;
+
+public class JobMasterAkkaActor extends BaseAkkaActor {
+	private final JobMaster jobMaster;
+
+	public JobMasterAkkaActor(JobMaster jobMaster) {
+		this.jobMaster = jobMaster;
+	}
+
+	@Override
+	public void onReceive(Object message) throws Exception {
+		if (message instanceof UpdateTaskExecutionState) {
+
+			final ActorRef sender = getSender();
+
+			UpdateTaskExecutionState updateTaskExecutionState = (UpdateTaskExecutionState) message;
+
+			try {
+				Acknowledge result = jobMaster.updateTaskExecutionState(updateTaskExecutionState.getTaskExecutionState());
+				sender.tell(new Status.Success(result), getSelf());
+			} catch (Exception e) {
+				sender.tell(new Status.Failure(e), getSelf());
+			}
+		} else if (message instanceof RegisterAtResourceManager) {
+			RegisterAtResourceManager registerAtResourceManager = (RegisterAtResourceManager) message;
+
+			jobMaster.registerAtResourceManager(registerAtResourceManager.getAddress());
+		} else {
+			super.onReceive(message);
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaGateway.java
new file mode 100644
index 0000000..e6bf061
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/jobmaster/JobMasterAkkaGateway.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.jobmaster;
+
+import akka.actor.ActorRef;
+import akka.pattern.AskableActorRef;
+import akka.util.Timeout;
+import org.apache.flink.runtime.rpc.akka.BaseAkkaGateway;
+import org.apache.flink.runtime.rpc.akka.messages.RegisterAtResourceManager;
+import org.apache.flink.runtime.rpc.jobmaster.JobMasterGateway;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.akka.messages.UpdateTaskExecutionState;
+import org.apache.flink.runtime.taskmanager.TaskExecutionState;
+import scala.concurrent.Future;
+import scala.reflect.ClassTag$;
+
+public class JobMasterAkkaGateway extends BaseAkkaGateway implements JobMasterGateway {
+	private final AskableActorRef actorRef;
+	private final Timeout timeout;
+
+	public JobMasterAkkaGateway(ActorRef actorRef, Timeout timeout) {
+		this.actorRef = new AskableActorRef(actorRef);
+		this.timeout = timeout;
+	}
+
+	@Override
+	public Future<Acknowledge> updateTaskExecutionState(TaskExecutionState taskExecutionState) {
+		return actorRef.ask(new UpdateTaskExecutionState(taskExecutionState), timeout)
+			.mapTo(ClassTag$.MODULE$.<Acknowledge>apply(Acknowledge.class));
+	}
+
+	@Override
+	public void registerAtResourceManager(String address) {
+		actorRef.actorRef().tell(new RegisterAtResourceManager(address), actorRef.actorRef());
+	}
+
+	@Override
+	public ActorRef getActorRef() {
+		return actorRef.actorRef();
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallableMessage.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallableMessage.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallableMessage.java
new file mode 100644
index 0000000..f0e555f
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CallableMessage.java
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import java.util.concurrent.Callable;
+
+public class CallableMessage<V> {
+	private final Callable<V> callable;
+
+	public CallableMessage(Callable<V> callable) {
+		this.callable = callable;
+	}
+
+	public Callable<V> getCallable() {
+		return callable;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CancelTask.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CancelTask.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CancelTask.java
new file mode 100644
index 0000000..0b9e9dc
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/CancelTask.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+
+import java.io.Serializable;
+
+public class CancelTask implements Serializable {
+	private static final long serialVersionUID = -2998176874447950595L;
+	private final ExecutionAttemptID executionAttemptID;
+
+	public CancelTask(ExecutionAttemptID executionAttemptID) {
+		this.executionAttemptID = executionAttemptID;
+	}
+
+	public ExecutionAttemptID getExecutionAttemptID() {
+		return executionAttemptID;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/ExecuteTask.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/ExecuteTask.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/ExecuteTask.java
new file mode 100644
index 0000000..a83d539
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/ExecuteTask.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
+
+import java.io.Serializable;
+
+public class ExecuteTask implements Serializable {
+	private static final long serialVersionUID = -6769958430967048348L;
+	private final TaskDeploymentDescriptor taskDeploymentDescriptor;
+
+	public ExecuteTask(TaskDeploymentDescriptor taskDeploymentDescriptor) {
+		this.taskDeploymentDescriptor = taskDeploymentDescriptor;
+	}
+
+	public TaskDeploymentDescriptor getTaskDeploymentDescriptor() {
+		return taskDeploymentDescriptor;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterAtResourceManager.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterAtResourceManager.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterAtResourceManager.java
new file mode 100644
index 0000000..3ade082
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterAtResourceManager.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import java.io.Serializable;
+
+public class RegisterAtResourceManager implements Serializable {
+
+	private static final long serialVersionUID = -4175905742620903602L;
+
+	private final String address;
+
+	public RegisterAtResourceManager(String address) {
+		this.address = address;
+	}
+
+	public String getAddress() {
+		return address;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterJobMaster.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterJobMaster.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterJobMaster.java
new file mode 100644
index 0000000..b35ea38
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RegisterJobMaster.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.runtime.rpc.resourcemanager.JobMasterRegistration;
+
+import java.io.Serializable;
+
+public class RegisterJobMaster implements Serializable {
+	private static final long serialVersionUID = -4616879574192641507L;
+	private final JobMasterRegistration jobMasterRegistration;
+
+	public RegisterJobMaster(JobMasterRegistration jobMasterRegistration) {
+		this.jobMasterRegistration = jobMasterRegistration;
+	}
+
+	public JobMasterRegistration getJobMasterRegistration() {
+		return jobMasterRegistration;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RequestSlot.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RequestSlot.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RequestSlot.java
new file mode 100644
index 0000000..85ceeec
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RequestSlot.java
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.runtime.rpc.resourcemanager.SlotRequest;
+
+import java.io.Serializable;
+
+public class RequestSlot implements Serializable {
+	private static final long serialVersionUID = 7207463889348525866L;
+
+	private final SlotRequest slotRequest;
+
+	public RequestSlot(SlotRequest slotRequest) {
+		this.slotRequest = slotRequest;
+	}
+
+	public SlotRequest getSlotRequest() {
+		return slotRequest;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunnableMessage.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunnableMessage.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunnableMessage.java
new file mode 100644
index 0000000..3556738
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunnableMessage.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+public class RunnableMessage {
+	private final Runnable runnable;
+
+	public RunnableMessage(Runnable runnable) {
+		this.runnable = runnable;
+	}
+
+	public Runnable getRunnable() {
+		return runnable;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/UpdateTaskExecutionState.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/UpdateTaskExecutionState.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/UpdateTaskExecutionState.java
new file mode 100644
index 0000000..f89cd2f
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/UpdateTaskExecutionState.java
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.runtime.taskmanager.TaskExecutionState;
+
+import java.io.Serializable;
+
+public class UpdateTaskExecutionState implements Serializable {
+	private static final long serialVersionUID = -6662229114427331436L;
+
+	private final TaskExecutionState taskExecutionState;
+
+	public UpdateTaskExecutionState(TaskExecutionState taskExecutionState) {
+		this.taskExecutionState = taskExecutionState;
+	}
+
+	public TaskExecutionState getTaskExecutionState() {
+		return taskExecutionState;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaActor.java
new file mode 100644
index 0000000..13101f9
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaActor.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.resourcemanager;
+
+import akka.actor.ActorRef;
+import akka.actor.Status;
+import akka.pattern.Patterns;
+import org.apache.flink.runtime.rpc.akka.BaseAkkaActor;
+import org.apache.flink.runtime.rpc.resourcemanager.RegistrationResponse;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManager;
+import org.apache.flink.runtime.rpc.resourcemanager.SlotAssignment;
+import org.apache.flink.runtime.rpc.akka.messages.RegisterJobMaster;
+import org.apache.flink.runtime.rpc.akka.messages.RequestSlot;
+import scala.concurrent.Future;
+
+public class ResourceManagerAkkaActor extends BaseAkkaActor {
+	private final ResourceManager resourceManager;
+
+	public ResourceManagerAkkaActor(ResourceManager resourceManager) {
+		this.resourceManager = resourceManager;
+	}
+
+	@Override
+	public void onReceive(Object message) throws Exception {
+		final ActorRef sender = getSender();
+
+		if (message instanceof RegisterJobMaster) {
+			RegisterJobMaster registerJobMaster = (RegisterJobMaster) message;
+
+			try {
+				Future<RegistrationResponse> response = resourceManager.registerJobMaster(registerJobMaster.getJobMasterRegistration());
+				Patterns.pipe(response, getContext().dispatcher()).to(sender());
+			} catch (Exception e) {
+				sender.tell(new Status.Failure(e), getSelf());
+			}
+		} else if (message instanceof RequestSlot) {
+			RequestSlot requestSlot = (RequestSlot) message;
+
+			try {
+				SlotAssignment response = resourceManager.requestSlot(requestSlot.getSlotRequest());
+				sender.tell(new Status.Success(response), getSelf());
+			} catch (Exception e) {
+				sender.tell(new Status.Failure(e), getSelf());
+			}
+		} else {
+			super.onReceive(message);
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaGateway.java
new file mode 100644
index 0000000..1304707
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/resourcemanager/ResourceManagerAkkaGateway.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.resourcemanager;
+
+import akka.actor.ActorRef;
+import akka.pattern.AskableActorRef;
+import akka.util.Timeout;
+import org.apache.flink.runtime.rpc.akka.BaseAkkaGateway;
+import org.apache.flink.runtime.rpc.resourcemanager.JobMasterRegistration;
+import org.apache.flink.runtime.rpc.resourcemanager.RegistrationResponse;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
+import org.apache.flink.runtime.rpc.resourcemanager.SlotAssignment;
+import org.apache.flink.runtime.rpc.resourcemanager.SlotRequest;
+import org.apache.flink.runtime.rpc.akka.messages.RegisterJobMaster;
+import org.apache.flink.runtime.rpc.akka.messages.RequestSlot;
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+import scala.reflect.ClassTag$;
+
+public class ResourceManagerAkkaGateway extends BaseAkkaGateway implements ResourceManagerGateway {
+	private final AskableActorRef actorRef;
+	private final Timeout timeout;
+
+	public ResourceManagerAkkaGateway(ActorRef actorRef, Timeout timeout) {
+		this.actorRef = new AskableActorRef(actorRef);
+		this.timeout = timeout;
+	}
+
+	@Override
+	public Future<RegistrationResponse> registerJobMaster(JobMasterRegistration jobMasterRegistration, FiniteDuration timeout) {
+		return actorRef.ask(new RegisterJobMaster(jobMasterRegistration), new Timeout(timeout))
+			.mapTo(ClassTag$.MODULE$.<RegistrationResponse>apply(RegistrationResponse.class));
+	}
+
+	@Override
+	public Future<RegistrationResponse> registerJobMaster(JobMasterRegistration jobMasterRegistration) {
+		return actorRef.ask(new RegisterJobMaster(jobMasterRegistration), timeout)
+			.mapTo(ClassTag$.MODULE$.<RegistrationResponse>apply(RegistrationResponse.class));
+	}
+
+	@Override
+	public Future<SlotAssignment> requestSlot(SlotRequest slotRequest) {
+		return actorRef.ask(new RequestSlot(slotRequest), timeout)
+			.mapTo(ClassTag$.MODULE$.<SlotAssignment>apply(SlotAssignment.class));
+	}
+
+	@Override
+	public ActorRef getActorRef() {
+		return actorRef.actorRef();
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaActor.java
new file mode 100644
index 0000000..ed522cc
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaActor.java
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.taskexecutor;
+
+import akka.actor.ActorRef;
+import akka.actor.Status;
+import akka.dispatch.OnComplete;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.akka.BaseAkkaActor;
+import org.apache.flink.runtime.rpc.akka.messages.CancelTask;
+import org.apache.flink.runtime.rpc.akka.messages.ExecuteTask;
+import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorGateway;
+
+public class TaskExecutorAkkaActor extends BaseAkkaActor {
+	private final TaskExecutorGateway taskExecutor;
+
+	public TaskExecutorAkkaActor(TaskExecutorGateway taskExecutor) {
+		this.taskExecutor = taskExecutor;
+	}
+
+	@Override
+	public void onReceive(Object message) throws Exception {
+		final ActorRef sender = getSender();
+
+		if (message instanceof ExecuteTask) {
+			ExecuteTask executeTask = (ExecuteTask) message;
+
+			taskExecutor.executeTask(executeTask.getTaskDeploymentDescriptor()).onComplete(
+				new OnComplete<Acknowledge>() {
+					@Override
+					public void onComplete(Throwable failure, Acknowledge success) throws Throwable {
+						if (failure != null) {
+							sender.tell(new Status.Failure(failure), getSelf());
+						} else {
+							sender.tell(new Status.Success(Acknowledge.get()), getSelf());
+						}
+					}
+				},
+				getContext().dispatcher()
+			);
+		} else if (message instanceof CancelTask) {
+			CancelTask cancelTask = (CancelTask) message;
+
+			taskExecutor.cancelTask(cancelTask.getExecutionAttemptID()).onComplete(
+				new OnComplete<Acknowledge>() {
+					@Override
+					public void onComplete(Throwable failure, Acknowledge success) throws Throwable {
+						if (failure != null) {
+							sender.tell(new Status.Failure(failure), getSelf());
+						} else {
+							sender.tell(new Status.Success(Acknowledge.get()), getSelf());
+						}
+					}
+				},
+				getContext().dispatcher()
+			);
+		} else {
+			super.onReceive(message);
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaGateway.java
new file mode 100644
index 0000000..7f0a522
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/taskexecutor/TaskExecutorAkkaGateway.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.taskexecutor;
+
+import akka.actor.ActorRef;
+import akka.pattern.AskableActorRef;
+import akka.util.Timeout;
+import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.akka.BaseAkkaGateway;
+import org.apache.flink.runtime.rpc.akka.messages.CancelTask;
+import org.apache.flink.runtime.rpc.akka.messages.ExecuteTask;
+import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorGateway;
+import scala.concurrent.Future;
+import scala.reflect.ClassTag$;
+
+public class TaskExecutorAkkaGateway extends BaseAkkaGateway implements TaskExecutorGateway {
+	private final AskableActorRef actorRef;
+	private final Timeout timeout;
+
+	public TaskExecutorAkkaGateway(ActorRef actorRef, Timeout timeout) {
+		this.actorRef = new AskableActorRef(actorRef);
+		this.timeout = timeout;
+	}
+
+	@Override
+	public Future<Acknowledge> executeTask(TaskDeploymentDescriptor taskDeploymentDescriptor) {
+		return actorRef.ask(new ExecuteTask(taskDeploymentDescriptor), timeout)
+			.mapTo(ClassTag$.MODULE$.<Acknowledge>apply(Acknowledge.class));
+	}
+
+	@Override
+	public Future<Acknowledge> cancelTask(ExecutionAttemptID executionAttemptId) {
+		return actorRef.ask(new CancelTask(executionAttemptId), timeout)
+			.mapTo(ClassTag$.MODULE$.<Acknowledge>apply(Acknowledge.class));
+	}
+
+	@Override
+	public ActorRef getActorRef() {
+		return actorRef.actorRef();
+	}
+}
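
From the caller's side, a deployment could look like this sketch; the tdd
(TaskDeploymentDescriptor) and executionContext instances are assumed to exist:

    Future<Acknowledge> deployFuture = taskExecutorGateway.executeTask(tdd);
    deployFuture.onComplete(new OnComplete<Acknowledge>() {
        @Override
        public void onComplete(Throwable failure, Acknowledge success) {
            if (failure != null) {
                // the remote call failed or timed out
            }
        }
    }, executionContext);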

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
new file mode 100644
index 0000000..b81b19c
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
@@ -0,0 +1,249 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.jobmaster;
+
+import akka.dispatch.Futures;
+import akka.dispatch.Mapper;
+import akka.dispatch.OnComplete;
+import org.apache.flink.runtime.instance.InstanceID;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.RpcMethod;
+import org.apache.flink.runtime.rpc.resourcemanager.JobMasterRegistration;
+import org.apache.flink.runtime.rpc.resourcemanager.RegistrationResponse;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.taskmanager.TaskExecutionState;
+import scala.Tuple2;
+import scala.concurrent.ExecutionContext;
+import scala.concurrent.ExecutionContext$;
+import scala.concurrent.Future;
+import scala.concurrent.duration.Deadline;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * JobMaster implementation. The job master is responsible for the execution of a single
+ * {@link org.apache.flink.runtime.jobgraph.JobGraph}.
+ *
+ * It offers the following methods as part of its rpc interface to interact with the JobMaster
+ * remotely:
+ * <ul>
+ *     <li>{@link #registerAtResourceManager(String)} triggers the registration at the resource manager</li>
+ *     <li>{@link #updateTaskExecutionState(TaskExecutionState)} updates the task execution state for
+ * a given task</li>
+ * </ul>
+ */
+public class JobMaster extends RpcEndpoint<JobMasterGateway> {
+	/** Execution context for future callbacks */
+	private final ExecutionContext executionContext;
+
+	/** Execution context for scheduled runnables */
+	private final ScheduledExecutorService scheduledExecutorService;
+
+	private final FiniteDuration initialRegistrationTimeout = new FiniteDuration(500, TimeUnit.MILLISECONDS);
+	private final FiniteDuration maxRegistrationTimeout = new FiniteDuration(30, TimeUnit.SECONDS);
+	private final FiniteDuration registrationDuration = new FiniteDuration(365, TimeUnit.DAYS);
+	private final long failedRegistrationDelay = 10000;
+
+	/** Gateway to connected resource manager, null iff not connected */
+	private ResourceManagerGateway resourceManager = null;
+
+	/** UUID to filter out old registration runs */
+	private UUID currentRegistrationRun;
+
+	public JobMaster(RpcService rpcService, ExecutorService executorService) {
+		super(rpcService);
+		executionContext = ExecutionContext$.MODULE$.fromExecutor(executorService);
+		scheduledExecutorService = new ScheduledThreadPoolExecutor(1);
+	}
+
+	public ResourceManagerGateway getResourceManager() {
+		return resourceManager;
+	}
+
+	//----------------------------------------------------------------------------------------------
+	// RPC methods
+	//----------------------------------------------------------------------------------------------
+
+	/**
+	 * Updates the task execution state for a given task.
+	 *
+	 * @param taskExecutionState New task execution state for a given task
+	 * @return Acknowledge the task execution state update
+	 */
+	@RpcMethod
+	public Acknowledge updateTaskExecutionState(TaskExecutionState taskExecutionState) {
+		System.out.println("TaskExecutionState: " + taskExecutionState);
+		return Acknowledge.get();
+	}
+
+	/**
+	 * Triggers the registration of the job master at the resource manager.
+	 *
+	 * @param address Address of the resource manager
+	 */
+	@RpcMethod
+	public void registerAtResourceManager(final String address) {
+		currentRegistrationRun = UUID.randomUUID();
+
+		Future<ResourceManagerGateway> resourceManagerFuture = getRpcService().connect(address, ResourceManagerGateway.class);
+
+		handleResourceManagerRegistration(
+			new JobMasterRegistration(getAddress()),
+			1,
+			resourceManagerFuture,
+			currentRegistrationRun,
+			initialRegistrationTimeout,
+			maxRegistrationTimeout,
+			registrationDuration.fromNow());
+	}
+
+	//----------------------------------------------------------------------------------------------
+	// Helper methods
+	//----------------------------------------------------------------------------------------------
+
+	/**
+	 * Helper method to handle the resource manager registration process. If a registration attempt
+	 * times out, then a new attempt with the doubled timeout is initiated. The whole registration
+	 * process has a deadline. Once this deadline is overdue without successful registration, the
+	 * job master shuts down.
+	 *
+	 * @param jobMasterRegistration Job master registration info which is sent to the resource
+	 *                              manager
+	 * @param attemptNumber Registration attempt number
+	 * @param resourceManagerFuture Future of the resource manager gateway
+	 * @param registrationRun UUID describing the current registration run
+	 * @param timeout Timeout of the last registration attempt
+	 * @param maxTimeout Maximum timeout between registration attempts
+	 * @param deadline Deadline for the registration
+	 */
+	void handleResourceManagerRegistration(
+		final JobMasterRegistration jobMasterRegistration,
+		final int attemptNumber,
+		final Future<ResourceManagerGateway> resourceManagerFuture,
+		final UUID registrationRun,
+		final FiniteDuration timeout,
+		final FiniteDuration maxTimeout,
+		final Deadline deadline) {
+
+		// filter out concurrent registration runs
+		if (registrationRun.equals(currentRegistrationRun)) {
+
+			log.info("Start registration attempt #{}.", attemptNumber);
+
+			if (deadline.isOverdue()) {
+				// we've exceeded our registration deadline. This means that we have to shut down the JobMaster
+				log.error("Exceeded registration deadline without successfully registering at the ResourceManager.");
+				shutDown();
+			} else {
+				Future<Tuple2<RegistrationResponse, ResourceManagerGateway>> registrationResponseFuture = resourceManagerFuture.flatMap(new Mapper<ResourceManagerGateway, Future<Tuple2<RegistrationResponse, ResourceManagerGateway>>>() {
+					@Override
+					public Future<Tuple2<RegistrationResponse, ResourceManagerGateway>> apply(ResourceManagerGateway resourceManagerGateway) {
+						return resourceManagerGateway.registerJobMaster(jobMasterRegistration, timeout).zip(Futures.successful(resourceManagerGateway));
+					}
+				}, executionContext);
+
+				registrationResponseFuture.onComplete(new OnComplete<Tuple2<RegistrationResponse, ResourceManagerGateway>>() {
+					@Override
+					public void onComplete(Throwable failure, Tuple2<RegistrationResponse, ResourceManagerGateway> tuple) throws Throwable {
+						if (failure != null) {
+							if (failure instanceof TimeoutException) {
+								// we haven't received an answer in the given timeout interval,
+								// so increase it and try again.
+								final FiniteDuration newTimeout = timeout.$times(2L).min(maxTimeout);
+
+								handleResourceManagerRegistration(
+									jobMasterRegistration,
+									attemptNumber + 1,
+									resourceManagerFuture,
+									registrationRun,
+									newTimeout,
+									maxTimeout,
+									deadline);
+							} else {
+								log.error("Received unknown error while registering at the ResourceManager.", failure);
+								shutDown();
+							}
+						} else {
+							final RegistrationResponse response = tuple._1();
+							final ResourceManagerGateway gateway = tuple._2();
+
+							if (response.isSuccess()) {
+								finishResourceManagerRegistration(gateway, response.getInstanceID());
+							} else {
+								log.info("The registration was refused. Try again.");
+
+								scheduledExecutorService.schedule(new Runnable() {
+									@Override
+									public void run() {
+										// we have to execute scheduled runnable in the main thread
+										// because we need consistency wrt currentRegistrationRun
+										runAsync(new Runnable() {
+											@Override
+											public void run() {
+												// our registration attempt was refused. Start over.
+												handleResourceManagerRegistration(
+													jobMasterRegistration,
+													1,
+													resourceManagerFuture,
+													registrationRun,
+													initialRegistrationTimeout,
+													maxTimeout,
+													deadline);
+											}
+										});
+									}
+								}, failedRegistrationDelay, TimeUnit.MILLISECONDS);
+							}
+						}
+					}
+				}, getMainThreadExecutionContext()); // use the main thread execution context to execute the call back in the main thread
+			}
+		} else {
+			log.info("Discarding outdated registration run.");
+		}
+	}
+
+	/**
+	 * Finish the resource manager registration by setting the new resource manager gateway.
+	 *
+	 * @param resourceManager New resource manager gateway
+	 * @param instanceID Instance id assigned by the resource manager
+	 */
+	void finishResourceManagerRegistration(ResourceManagerGateway resourceManager, InstanceID instanceID) {
+		log.info("Successfully registered at the ResourceManager under instance id {}.", instanceID);
+		this.resourceManager = resourceManager;
+	}
+
+	/**
+	 * Return if the job master is connected to a resource manager.
+	 *
+	 * @return true if the job master is connected to the resource manager
+	 */
+	public boolean isConnected() {
+		return resourceManager != null;
+	}
+}
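
For reference, the retry logic above doubles the ask timeout after every timed-out attempt, capped at the maximum. The back-off schedule in isolation (a minimal standalone sketch; the initial and maximum values mirror the defaults above):

    import java.util.concurrent.TimeUnit;
    import scala.concurrent.duration.FiniteDuration;

    public class BackoffSketch {
        public static void main(String[] args) {
            FiniteDuration timeout = new FiniteDuration(500, TimeUnit.MILLISECONDS);
            final FiniteDuration max = new FiniteDuration(30, TimeUnit.SECONDS);
            for (int attempt = 1; attempt <= 8; attempt++) {
                System.out.println("attempt #" + attempt + ": timeout = " + timeout);
                // same arithmetic as handleResourceManagerRegistration: double, then cap
                timeout = timeout.$times(2L).min(max);
            }
        }
    }

This yields 500 ms, 1 s, 2 s, and so on up to the 30 s cap, while the registration deadline bounds the overall process.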

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMasterGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMasterGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMasterGateway.java
new file mode 100644
index 0000000..17a4c3a
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMasterGateway.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.jobmaster;
+
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.taskmanager.TaskExecutionState;
+import scala.concurrent.Future;
+
+/**
+ * {@link JobMaster} rpc gateway interface
+ */
+public interface JobMasterGateway extends RpcGateway {
+
+	/**
+	 * Updates the task execution state for a given task.
+	 *
+	 * @param taskExecutionState New task execution state for a given task
+	 * @return Future acknowledge of the task execution state update
+	 */
+	Future<Acknowledge> updateTaskExecutionState(TaskExecutionState taskExecutionState);
+
+	/**
+	 * Triggers the registration of the job master at the resource manager.
+	 *
+	 * @param address Address of the resource manager
+	 */
+	void registerAtResourceManager(final String address);
+}
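
For reference, a caller uses this gateway asynchronously. A minimal sketch (assuming a connected `gateway`, a `state` to report, and an `executionContext`; this mirrors the OnComplete style used elsewhere in this patch):

    Future<Acknowledge> ack = gateway.updateTaskExecutionState(state);
    ack.onComplete(new OnComplete<Acknowledge>() {
        @Override
        public void onComplete(Throwable failure, Acknowledge success) {
            if (failure != null) {
                // the update was not acknowledged within the RPC timeout
                failure.printStackTrace();
            }
        }
    }, executionContext);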

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/JobMasterRegistration.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/JobMasterRegistration.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/JobMasterRegistration.java
new file mode 100644
index 0000000..7a2deae
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/JobMasterRegistration.java
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.resourcemanager;
+
+import java.io.Serializable;
+
+public class JobMasterRegistration implements Serializable {
+	private static final long serialVersionUID = 8411214999193765202L;
+
+	private final String address;
+
+	public JobMasterRegistration(String address) {
+		this.address = address;
+	}
+
+	public String getAddress() {
+		return address;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/RegistrationResponse.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/RegistrationResponse.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/RegistrationResponse.java
new file mode 100644
index 0000000..8ac9e49
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/RegistrationResponse.java
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.resourcemanager;
+
+import org.apache.flink.runtime.instance.InstanceID;
+
+import java.io.Serializable;
+
+public class RegistrationResponse implements Serializable {
+	private static final long serialVersionUID = -2379003255993119993L;
+
+	private final boolean isSuccess;
+	private final InstanceID instanceID;
+
+	public RegistrationResponse(boolean isSuccess, InstanceID instanceID) {
+		this.isSuccess = isSuccess;
+		this.instanceID = instanceID;
+	}
+
+	public boolean isSuccess() {
+		return isSuccess;
+	}
+
+	public InstanceID getInstanceID() {
+		return instanceID;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
new file mode 100644
index 0000000..c7e8def
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.resourcemanager;
+
+import akka.dispatch.Mapper;
+import org.apache.flink.runtime.instance.InstanceID;
+import org.apache.flink.runtime.rpc.RpcMethod;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
+import org.apache.flink.runtime.rpc.jobmaster.JobMasterGateway;
+import scala.concurrent.ExecutionContext;
+import scala.concurrent.ExecutionContext$;
+import scala.concurrent.Future;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+
+/**
+ * ResourceManager implementation. The resource manager is responsible for resource de-/allocation
+ * and bookkeeping.
+ *
+ * It offers the following methods as part of its rpc interface to interact with it remotely:
+ * <ul>
+ *     <li>{@link #registerJobMaster(JobMasterRegistration)} registers a {@link JobMaster} at the resource manager</li>
+ *     <li>{@link #requestSlot(SlotRequest)} requests a slot from the resource manager</li>
+ * </ul>
+ */
+public class ResourceManager extends RpcEndpoint<ResourceManagerGateway> {
+	private final ExecutionContext executionContext;
+	private final Map<JobMasterGateway, InstanceID> jobMasterGateways;
+
+	public ResourceManager(RpcService rpcService, ExecutorService executorService) {
+		super(rpcService);
+		this.executionContext = ExecutionContext$.MODULE$.fromExecutor(executorService);
+		this.jobMasterGateways = new HashMap<>();
+	}
+
+	/**
+	 * Register a {@link JobMaster} at the resource manager.
+	 *
+	 * @param jobMasterRegistration Job master registration information
+	 * @return Future registration response
+	 */
+	@RpcMethod
+	public Future<RegistrationResponse> registerJobMaster(JobMasterRegistration jobMasterRegistration) {
+		Future<JobMasterGateway> jobMasterFuture = getRpcService().connect(jobMasterRegistration.getAddress(), JobMasterGateway.class);
+
+		return jobMasterFuture.map(new Mapper<JobMasterGateway, RegistrationResponse>() {
+			@Override
+			public RegistrationResponse apply(final JobMasterGateway jobMasterGateway) {
+				InstanceID instanceID;
+
+				if (jobMasterGateways.containsKey(jobMasterGateway)) {
+					instanceID = jobMasterGateways.get(jobMasterGateway);
+				} else {
+					instanceID = new InstanceID();
+					jobMasterGateways.put(jobMasterGateway, instanceID);
+				}
+
+				return new RegistrationResponse(true, instanceID);
+			}
+		}, getMainThreadExecutionContext());
+	}
+
+	/**
+	 * Requests a slot from the resource manager.
+	 *
+	 * @param slotRequest Slot request
+	 * @return Slot assignment
+	 */
+	@RpcMethod
+	public SlotAssignment requestSlot(SlotRequest slotRequest) {
+		System.out.println("SlotRequest: " + slotRequest);
+		return new SlotAssignment();
+	}
+}
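
For reference, the bookkeeping in registerJobMaster is idempotent: a job master that registers twice receives the same InstanceID. The same pattern in isolation (a standalone sketch using String keys instead of gateways; names are illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    public class RegistrationBookkeeping {
        private final Map<String, UUID> registered = new HashMap<>();

        // Hands back the existing id on re-registration, a fresh one otherwise.
        public UUID register(String gatewayAddress) {
            UUID existing = registered.get(gatewayAddress);
            if (existing != null) {
                return existing;
            }
            UUID fresh = UUID.randomUUID();
            registered.put(gatewayAddress, fresh);
            return fresh;
        }

        public static void main(String[] args) {
            RegistrationBookkeeping rm = new RegistrationBookkeeping();
            UUID first = rm.register("akka://flink/user/jobmaster1");
            UUID second = rm.register("akka://flink/user/jobmaster1");
            System.out.println(first.equals(second)); // prints true
        }
    }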


[57/89] [abbrv] flink git commit: [FLINK-4452] [metrics] TaskManager network buffer gauges

Posted by se...@apache.org.
[FLINK-4452] [metrics] TaskManager network buffer gauges

Adds gauges for the number of total and available TaskManager network
memory segments.

This closes #2408


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/28743cfb
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/28743cfb
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/28743cfb

Branch: refs/heads/flip-6
Commit: 28743cfb86545cf9eaf4ec2cf37ec460a13f3537
Parents: 58165d6
Author: Greg Hogan <co...@greghogan.com>
Authored: Tue Aug 23 10:46:48 2016 -0400
Committer: Greg Hogan <co...@greghogan.com>
Committed: Wed Aug 24 09:02:15 2016 -0400

----------------------------------------------------------------------
 docs/monitoring/metrics.md                      |  9 +++++++
 .../flink/runtime/taskmanager/TaskManager.scala | 25 +++++++++++++++++---
 2 files changed, 31 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/28743cfb/docs/monitoring/metrics.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/metrics.md b/docs/monitoring/metrics.md
index 023bef9..3a148e1 100644
--- a/docs/monitoring/metrics.md
+++ b/docs/monitoring/metrics.md
@@ -335,6 +335,15 @@ Flink exposes the following system metrics:
       <td></td>
     </tr>
     <tr>
+      <th rowspan="2"><strong>TaskManager.Status</strong></th>
+      <td>Network.AvailableMemorySegments</td>
+      <td>The number of unused memory segments.</td>
+    </tr>
+    <tr>
+      <td>Network.TotalMemorySegments</td>
+      <td>The number of allocated memory segments.</td>
+    </tr>
+    <tr>
       <th rowspan="19"><strong>TaskManager.Status.JVM</strong></th>
       <td>ClassLoader.ClassesLoaded</td>
       <td>The total number of classes loaded since the start of the JVM.</td>

http://git-wip-us.apache.org/repos/asf/flink/blob/28743cfb/flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
index 5a95143..72ec2ac 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
@@ -965,7 +965,7 @@ class TaskManager(
     taskManagerMetricGroup = 
       new TaskManagerMetricGroup(metricsRegistry, this.runtimeInfo.getHostname, id.toString)
     
-    TaskManager.instantiateStatusMetrics(taskManagerMetricGroup)
+    TaskManager.instantiateStatusMetrics(taskManagerMetricGroup, network)
     
     // watch job manager to detect when it dies
     context.watch(jobManager)
@@ -2357,9 +2357,16 @@ object TaskManager {
     metricRegistry
   }
 
-  private def instantiateStatusMetrics(taskManagerMetricGroup: MetricGroup) : Unit = {
-    val jvm = taskManagerMetricGroup
+  private def instantiateStatusMetrics(
+      taskManagerMetricGroup: MetricGroup,
+      network: NetworkEnvironment)
+    : Unit = {
+    val status = taskManagerMetricGroup
       .addGroup("Status")
+
+    instantiateNetworkMetrics(status.addGroup("Network"), network)
+
+    val jvm = status
       .addGroup("JVM")
 
     instantiateClassLoaderMetrics(jvm.addGroup("ClassLoader"))
@@ -2369,6 +2376,18 @@ object TaskManager {
     instantiateCPUMetrics(jvm.addGroup("CPU"))
   }
 
+  private def instantiateNetworkMetrics(
+        metrics: MetricGroup,
+        network: NetworkEnvironment)
+    : Unit = {
+    metrics.gauge[Long, FlinkGauge[Long]]("TotalMemorySegments", new FlinkGauge[Long] {
+      override def getValue: Long = network.getNetworkBufferPool.getTotalNumberOfMemorySegments
+    })
+    metrics.gauge[Long, FlinkGauge[Long]]("AvailableMemorySegments", new FlinkGauge[Long] {
+      override def getValue: Long = network.getNetworkBufferPool.getNumberOfAvailableMemorySegments
+    })
+  }
+
   private def instantiateClassLoaderMetrics(metrics: MetricGroup) {
     val mxBean = ManagementFactory.getClassLoadingMXBean
 


[19/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/iterations.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/iterations.md b/docs/dev/batch/iterations.md
new file mode 100644
index 0000000..47910d0
--- /dev/null
+++ b/docs/dev/batch/iterations.md
@@ -0,0 +1,212 @@
+---
+title:  "Iterations"
+
+# Sub-level navigation
+sub-nav-group: batch
+sub-nav-pos: 3
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Iterative algorithms occur in many domains of data analysis, such as *machine learning* or *graph analysis*. Such algorithms are crucial in order to realize the promise of Big Data to extract meaningful information out of your data. With increasing interest in running these kinds of algorithms on very large data sets, there is a need to execute iterations in a massively parallel fashion.
+
+Flink programs implement iterative algorithms by defining a **step function** and embedding it into a special iteration operator. There are two variants of this operator: **Iterate** and **Delta Iterate**. Both operators repeatedly invoke the step function on the current iteration state until a certain termination condition is reached.
+
+Here, we provide background on both operator variants and outline their usage. The [programming guide](index.html) explains how to implement the operators in both Scala and Java. We also support both **vertex-centric and gather-sum-apply iterations** through Flink's graph processing API, [Gelly]({{site.baseurl}}/dev/libs/gelly/index.html).
+
+The following table provides an overview of both operators:
+
+<table class="table table-striped table-hover table-bordered">
+	<thead>
+		<th></th>
+		<th class="text-center">Iterate</th>
+		<th class="text-center">Delta Iterate</th>
+	</thead>
+	<tr>
+		<td class="text-center" width="20%"><strong>Iteration Input</strong></td>
+		<td class="text-center" width="40%"><strong>Partial Solution</strong></td>
+		<td class="text-center" width="40%"><strong>Workset</strong> and <strong>Solution Set</strong></td>
+	</tr>
+	<tr>
+		<td class="text-center"><strong>Step Function</strong></td>
+		<td colspan="2" class="text-center">Arbitrary Data Flows</td>
+	</tr>
+	<tr>
+		<td class="text-center"><strong>State Update</strong></td>
+		<td class="text-center">Next <strong>partial solution</strong></td>
+		<td>
+			<ul>
+				<li>Next workset</li>
+				<li><strong>Changes to solution set</strong></li>
+			</ul>
+		</td>
+	</tr>
+	<tr>
+		<td class="text-center"><strong>Iteration Result</strong></td>
+		<td class="text-center">Last partial solution</td>
+		<td class="text-center">Solution set state after last iteration</td>
+	</tr>
+	<tr>
+		<td class="text-center"><strong>Termination</strong></td>
+		<td>
+			<ul>
+				<li><strong>Maximum number of iterations</strong> (default)</li>
+				<li>Custom aggregator convergence</li>
+			</ul>
+		</td>
+		<td>
+			<ul>
+				<li><strong>Maximum number of iterations or empty workset</strong> (default)</li>
+				<li>Custom aggregator convergence</li>
+			</ul>
+		</td>
+	</tr>
+</table>
+
+
+* This will be replaced by the TOC
+{:toc}
+
+Iterate Operator
+----------------
+
+The **iterate operator** covers the *simple form of iterations*: in each iteration, the **step function** consumes the **entire input** (the *result of the previous iteration*, or the *initial data set*), and computes the **next version of the partial solution** (e.g. `map`, `reduce`, `join`, etc.).
+
+<p class="text-center">
+    <img alt="Iterate Operator" width="60%" src="fig/iterations_iterate_operator.png" />
+</p>
+
+  1. **Iteration Input**: Initial input for the *first iteration* from a *data source* or *previous operators*.
+  2. **Step Function**: The step function will be executed in each iteration. It is an arbitrary data flow consisting of operators like `map`, `reduce`, `join`, etc. and depends on your specific task at hand.
+  3. **Next Partial Solution**: In each iteration, the output of the step function will be fed back into the *next iteration*.
+  4. **Iteration Result**: Output of the *last iteration* is written to a *data sink* or used as input to the *following operators*.
+
+There are multiple options to specify **termination conditions** for an iteration:
+
+  - **Maximum number of iterations**: Without any further conditions, the iteration will be executed this many times.
+  - **Custom aggregator convergence**: Iterations allow you to specify *custom aggregators* and *convergence criteria*, for example summing the number of emitted records (aggregator) and terminating if this number is zero (convergence criterion).
+
+You can also think about the iterate operator in pseudo-code:
+
+~~~java
+IterationState state = getInitialState();
+
+while (!terminationCriterion()) {
+	state = step(state);
+}
+
+setFinalState(state);
+~~~
+
+<div class="panel panel-default">
+	<div class="panel-body">
+	See the <strong><a href="index.html">Programming Guide</a> </strong> for details and code examples.</div>
+</div>
+
+### Example: Incrementing Numbers
+
+In the following example, we **iteratively increment a set of numbers**:
+
+<p class="text-center">
+    <img alt="Iterate Operator Example" width="60%" src="fig/iterations_iterate_operator_example.png" />
+</p>
+
+  1. **Iteration Input**: The initial input is read from a data source and consists of five single-field records (integers `1` to `5`).
+  2. **Step function**: The step function is a single `map` operator, which increments the integer field from `i` to `i+1`. It will be applied to every record of the input.
+  3. **Next Partial Solution**: The output of the step function will be the output of the map operator, i.e. records with incremented integers.
+  4. **Iteration Result**: After ten iterations, the initial numbers will have been incremented ten times, resulting in integers `11` to `15`.
+
+~~~
+// 1st           2nd                       10th
+map(1) -> 2      map(2) -> 3      ...      map(10) -> 11
+map(2) -> 3      map(3) -> 4      ...      map(11) -> 12
+map(3) -> 4      map(4) -> 5      ...      map(12) -> 13
+map(4) -> 5      map(5) -> 6      ...      map(13) -> 14
+map(5) -> 6      map(6) -> 7      ...      map(14) -> 15
+~~~
+
+Note that **1**, **2**, and **4** can be arbitrary data flows.
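+
+Expressed with the DataSet API, the example might look like the following minimal sketch (imports and job setup are elided):
+
+~~~java
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// run 10 iterations over the initial data set
+IterativeDataSet<Integer> initial = env.fromElements(1, 2, 3, 4, 5).iterate(10);
+
+// step function: increment every element by one
+DataSet<Integer> incremented = initial.map(new MapFunction<Integer, Integer>() {
+	@Override
+	public Integer map(Integer i) {
+		return i + 1;
+	}
+});
+
+// feed the result back into the next iteration; the final result is 11 to 15
+DataSet<Integer> result = initial.closeWith(incremented);
+~~~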
+
+
+Delta Iterate Operator
+----------------------
+
+The **delta iterate operator** covers the case of **incremental iterations**. Incremental iterations **selectively modify elements** of their **solution** and evolve the solution rather than fully recompute it.
+
+Where applicable, this leads to **more efficient algorithms**, because not every element in the solution set changes in each iteration. This allows the iteration to **focus on the hot parts** of the solution and leave the **cold parts untouched**. Frequently, the majority of the solution cools down comparatively fast, and the later iterations operate only on a small subset of the data.
+
+<p class="text-center">
+    <img alt="Delta Iterate Operator" width="60%" src="fig/iterations_delta_iterate_operator.png" />
+</p>
+
+  1. **Iteration Input**: The initial workset and solution set are read from *data sources* or *previous operators* as input to the first iteration.
+  2. **Step Function**: The step function will be executed in each iteration. It is an arbitrary data flow consisting of operators like `map`, `reduce`, `join`, etc. and depends on your specific task at hand.
+  3. **Next Workset/Update Solution Set**: The *next workset* drives the iterative computation and will be fed back into the *next iteration*. Furthermore, the solution set will be updated and implicitly forwarded (it does not need to be rebuilt). Both data sets can be updated by different operators of the step function.
+  4. **Iteration Result**: After the *last iteration*, the *solution set* is written to a *data sink* or used as input to the *following operators*.
+
+The default **termination condition** for delta iterations is specified by the **empty workset convergence criterion** and a **maximum number of iterations**. The iteration will terminate when a produced *next workset* is empty or when the maximum number of iterations is reached. It is also possible to specify a **custom aggregator** and **convergence criterion**.
+
+You can also think about the delta iterate operator in pseudo-code:
+
+~~~java
+IterationState workset = getInitialState();
+IterationState solution = getInitialSolution();
+
+while (!terminationCriterion()) {
+	(delta, workset) = step(workset, solution);
+
+	solution.update(delta)
+}
+
+setFinalState(solution);
+~~~
+
+<div class="panel panel-default">
+	<div class="panel-body">
+	See the <strong><a href="index.html">programming guide</a></strong> for details and code examples.</div>
+</div>
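+
+As a sketch in the DataSet API (types, the key position, and the `computeDelta` helper are illustrative assumptions, not a complete program):
+
+~~~java
+// solution set and workset both hold (vertex id, minimum id) pairs, keyed on field 0
+DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
+	initialSolutionSet.iterateDelta(initialWorkset, 100, 0);
+
+// step function (details elided): compute the solution set changes from workset and solution set
+DataSet<Tuple2<Long, Long>> delta = computeDelta(iteration.getWorkset(), iteration.getSolutionSet());
+
+// close with the solution set delta and the next workset;
+// the iteration also terminates early once the workset becomes empty
+DataSet<Tuple2<Long, Long>> result = iteration.closeWith(delta, delta);
+~~~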
+
+### Example: Propagate Minimum in Graph
+
+In the following example, every vertex has an **ID** and a **coloring**. Each vertex will propagate its vertex ID to neighboring vertices. The **goal** is to *assign the minimum ID to every vertex in a subgraph*. If a received ID is smaller than the current one, it changes to the color of the vertex with the received ID. One application of this can be found in *community analysis* or *connected components* computation.
+
+<p class="text-center">
+    <img alt="Delta Iterate Operator Example" width="100%" src="fig/iterations_delta_iterate_operator_example.png" />
+</p>
+
+The **initial input** is set as **both workset and solution set.** In the above figure, the colors visualize the **evolution of the solution set**. With each iteration, the color of the minimum ID is spreading in the respective subgraph. At the same time, the amount of work (exchanged and compared vertex IDs) decreases with each iteration. This corresponds to the **decreasing size of the workset**, which goes from all seven vertices to zero after three iterations, at which time the iteration terminates. The **important observation** is that *the lower subgraph converges before the upper half* does and the delta iteration is able to capture this with the workset abstraction.
+
+In the upper subgraph **ID 1** (*orange*) is the **minimum ID**. In the **first iteration**, it will get propagated to vertex 2, which will subsequently change its color to orange. Vertices 3 and 4 will receive **ID 2** (in *yellow*) as their current minimum ID and change to yellow. Because the color of *vertex 1* didn't change in the first iteration, it can be skipped in the next workset.
+
+In the lower subgraph **ID 5** (*cyan*) is the **minimum ID**. All vertices of the lower subgraph will receive it in the first iteration. Again, we can skip the unchanged vertices (*vertex 5*) for the next workset.
+
+In the **2nd iteration**, the workset size has already decreased from seven to five elements (vertices 2, 3, 4, 6, and 7). These are part of the iteration and further propagate their current minimum IDs. After this iteration, the lower subgraph has already converged (**cold part** of the graph), as it has no elements in the workset, whereas the upper half needs a further iteration (**hot part** of the graph) for the two remaining workset elements (vertices 3 and 4).
+
+The iteration **terminates** when the workset is empty after the **3rd iteration**.
+
+<a href="#supersteps"></a>
+
+Superstep Synchronization
+-------------------------
+
+We referred to each execution of the step function of an iteration operator as *a single iteration*. In parallel setups, **multiple instances of the step function are evaluated in parallel** on different partitions of the iteration state. In many settings, one evaluation of the step function on all parallel instances forms a so-called **superstep**, which is also the granularity of synchronization. Therefore, *all* parallel tasks of an iteration need to complete the superstep before the next superstep is initialized. **Termination criteria** will also be evaluated at superstep barriers.
+
+<p class="text-center">
+    <img alt="Supersteps" width="50%" src="fig/iterations_supersteps.png" />
+</p>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/python.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/python.md b/docs/dev/batch/python.md
new file mode 100644
index 0000000..09a4fa8
--- /dev/null
+++ b/docs/dev/batch/python.md
@@ -0,0 +1,635 @@
+---
+title: "Python Programming Guide"
+is_beta: true
+nav-title: Python API
+nav-parent_id: batch
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Analysis programs in Flink are regular programs that implement transformations on data sets
+(e.g., filtering, mapping, joining, grouping). The data sets are initially created from certain
+sources (e.g., by reading files, or from collections). Results are returned via sinks, which may for
+example write the data to (distributed) files, or to standard output (for example the command line
+terminal). Flink programs run in a variety of contexts: standalone, or embedded in other programs.
+The execution can happen in a local JVM, or on clusters of many machines.
+
+In order to create your own Flink program, we encourage you to start with the
+[program skeleton](#program-skeleton) and gradually add your own
+[transformations](#transformations). The remaining sections act as references for additional
+operations and advanced features.
+
+* This will be replaced by the TOC
+{:toc}
+
+Example Program
+---------------
+
+The following program is a complete, working example of WordCount. You can copy &amp; paste the code
+to run it locally.
+
+{% highlight python %}
+from flink.plan.Environment import get_environment
+from flink.functions.GroupReduceFunction import GroupReduceFunction
+
+class Adder(GroupReduceFunction):
+  def reduce(self, iterator, collector):
+    count, word = iterator.next()
+    count += sum([x[0] for x in iterator])
+    collector.collect((count, word))
+
+env = get_environment()
+data = env.from_elements("Who's there?",
+ "I think I hear them. Stand, ho! Who's there?")
+
+data \
+  .flat_map(lambda x, c: [(1, word) for word in x.lower().split()]) \
+  .group_by(1) \
+  .reduce_group(Adder(), combinable=True) \
+  .output()
+
+env.execute(local=True)
+{% endhighlight %}
+
+{% top %}
+
+Program Skeleton
+----------------
+
+As we already saw in the example, Flink programs look like regular python programs.
+Each program consists of the same basic parts:
+
+1. Obtain an `Environment`,
+2. Load/create the initial data,
+3. Specify transformations on this data,
+4. Specify where to put the results of your computations, and
+5. Execute your program.
+
+We will now give an overview of each of those steps but please refer to the respective sections for
+more details.
+
+
+The `Environment` is the basis for all Flink programs. You can
+obtain one using the following static method on class `Environment`:
+
+{% highlight python %}
+get_environment()
+{% endhighlight %}
+
+For specifying data sources the execution environment has several methods
+to read from files. To just read a text file as a sequence of lines, you can use:
+
+{% highlight python %}
+env = get_environment()
+text = env.read_text("file:///path/to/file")
+{% endhighlight %}
+
+This will give you a DataSet on which you can then apply transformations. For
+more information on data sources and input formats, please refer to
+[Data Sources](#data-sources).
+
+Once you have a DataSet you can apply transformations to create a new
+DataSet which you can then write to a file, transform again, or
+combine with other DataSets. You apply transformations by calling
+methods on DataSet with your own custom transformation function. For example,
+a map transformation looks like this:
+
+{% highlight python %}
+data.map(lambda x: x*2)
+{% endhighlight %}
+
+This will create a new DataSet by doubling every value in the original DataSet.
+For more information and a list of all the transformations,
+please refer to [Transformations](#transformations).
+
+Once you have a DataSet that needs to be written to disk you can call one
+of these methods on DataSet:
+
+{% highlight python %}
+data.write_text("<file-path>", write_mode=Constants.NO_OVERWRITE)
+data.write_csv("<file-path>", line_delimiter='\n', field_delimiter=',', write_mode=Constants.NO_OVERWRITE)
+data.output()
+{% endhighlight %}
+
+The last method is only useful for developing/debugging on a local machine;
+it will output the contents of the DataSet to standard output. (Note that in
+a cluster, the result goes to the standard out stream of the cluster nodes and ends
+up in the *.out* files of the workers).
+The first two do as their names suggest.
+Please refer to [Data Sinks](#data-sinks) for more information on writing to files.
+
+Once you have specified the complete program, you need to call `execute` on
+the `Environment`. This will either execute on your local machine or submit your program
+for execution on a cluster, depending on how Flink was started. You can force
+a local execution by using `execute(local=True)`.
+
+{% top %}
+
+Project setup
+---------------
+
+Apart from setting up Flink, no additional work is required. The python package can be found in the /resource folder of your Flink distribution. The flink package, along with the plan and optional packages, is automatically distributed across the cluster via HDFS when running a job.
+
+The Python API was tested on Linux/Windows systems that have Python 2.7 or 3.4 installed.
+
+By default Flink will start python processes by calling "python" or "python3", depending on which start-script
+was used. By setting the "python.binary.python[2/3]" key in the flink-conf.yaml you can modify this behaviour to use a binary of your choice.
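+
+A minimal `flink-conf.yaml` entry might look as follows (the binary paths are illustrative):
+
+{% highlight yaml %}
+python.binary.python2: /usr/bin/python2.7
+python.binary.python3: /usr/bin/python3.4
+{% endhighlight %}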
+
+{% top %}
+
+Lazy Evaluation
+---------------
+
+All Flink programs are executed lazily: When the program's main method is executed, the data loading
+and transformations do not happen directly. Rather, each operation is created and added to the
+program's plan. The operations are actually executed when one of the `execute()` methods is invoked
+on the Environment object. Whether the program is executed locally or on a cluster depends
+on the environment of the program.
+
+The lazy evaluation lets you construct sophisticated programs that Flink executes as one
+holistically planned unit.
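+
+For example, in the following sketch (paths are illustrative) nothing is read or computed until the last line:
+
+{% highlight python %}
+env = get_environment()
+data = env.read_text("file:///path/to/file")   # only adds a source to the plan
+doubled = data.map(lambda x: x + x)            # only adds an operator to the plan
+doubled.output()                               # only adds a sink to the plan
+env.execute(local=True)                        # the whole plan is executed here
+{% endhighlight %}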
+
+{% top %}
+
+
+Transformations
+---------------
+
+Data transformations transform one or more DataSets into a new DataSet. Programs can combine
+multiple transformations into sophisticated assemblies.
+
+This section gives a brief overview of the available transformations. The [transformations
+documentation](dataset_transformations.html) has a full description of all transformations with
+examples.
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>Map</strong></td>
+      <td>
+        <p>Takes one element and produces one element.</p>
+{% highlight python %}
+data.map(lambda x: x * 2)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>FlatMap</strong></td>
+      <td>
+        <p>Takes one element and produces zero, one, or more elements. </p>
+{% highlight python %}
+data.flat_map(
+  lambda x, c: [(1, word) for word in x.lower().split()])
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>MapPartition</strong></td>
+      <td>
+        <p>Transforms a parallel partition in a single function call. The function gets the partition
+        as an `Iterator` and can produce an arbitrary number of result values. The number of
+        elements in each partition depends on the parallelism and previous operations.</p>
+{% highlight python %}
+data.map_partition(lambda x,c: [value * 2 for value in x])
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Filter</strong></td>
+      <td>
+        <p>Evaluates a boolean function for each element and retains those for which the function
+        returns true.</p>
+{% highlight python %}
+data.filter(lambda x: x > 1000)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Reduce</strong></td>
+      <td>
+        <p>Combines a group of elements into a single element by repeatedly combining two elements
+        into one. Reduce may be applied on a full data set, or on a grouped data set.</p>
+{% highlight python %}
+data.reduce(lambda x,y : x + y)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>ReduceGroup</strong></td>
+      <td>
+        <p>Combines a group of elements into one or more elements. ReduceGroup may be applied on a
+        full data set, or on a grouped data set.</p>
+{% highlight python %}
+class Adder(GroupReduceFunction):
+  def reduce(self, iterator, collector):
+    count, word = iterator.next()
+    count += sum([x[0] for x in iterator])
+    collector.collect((count, word))
+
+data.reduce_group(Adder())
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Aggregate</strong></td>
+      <td>
+        <p>Performs a built-in operation (sum, min, max) on one field of all the Tuples in a
+        data set or in each group of a data set. Aggregation can be applied on a full dataset
+        or on a grouped data set.</p>
+{% highlight python %}
+# This code finds the sum of all of the values in the first field and the maximum of all of the values in the second field
+data.aggregate(Aggregation.Sum, 0).and_agg(Aggregation.Max, 1)
+
+# min(), max(), and sum() syntactic sugar functions are also available
+data.sum(0).and_agg(Aggregation.Max, 1)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Join</strong></td>
+      <td>
+        <p>Joins two data sets by creating all pairs of elements that are equal on their keys.
+        Optionally uses a JoinFunction to turn the pair of elements into a single element.
+        See <a href="#specifying-keys">keys</a> on how to define join keys.</p>
+{% highlight python %}
+# In this case tuple fields are used as keys.
+# "0" is the join field on the first tuple
+# "1" is the join field on the second tuple.
+result = input1.join(input2).where(0).equal_to(1)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>CoGroup</strong></td>
+      <td>
+        <p>The two-dimensional variant of the reduce operation. Groups each input on one or more
+        fields and then joins the groups. The transformation function is called per pair of groups.
+        See <a href="#specifying-keys">keys</a> on how to define coGroup keys.</p>
+{% highlight python %}
+data1.co_group(data2).where(0).equal_to(1)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Cross</strong></td>
+      <td>
+        <p>Builds the Cartesian product (cross product) of two inputs, creating all pairs of
+        elements. Optionally uses a CrossFunction to turn the pair of elements into a single
+        element.</p>
+{% highlight python %}
+result = data1.cross(data2)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Union</strong></td>
+      <td>
+        <p>Produces the union of two data sets.</p>
+{% highlight python %}
+data.union(data2)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>ZipWithIndex</strong></td>
+      <td>
+        <p>Assigns consecutive indexes to each element. For more information, please refer to
+        the <a href="zip_elements_guide.html#zip-with-a-dense-index">Zip Elements Guide</a>.</p>
+{% highlight python %}
+data.zip_with_index()
+{% endhighlight %}
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+{% top %}
+
+
+Specifying Keys
+-------------
+
+Some transformations (like Join or CoGroup) require that a key be defined on
+their argument DataSets, while other transformations (Reduce, GroupReduce) allow the DataSet
+to be grouped on a key before they are applied.
+
+A DataSet is grouped as
+{% highlight python %}
+reduced = data \
+  .group_by(<define key here>) \
+  .reduce_group(<do something>)
+{% endhighlight %}
+
+The data model of Flink is not based on key-value pairs. Therefore,
+you do not need to physically pack the data set types into keys and
+values. Keys are "virtual": they are defined as functions over the
+actual data to guide the grouping operator.
+
+### Define keys for Tuples
+{:.no_toc}
+
+The simplest case is grouping a data set of Tuples on one or more
+fields of the Tuple:
+{% highlight python %}
+reduced = data \
+  .group_by(0) \
+  .reduce_group(<do something>)
+{% endhighlight %}
+
+The data set is grouped on the first field of the tuples.
+The group-reduce function will thus receive groups of tuples with
+the same value in the first field.
+
+{% highlight python %}
+grouped = data \
+  .group_by(0,1) \
+  .reduce(<do something>)
+{% endhighlight %}
+
+The data set is grouped on the composite key consisting of the first and the
+second fields, therefore the reduce function will receive groups
+with the same value for both fields.
+
+A note on nested Tuples: If you have a DataSet with a nested tuple, specifying
+`group_by(<index of tuple>)` will cause the system to use the full tuple as a key.
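+
+For example (a minimal illustrative sketch):
+
+{% highlight python %}
+# each element looks like ((a, b), c); grouping on index 0 keys on the whole (a, b) tuple
+nested = env.from_elements(((1, 2), "a"), ((1, 2), "b"), ((3, 4), "c"))
+grouped = nested.group_by(0)
+{% endhighlight %}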
+
+{% top %}
+
+
+Passing Functions to Flink
+--------------------------
+
+Certain operations require user-defined functions; all of them accept both lambda functions and rich functions as arguments.
+
+{% highlight python %}
+data.filter(lambda x: x > 5)
+{% endhighlight %}
+
+{% highlight python %}
+class Filter(FilterFunction):
+    def filter(self, value):
+        return value > 5
+
+data.filter(Filter())
+{% endhighlight %}
+
+Rich functions allow the use of imported functions, provide access to broadcast-variables,
+can be parameterized using __init__(), and are the go-to option for complex functions.
+They are also the only way to define an optional `combine` function for a reduce operation.
+
+Lambda functions allow the easy insertion of one-liners. Note that a lambda function has to return
+an iterable if the operation can return multiple values (this applies to all functions that receive a collector argument).
+
+{% top %}
+
+Data Types
+----------
+
+Flink's Python API currently only offers native support for primitive python types (int, float, bool, string) and byte arrays.
+
+The type support can be extended by passing a serializer, deserializer and type class to the environment.
+{% highlight python %}
+class MyObj(object):
+    def __init__(self, i):
+        self.value = i
+
+
+class MySerializer(object):
+    def serialize(self, value):
+        return struct.pack(">i", value.value)
+
+
+class MyDeserializer(object):
+    def _deserialize(self, read):
+        i = struct.unpack(">i", read(4))[0]
+        return MyObj(i)
+
+
+env.register_custom_type(MyObj, MySerializer(), MyDeserializer())
+{% endhighlight %}
+
+#### Tuples/Lists
+
+You can use tuples (or lists) for composite types. Python tuples are mapped to the Flink Tuple type, which contains
+a fixed number of fields of various types (up to 25). Every field of a tuple can be a primitive type, including further tuples, resulting in nested tuples.
+
+{% highlight python %}
+word_counts = env.from_elements(("hello", 1), ("world",2))
+
+counts = word_counts.map(lambda x: x[1])
+{% endhighlight %}
+
+When working with operators that require a Key for grouping or matching records,
+Tuples let you simply specify the positions of the fields to be used as key. You can specify more
+than one position to use composite keys (see [Section Data Transformations](#transformations)).
+
+{% highlight python %}
+wordCounts \
+    .group_by(0) \
+    .reduce(MyReduceFunction())
+{% endhighlight %}
+
+{% top %}
+
+Data Sources
+------------
+
+Data sources create the initial data sets, such as from files or from collections.
+
+File-based:
+
+- `read_text(path)` - Reads files line wise and returns them as Strings.
+- `read_csv(path, type)` - Parses files of comma (or another char) delimited fields.
+  Returns a DataSet of tuples. Supports the basic java types and their Value counterparts as field
+  types.
+
+Collection-based:
+
+- `from_elements(*args)` - Creates a data set from the given elements. All elements must be of the same type.
+- `generate_sequence(from, to)` - Generates the sequence of numbers in the given interval, in parallel.
+
+**Examples**
+
+{% highlight python %}
+env = get_environment()
+
+# read text file from local files system
+localLines = env.read_text("file:///path/to/my/textfile")
+
+# read text file from a HDFS running at nnHost:nnPort
+hdfsLines = env.read_text("hdfs://nnHost:nnPort/path/to/my/textfile")
+
+# read a CSV file with three fields, schema defined using constants defined in flink.plan.Constants
+csvInput = env.read_csv("hdfs:///the/CSV/file", (INT, STRING, DOUBLE))
+
+# create a set from some given elements
+values = env.from_elements("Foo", "bar", "foobar", "fubar")
+
+# generate a number sequence
+numbers = env.generate_sequence(1, 10000000)
+{% endhighlight %}
+
+{% top %}
+
+Data Sinks
+----------
+
+Data sinks consume DataSets and are used to store or return them:
+
+- `write_text()` - Writes elements line-wise as Strings. The Strings are
+  obtained by calling the *str()* method of each element.
+- `write_csv(...)` - Writes tuples as comma-separated value files. Row and field
+  delimiters are configurable. The value for each field comes from the *str()* method of the objects.
+- `output()` - Prints the *str()* value of each element on the
+  standard out.
+
+A DataSet can be input to multiple operations. Programs can write or print a data set and at the
+same time run additional transformations on them.
+
+**Examples**
+
+Standard data sink methods:
+
+{% highlight python %}
+# write DataSet to a file on the local file system
+textData.write_text("file:///my/result/on/localFS")
+
+# write DataSet to a file on a HDFS with a namenode running at nnHost:nnPort
+textData.write_text("hdfs://nnHost:nnPort/my/result/on/localFS")
+
+# write DataSet to a file and overwrite the file if it exists
+textData.write_text("file:///my/result/on/localFS", WriteMode.OVERWRITE)
+
+# tuples as lines with pipe as the separator "a|b|c"
+values.write_csv("file:///path/to/the/result/file", line_delimiter="\n", field_delimiter="|")
+
+# this writes tuples in the text formatting "(a, b, c)", rather than as CSV lines
+values.write_text("file:///path/to/the/result/file")
+{% endhighlight %}
+
+{% top %}
+
+Broadcast Variables
+-------------------
+
+Broadcast variables allow you to make a data set available to all parallel instances of an
+operation, in addition to the regular input of the operation. This is useful for auxiliary data
+sets, or data-dependent parameterization. The data set will then be accessible at the operator as a
+Collection.
+
+- **Broadcast**: broadcast sets are registered by name via `with_broadcast_set(DataSet, String)`
+- **Access**: accessible via `self.context.get_broadcast_variable(String)` at the target operator
+
+{% highlight python %}
+class MapperBcv(MapFunction):
+    def map(self, value):
+        factor = self.context.get_broadcast_variable("bcv")[0][0]
+        return value * factor
+
+# 1. The DataSet to be broadcasted
+toBroadcast = env.from_elements(1, 2, 3)
+data = env.from_elements("a", "b")
+
+# 2. Broadcast the DataSet
+data.map(MapperBcv()).with_broadcast_set("bcv", toBroadcast)
+{% endhighlight %}
+
+Make sure that the names (`bcv` in the previous example) match when registering and
+accessing broadcasted data sets.
+
+**Note**: As the content of broadcast variables is kept in-memory on each node, it should not become
+too large. For simpler things like scalar values you can simply parameterize the rich function.
+
+{% top %}
+
+Parallel Execution
+------------------
+
+This section describes how the parallel execution of programs can be configured in Flink. A Flink
+program consists of multiple tasks (operators, data sources, and sinks). A task is split into
+several parallel instances for execution and each parallel instance processes a subset of the task's
+input data. The number of parallel instances of a task is called its *parallelism* or *degree of
+parallelism (DOP)*.
+
+The degree of parallelism of a task can be specified in Flink on different levels.
+
+### Execution Environment Level
+
+Flink programs are executed in the context of an [execution environment](#program-skeleton). An
+execution environment defines a default parallelism for all operators, data sources, and data sinks
+it executes. Execution environment parallelism can be overridden by explicitly configuring the
+parallelism of an operator.
+
+The default parallelism of an execution environment can be specified by calling the
+`set_parallelism()` method. To execute all operators, data sources, and data sinks of the
+[WordCount](#example-program) example program with a parallelism of `3`, set the default parallelism of the
+execution environment as follows:
+
+{% highlight python %}
+env = get_environment()
+env.set_parallelism(3)
+
+text.flat_map(lambda x,c: x.lower().split()) \
+    .group_by(1) \
+    .reduce_group(Adder(), combinable=True) \
+    .output()
+
+env.execute()
+{% endhighlight %}
+
+### System Level
+
+A system-wide default parallelism for all execution environments can be defined by setting the
+`parallelism.default` property in `./conf/flink-conf.yaml`. See the
+[Configuration]({{ site.baseurl }}/setup/config.html) documentation for details.
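+
+For example, the following entry in `flink-conf.yaml` (the value is chosen purely for
+illustration) sets a cluster-wide default parallelism of 10:
+
+{% highlight yaml %}
+parallelism.default: 10
+{% endhighlight %}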
+
+{% top %}
+
+Executing Plans
+---------------
+
+To run the plan with Flink, go to your Flink distribution and run the pyflink.sh script from the /bin folder.
+Use pyflink2.sh for Python 2.7 and pyflink3.sh for Python 3.4. The script containing the plan has to be passed
+as the first argument, followed by any additional Python packages, and finally, separated by `-`, the
+arguments that will be fed to the script.
+
+{% highlight bash %}
+./bin/pyflink<2/3>.sh <Script>[ <pathToPackage1>[ <pathToPackageX>]][ - <param1>[ <paramX>]]
+{% endhighlight %}
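+
+For example, running a plan script with one extra package and two script arguments might look
+like this (all file names here are hypothetical):
+
+{% highlight bash %}
+./bin/pyflink3.sh word_count.py utils.py - 10 hdfs:///input/corpus
+{% endhighlight %}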
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/zip_elements_guide.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/zip_elements_guide.md b/docs/dev/batch/zip_elements_guide.md
new file mode 100644
index 0000000..40edb8d
--- /dev/null
+++ b/docs/dev/batch/zip_elements_guide.md
@@ -0,0 +1,126 @@
+---
+title: "Zipping Elements in a DataSet"
+nav-title: Zipping Elements
+nav-parent_id: batch
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+In certain algorithms, one may need to assign unique identifiers to data set elements.
+This document shows how {% gh_link /flink-java/src/main/java/org/apache/flink/api/java/utils/DataSetUtils.java "DataSetUtils" %} can be used for that purpose.
+
+* This will be replaced by the TOC
+{:toc}
+
+### Zip with a Dense Index
+`zipWithIndex` assigns consecutive labels to the elements, receiving a data set as input and returning a new data set of `(unique id, initial value)` 2-tuples.
+This process requires two passes, first counting then labeling elements, and cannot be pipelined due to the synchronization of counts.
+The alternative `zipWithUniqueId` works in a pipelined fashion and is preferred when a unique labeling is sufficient.
+For example, the following code:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+env.setParallelism(2);
+DataSet<String> in = env.fromElements("A", "B", "C", "D", "E", "F", "G", "H");
+
+DataSet<Tuple2<Long, String>> result = DataSetUtils.zipWithIndex(in);
+
+result.writeAsCsv(resultPath, "\n", ",");
+env.execute();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+env.setParallelism(2)
+val input: DataSet[String] = env.fromElements("A", "B", "C", "D", "E", "F", "G", "H")
+
+val result: DataSet[(Long, String)] = input.zipWithIndex
+
+result.writeAsCsv(resultPath, "\n", ",")
+env.execute()
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight python %}
+from flink.plan.Environment import get_environment
+
+env = get_environment()
+env.set_parallelism(2)
+input = env.from_elements("A", "B", "C", "D", "E", "F", "G", "H")
+
+result = input.zip_with_index()
+
+result.write_text(result_path)
+env.execute()
+{% endhighlight %}
+</div>
+
+</div>
+
+may yield the tuples: (0,G), (1,H), (2,A), (3,B), (4,C), (5,D), (6,E), (7,F)
+
+[Back to top](#top)
+
+### Zip with a Unique Identifier
+In many cases one may not need to assign consecutive labels.
+`zipWithUniqueId` works in a pipelined fashion, speeding up the label assignment process. This method receives a data set as input and returns a new data set of `(unique id, initial value)` 2-tuples.
+For example, the following code:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+env.setParallelism(2);
+DataSet<String> in = env.fromElements("A", "B", "C", "D", "E", "F", "G", "H");
+
+DataSet<Tuple2<Long, String>> result = DataSetUtils.zipWithUniqueId(in);
+
+result.writeAsCsv(resultPath, "\n", ",");
+env.execute();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+env.setParallelism(2)
+val input: DataSet[String] = env.fromElements("A", "B", "C", "D", "E", "F", "G", "H")
+
+val result: DataSet[(Long, String)] = input.zipWithUniqueId
+
+result.writeAsCsv(resultPath, "\n", ",")
+env.execute()
+{% endhighlight %}
+</div>
+
+</div>
+
+may yield the tuples: (0,G), (1,A), (2,H), (3,B), (5,C), (7,D), (9,E), (11,F)
+
+[Back to top](#top)

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/cluster_execution.md
----------------------------------------------------------------------
diff --git a/docs/dev/cluster_execution.md b/docs/dev/cluster_execution.md
new file mode 100644
index 0000000..31b4d4a
--- /dev/null
+++ b/docs/dev/cluster_execution.md
@@ -0,0 +1,155 @@
+---
+title:  "Cluster Execution"
+nav-parent_id: dev
+nav-pos: 12
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+Flink programs can run distributed on clusters of many machines. There
+are two ways to send a program to a cluster for execution:
+
+## Command Line Interface
+
+The command line interface lets you submit packaged programs (JARs) to a cluster
+(or single machine setup).
+
+Please refer to the [Command Line Interface]({{ site.baseurl }}/setup/cli.html) documentation for
+details.
+
+## Remote Environment
+
+The remote environment lets you execute Flink Java programs on a cluster
+directly. The remote environment points to the cluster on which you want to
+execute the program.
+
+### Maven Dependency
+
+If you are developing your program as a Maven project, you have to add the
+`flink-clients` module using this dependency:
+
+~~~xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
+  <version>{{ site.version }}</version>
+</dependency>
+~~~
+
+### Example
+
+The following illustrates the use of the `RemoteEnvironment`:
+
+~~~java
+public static void main(String[] args) throws Exception {
+    ExecutionEnvironment env = ExecutionEnvironment
+        .createRemoteEnvironment("flink-master", 6123, "/home/user/udfs.jar");
+
+    DataSet<String> data = env.readTextFile("hdfs://path/to/file");
+
+    data
+        .filter(new FilterFunction<String>() {
+            public boolean filter(String value) {
+                return value.startsWith("http://");
+            }
+        })
+        .writeAsText("hdfs://path/to/result");
+
+    env.execute();
+}
+~~~
+
+Note that the program contains custom user code and hence requires a JAR file with
+the classes of the code attached. The constructor of the remote environment
+takes the path(s) to the JAR file(s).
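+
+If your program depends on several JARs, you can pass multiple paths, as in this sketch
+(the paths are hypothetical):
+
+~~~java
+ExecutionEnvironment env = ExecutionEnvironment
+    .createRemoteEnvironment("flink-master", 6123, "/home/user/udfs.jar", "/home/user/more-udfs.jar");
+~~~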
+
+## Linking with modules not contained in the binary distribution
+
+The binary distribution contains jar packages in the `lib` folder that are automatically
+provided to the classpath of your distributed programs. Almost all Flink classes are
+located there, with a few exceptions, for example the streaming connectors and some freshly
+added modules. To run code depending on these modules you need to make them accessible
+at runtime, for which we suggest two options:
+
+1. Either copy the required jar files to the `lib` folder onto all of your TaskManagers.
+Note that you have to restart your TaskManagers after this.
+2. Or package them with your code.
+
+The latter version is recommended as it respects the classloader management in Flink.
+
+### Packaging dependencies with your usercode with Maven
+
+To provide these dependencies not included by Flink, we suggest two options with Maven.
+
+1. The maven assembly plugin builds a so-called uber-jar (executable jar) containing all your dependencies.
+The assembly configuration is straightforward, but the resulting jar might become bulky.
+See [maven-assembly-plugin](http://maven.apache.org/plugins/maven-assembly-plugin/usage.html) for further information.
+2. The maven dependency plugin unpacks the relevant parts of the dependencies and
+then packages them with your code.
+
+Using the latter approach in order to bundle the Kafka connector, `flink-connector-kafka`,
+you would need to add the classes from both the connector and the Kafka API itself. Add
+the following to your plugins section:
+
+~~~xml
+<plugin>
+    <groupId>org.apache.maven.plugins</groupId>
+    <artifactId>maven-dependency-plugin</artifactId>
+    <version>2.9</version>
+    <executions>
+        <execution>
+            <id>unpack</id>
+            <!-- executed just before the package phase -->
+            <phase>prepare-package</phase>
+            <goals>
+                <goal>unpack</goal>
+            </goals>
+            <configuration>
+                <artifactItems>
+                    <!-- For Flink connector classes -->
+                    <artifactItem>
+                        <groupId>org.apache.flink</groupId>
+                        <artifactId>flink-connector-kafka</artifactId>
+                        <version>{{ site.version }}</version>
+                        <type>jar</type>
+                        <overWrite>false</overWrite>
+                        <outputDirectory>${project.build.directory}/classes</outputDirectory>
+                        <includes>org/apache/flink/**</includes>
+                    </artifactItem>
+                    <!-- For Kafka API classes -->
+                    <artifactItem>
+                        <groupId>org.apache.kafka</groupId>
+                        <artifactId>kafka_<YOUR_SCALA_VERSION></artifactId>
+                        <version><YOUR_KAFKA_VERSION></version>
+                        <type>jar</type>
+                        <overWrite>false</overWrite>
+                        <outputDirectory>${project.build.directory}/classes</outputDirectory>
+                        <includes>kafka/**</includes>
+                    </artifactItem>
+                </artifactItems>
+            </configuration>
+        </execution>
+    </executions>
+</plugin>
+~~~
+
+Now, when running `mvn clean package`, the produced jar includes the required dependencies.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/cassandra.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/cassandra.md b/docs/dev/connectors/cassandra.md
new file mode 100644
index 0000000..90be0e3
--- /dev/null
+++ b/docs/dev/connectors/cassandra.md
@@ -0,0 +1,155 @@
+---
+title: "Apache Cassandra Connector"
+nav-title: Cassandra
+nav-parent_id: connectors
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This connector provides sinks that write data into a [Cassandra](https://cassandra.apache.org/) database.
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-cassandra{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+Note that the streaming connectors are currently not part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+
+#### Installing Apache Cassandra
+Follow the instructions from the [Cassandra Getting Started page](http://wiki.apache.org/cassandra/GettingStarted).
+
+#### Cassandra Sink
+
+Flink's Cassandra sinks are created using the static `CassandraSink.addSink(DataStream<IN> input)` method.
+This method returns a `CassandraSinkBuilder`, which offers methods to further configure the sink.
+
+The following configuration methods can be used:
+
+1. `setQuery(String query)`
+2. `setHost(String host[, int port])`
+3. `setClusterBuilder(ClusterBuilder builder)`
+4. `enableWriteAheadLog([CheckpointCommitter committer])`
+5. `build()`
+
+*setQuery()* sets the query that is executed for every value the sink receives.
+*setHost()* sets the Cassandra host/port to connect to. This method is intended for simple use-cases.
+*setClusterBuilder()* sets the cluster builder that is used to configure the connection to Cassandra. The *setHost()* functionality can be subsumed by this method.
+*enableWriteAheadLog()* is an optional method that allows exactly-once processing for non-deterministic algorithms.
+
+A checkpoint committer stores additional information about completed checkpoints
+in some resource. This information is used to prevent a full replay of the last
+completed checkpoint in case of a failure.
+You can use a `CassandraCommitter` to store this information in a separate table in Cassandra.
+Note that this table will NOT be cleaned up by Flink.
+
+*build()* finalizes the configuration and returns the CassandraSink.
+
+Flink can provide exactly-once guarantees if the query is idempotent (meaning it can be applied multiple
+times without changing the result) and checkpointing is enabled. In case of a failure the failed
+checkpoint will be replayed completely.
+
+Furthermore, for non-deterministic programs the write-ahead log has to be enabled. For such a program
+the replayed checkpoint may be completely different from the previous attempt, which may leave the
+database in an inconsistent state since part of the first attempt may already be written.
+The write-ahead log guarantees that the replayed checkpoint is identical to the first attempt.
+Note that enabling this feature will have an adverse impact on latency.
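+
+As a minimal sketch (assuming checkpointing is enabled on the environment and the target keyspace
+and table already exist), enabling the write-ahead log looks like this:
+
+{% highlight java %}
+CassandraSink.addSink(input)
+  .setQuery("INSERT INTO example.values (id, counter) values (?, ?);")
+  .setHost("127.0.0.1")
+  .enableWriteAheadLog()
+  .build();
+{% endhighlight %}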
+
+<p style="border-radius: 5px; padding: 5px" class="bg-danger"><b>Note</b>: The write-ahead log functionality is currently experimental. In many cases it is sufficent to use the connector without enabling it. Please report problems to the development mailing list.</p>
+
+
+#### Example
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+CassandraSink.addSink(input)
+  .setQuery("INSERT INTO example.values (id, counter) values (?, ?);")
+  .setClusterBuilder(new ClusterBuilder() {
+    @Override
+    public Cluster buildCluster(Cluster.Builder builder) {
+      return builder.addContactPoint("127.0.0.1").build();
+    }
+  })
+  .build();
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+CassandraSink.addSink(input)
+  .setQuery("INSERT INTO example.values (id, counter) values (?, ?);")
+  .setClusterBuilder(new ClusterBuilder() {
+    override def buildCluster(builder: Cluster.Builder): Cluster = {
+      builder.addContactPoint("127.0.0.1").build()
+    }
+  })
+  .build()
+{% endhighlight %}
+</div>
+</div>
+
+The Cassandra sinks support both tuples and POJOs that use DataStax annotations.
+Flink automatically detects which type of input is used.
+
+An example of such a POJO:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+@Table(keyspace= "test", name = "mappersink")
+public class Pojo implements Serializable {
+
+	private static final long serialVersionUID = 1038054554690916991L;
+
+	@Column(name = "id")
+	private long id;
+	@Column(name = "value")
+	private String value;
+
+	public Pojo(long id, String value){
+		this.id = id;
+		this.value = value;
+	}
+
+	public long getId() {
+		return id;
+	}
+
+	public void setId(long id) {
+		this.id = id;
+	}
+
+	public String getValue() {
+		return value;
+	}
+
+	public void setValue(String value) {
+		this.value = value;
+	}
+}
+{% endhighlight %}
+</div>
+</div>
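+
+A usage sketch for such a POJO sink, assuming `pojos` is a `DataStream<Pojo>` and that no query
+needs to be set because the DataStax annotations on the class provide the mapping:
+
+{% highlight java %}
+DataStream<Pojo> pojos = ...;
+
+CassandraSink.addSink(pojos)
+  .setHost("127.0.0.1")
+  .build();
+{% endhighlight %}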

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/elasticsearch.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/elasticsearch.md b/docs/dev/connectors/elasticsearch.md
new file mode 100644
index 0000000..be45f98
--- /dev/null
+++ b/docs/dev/connectors/elasticsearch.md
@@ -0,0 +1,180 @@
+---
+title: "Elasticsearch Connector"
+nav-title: Elasticsearch
+nav-parent_id: connectors
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This connector provides a Sink that can write to an
+[Elasticsearch](https://elastic.co/) Index. To use this connector, add the
+following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-elasticsearch{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+Note that the streaming connectors are currently not part of the binary
+distribution. See
+[here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
+for information about how to package the program with the libraries for
+cluster execution.
+
+#### Installing Elasticsearch
+
+Instructions for setting up an Elasticsearch cluster can be found
+[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
+Make sure to set and remember a cluster name. This must be set when
+creating a Sink for writing to your cluster.
+
+#### Elasticsearch Sink
+The connector provides a Sink that can send data to an Elasticsearch Index.
+
+The sink can use two different methods for communicating with Elasticsearch:
+
+1. An embedded Node
+2. The TransportClient
+
+See [here](https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html)
+for information about the differences between the two modes.
+
+This code shows how to create a sink that uses an embedded Node for
+communication:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<String> input = ...;
+
+Map<String, String> config = Maps.newHashMap();
+// This instructs the sink to emit after every element, otherwise they would be buffered
+config.put("bulk.flush.max.actions", "1");
+config.put("cluster.name", "my-cluster-name");
+
+input.addSink(new ElasticsearchSink<>(config, new IndexRequestBuilder<String>() {
+    @Override
+    public IndexRequest createIndexRequest(String element, RuntimeContext ctx) {
+        Map<String, Object> json = new HashMap<>();
+        json.put("data", element);
+
+        return Requests.indexRequest()
+                .index("my-index")
+                .type("my-type")
+                .source(json);
+    }
+}));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[String] = ...
+
+val config = new util.HashMap[String, String]
+config.put("bulk.flush.max.actions", "1")
+config.put("cluster.name", "my-cluster-name")
+
+text.addSink(new ElasticsearchSink(config, new IndexRequestBuilder[String] {
+  override def createIndexRequest(element: String, ctx: RuntimeContext): IndexRequest = {
+    val json = new util.HashMap[String, AnyRef]
+    json.put("data", element)
+    println("SENDING: " + element)
+    Requests.indexRequest.index("my-index").`type`("my-type").source(json)
+  }
+}))
+{% endhighlight %}
+</div>
+</div>
+
+Note how a Map of Strings is used to configure the Sink. The configuration keys
+are documented in the Elasticsearch documentation
+[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
+Especially important is the `cluster.name` parameter that must correspond to
+the name of your cluster.
+
+Internally, the sink uses a `BulkProcessor` to send index requests to the cluster.
+This will buffer elements before sending a request to the cluster. The behaviour of the
+`BulkProcessor` can be configured using these config keys (see the sketch after this list):
+ * **bulk.flush.max.actions**: Maximum amount of elements to buffer
+ * **bulk.flush.max.size.mb**: Maximum amount of data (in megabytes) to buffer
+ * **bulk.flush.interval.ms**: Interval at which to flush data regardless of the other two
+  settings in milliseconds
+
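+As a sketch, flushing after 500 actions, 5 MB of data, or 30 seconds, whichever comes first
+(the values are chosen purely for illustration), would be configured like this:
+
+{% highlight java %}
+Map<String, String> config = Maps.newHashMap();
+config.put("cluster.name", "my-cluster-name");
+config.put("bulk.flush.max.actions", "500");
+config.put("bulk.flush.max.size.mb", "5");
+config.put("bulk.flush.interval.ms", "30000");
+{% endhighlight %}
+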
+This example code does the same, but with a `TransportClient`:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<String> input = ...;
+
+Map<String, String> config = Maps.newHashMap();
+// This instructs the sink to emit after every element, otherwise they would be buffered
+config.put("bulk.flush.max.actions", "1");
+config.put("cluster.name", "my-cluster-name");
+
+List<TransportAddress> transports = new ArrayList<>();
+transports.add(new InetSocketTransportAddress("node-1", 9300));
+transports.add(new InetSocketTransportAddress("node-2", 9300));
+
+input.addSink(new ElasticsearchSink<>(config, transports, new IndexRequestBuilder<String>() {
+    @Override
+    public IndexRequest createIndexRequest(String element, RuntimeContext ctx) {
+        Map<String, Object> json = new HashMap<>();
+        json.put("data", element);
+
+        return Requests.indexRequest()
+                .index("my-index")
+                .type("my-type")
+                .source(json);
+    }
+}));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[String] = ...
+
+val config = new util.HashMap[String, String]
+config.put("bulk.flush.max.actions", "1")
+config.put("cluster.name", "my-cluster-name")
+
+val transports = new util.ArrayList[TransportAddress]
+transports.add(new InetSocketTransportAddress("node-1", 9300))
+transports.add(new InetSocketTransportAddress("node-2", 9300))
+
+text.addSink(new ElasticsearchSink(config, transports, new IndexRequestBuilder[String] {
+  override def createIndexRequest(element: String, ctx: RuntimeContext): IndexRequest = {
+    val json = new util.HashMap[String, AnyRef]
+    json.put("data", element)
+    println("SENDING: " + element)
+    Requests.indexRequest.index("my-index").`type`("my-type").source(json)
+  }
+}))
+{% endhighlight %}
+</div>
+</div>
+
+The difference is that we now need to provide a list of Elasticsearch nodes
+to which the sink should connect using a `TransportClient`.
+
+More information about Elasticsearch can be found [here](https://elastic.co).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/elasticsearch2.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/elasticsearch2.md b/docs/dev/connectors/elasticsearch2.md
new file mode 100644
index 0000000..8eed690
--- /dev/null
+++ b/docs/dev/connectors/elasticsearch2.md
@@ -0,0 +1,141 @@
+---
+title: "Elasticsearch 2.x Connector"
+nav-title: Elasticsearch 2.x
+nav-parent_id: connectors
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This connector provides a Sink that can write to an
+[Elasticsearch 2.x](https://elastic.co/) Index. To use this connector, add the
+following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-elasticsearch2{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+Note that the streaming connectors are currently not part of the binary
+distribution. See
+[here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
+for information about how to package the program with the libraries for
+cluster execution.
+
+#### Installing Elasticsearch 2.x
+
+Instructions for setting up an Elasticsearch cluster can be found
+[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
+Make sure to set and remember a cluster name. This must be set when
+creating a Sink for writing to your cluster.
+
+#### Elasticsearch 2.x Sink
+The connector provides a Sink that can send data to an Elasticsearch 2.x Index.
+
+The sink communicates with Elasticsearch via the Transport Client.
+
+See [here](https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html)
+for information about the Transport Client.
+
+The code below shows how to create a sink that uses a `TransportClient` for communication:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+File dataDir = ....;
+
+DataStream<String> input = ...;
+
+Map<String, String> config = new HashMap<>();
+// This instructs the sink to emit after every element, otherwise they would be buffered
+config.put("bulk.flush.max.actions", "1");
+config.put("cluster.name", "my-cluster-name");
+
+List<InetSocketAddress> transports = new ArrayList<>();
+transports.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300));
+transports.add(new InetSocketAddress(InetAddress.getByName("10.2.3.1"), 9300));
+
+input.addSink(new ElasticsearchSink<>(config, transports, new ElasticsearchSinkFunction<String>() {
+  public IndexRequest createIndexRequest(String element) {
+    Map<String, String> json = new HashMap<>();
+    json.put("data", element);
+
+    return Requests.indexRequest()
+            .index("my-index")
+            .type("my-type")
+            .source(json);
+  }
+
+  @Override
+  public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
+    indexer.add(createIndexRequest(element));
+  }
+}));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val dataDir = ....;
+
+val input: DataStream[String] = ...
+
+val config = new util.HashMap[String, String]
+config.put("bulk.flush.max.actions", "1")
+config.put("cluster.name", "my-cluster-name")
+
+val transports = new util.ArrayList[InetSocketAddress]
+transports.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300))
+transports.add(new InetSocketAddress(InetAddress.getByName("10.2.3.1"), 9300))
+
+input.addSink(new ElasticsearchSink(config, transports, new ElasticsearchSinkFunction[String] {
+  def createIndexRequest(element: String): IndexRequest = {
+    val json = new util.HashMap[String, AnyRef]
+    json.put("data", element)
+    Requests.indexRequest.index("my-index").`type`("my-type").source(json)
+  }
+
+  override def process(element: String, ctx: RuntimeContext, indexer: RequestIndexer) {
+    indexer.add(createIndexRequest(element))
+  }
+}))
+{% endhighlight %}
+</div>
+</div>
+
+A Map of Strings is used to configure the Sink. The configuration keys
+are documented in the Elasticsearch documentation
+[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
+Especially important is the `cluster.name` parameter, which must correspond to
+the name of your cluster. With Elasticsearch 2.x you also need to specify `path.home`.
+
+Internally, the sink uses a `BulkProcessor` to send action requests to the cluster.
+This will buffer elements and action requests before sending them to the cluster. The behaviour of the
+`BulkProcessor` can be configured using these config keys:
+ * **bulk.flush.max.actions**: Maximum amount of elements to buffer
+ * **bulk.flush.max.size.mb**: Maximum amount of data (in megabytes) to buffer
+ * **bulk.flush.interval.ms**: Interval at which to flush data regardless of the other two
+  settings in milliseconds
+
+The list of transport addresses specifies the Elasticsearch nodes
+to which the sink connects via a `TransportClient`.
+
+More information about Elasticsearch can be found [here](https://elastic.co).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/filesystem_sink.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/filesystem_sink.md b/docs/dev/connectors/filesystem_sink.md
new file mode 100644
index 0000000..c6318e8
--- /dev/null
+++ b/docs/dev/connectors/filesystem_sink.md
@@ -0,0 +1,130 @@
+---
+title: "HDFS Connector"
+nav-title: Rolling File Sink
+nav-parent_id: connectors
+nav-pos: 6
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This connector provides a Sink that writes rolling files to any filesystem supported by
+Hadoop FileSystem. To use this connector, add the
+following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-filesystem{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version}}</version>
+</dependency>
+{% endhighlight %}
+
+Note that the streaming connectors are currently not part of the binary
+distribution. See
+[here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
+for information about how to package the program with the libraries for
+cluster execution.
+
+#### Rolling File Sink
+
+Both the rolling behaviour and the writing can be configured; we will get to that later.
+This is how you can create a default rolling sink:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<String> input = ...;
+
+input.addSink(new RollingSink<String>("/base/path"));
+
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[String] = ...
+
+input.addSink(new RollingSink("/base/path"))
+
+{% endhighlight %}
+</div>
+</div>
+
+The only required parameter is the base path where the rolling files (buckets) will be
+stored. The sink can be configured by specifying a custom bucketer, writer and batch size.
+
+By default the rolling sink will use the pattern `"yyyy-MM-dd--HH"` to name the rolling buckets.
+This pattern is passed to `SimpleDateFormat` with the current system time to form a bucket path. A
+new bucket will be created whenever the bucket path changes. For example, if you have a pattern
+that contains minutes as the finest granularity you will get a new bucket every minute.
+Each bucket is itself a directory that contains several part files: each parallel instance
+of the sink will create its own part file, and when part files get too big the sink will also
+create a new part file next to the others. To specify a custom bucketer use `setBucketer()`
+on a `RollingSink`.
+
+The default writer is `StringWriter`. This will call `toString()` on the incoming elements
+and write them to part files, separated by newlines. To specify a custom writer use `setWriter()`
+on a `RollingSink`. If you want to write Hadoop SequenceFiles you can use the provided
+`SequenceFileWriter` which can also be configured to use compression.
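+
+As a sketch, a block-compressed sequence-file writer could be configured like this (the codec
+name is an assumption and must be available in your Hadoop setup):
+
+{% highlight java %}
+sink.setWriter(new SequenceFileWriter<IntWritable, Text>("Default", SequenceFile.CompressionType.BLOCK));
+{% endhighlight %}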
+
+The last configuration option is the batch size. This specifies when a part file should be closed
+and a new one started. (The default part file size is 384 MB).
+
+Example:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Tuple2<IntWritable,Text>> input = ...;
+
+RollingSink<Tuple2<IntWritable, Text>> sink = new RollingSink<>("/base/path");
+sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HHmm"));
+sink.setWriter(new SequenceFileWriter<IntWritable, Text>());
+sink.setBatchSize(1024 * 1024 * 400); // this is 400 MB
+
+input.addSink(sink);
+
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[Tuple2[IntWritable, Text]] = ...
+
+val sink = new RollingSink[Tuple2[IntWritable, Text]]("/base/path")
+sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HHmm"))
+sink.setWriter(new SequenceFileWriter[IntWritable, Text]())
+sink.setBatchSize(1024 * 1024 * 400) // this is 400 MB
+
+input.addSink(sink)
+
+{% endhighlight %}
+</div>
+</div>
+
+This will create a sink that writes to bucket files that follow this schema:
+
+```
+/base/path/{date-time}/part-{parallel-task}-{count}
+```
+
+Where `date-time` is the string that we get from the date/time format, `parallel-task` is the index
+of the parallel sink instance and `count` is the running number of part files that were created
+because of the batch size.
+
+For in-depth information, please refer to the JavaDoc for
+[RollingSink](http://flink.apache.org/docs/latest/api/java/org/apache/flink/streaming/connectors/fs/RollingSink.html).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/index.md b/docs/dev/connectors/index.md
new file mode 100644
index 0000000..c49c8c2
--- /dev/null
+++ b/docs/dev/connectors/index.md
@@ -0,0 +1,46 @@
+---
+title: "Streaming Connectors"
+nav-id: connectors
+nav-title: Connectors
+nav-parent_id: dev
+nav-pos: 7
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Connectors provide code for interfacing with various third-party systems.
+
+Currently these systems are supported:
+
+ * [Apache Kafka](https://kafka.apache.org/) (sink/source)
+ * [Elasticsearch](https://elastic.co/) (sink)
+ * [Elasticsearch 2.x](https://elastic.co) (sink)
+ * [Hadoop FileSystem](http://hadoop.apache.org) (sink)
+ * [RabbitMQ](http://www.rabbitmq.com/) (sink/source)
+ * [Amazon Kinesis Streams](http://aws.amazon.com/kinesis/streams/) (sink/source)
+ * [Twitter Streaming API](https://dev.twitter.com/docs/streaming-apis) (source)
+ * [Apache NiFi](https://nifi.apache.org) (sink/source)
+ * [Apache Cassandra](https://cassandra.apache.org/) (sink)
+ * [Redis](http://redis.io/) (sink)
+
+To run an application using one of these connectors, you usually need to install and launch
+additional third-party components, e.g. the servers for the message queues. Further
+instructions for these can be found in the
+corresponding subsections.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/kafka.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kafka.md b/docs/dev/connectors/kafka.md
new file mode 100644
index 0000000..d2221fa
--- /dev/null
+++ b/docs/dev/connectors/kafka.md
@@ -0,0 +1,289 @@
+---
+title: "Apache Kafka Connector"
+nav-title: Kafka
+nav-parent_id: connectors
+nav-pos: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This connector provides access to event streams served by [Apache Kafka](https://kafka.apache.org/).
+
+Flink provides special Kafka Connectors for reading and writing data from/to Kafka topics.
+The Flink Kafka Consumer integrates with Flink's checkpointing mechanism to provide
+exactly-once processing semantics. To achieve that, Flink does not purely rely on Kafka's consumer group
+offset tracking, but tracks and checkpoints these offsets internally as well.
+
+Please pick a package (maven artifact id) and class name for your use-case and environment.
+For most users, the `FlinkKafkaConsumer08` (part of `flink-connector-kafka`) is appropriate.
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left">Maven Dependency</th>
+      <th class="text-left">Supported since</th>
+      <th class="text-left">Consumer and <br>
+      Producer Class name</th>
+      <th class="text-left">Kafka version</th>
+      <th class="text-left">Notes</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+        <td>flink-connector-kafka</td>
+        <td>0.9.1, 0.10</td>
+        <td>FlinkKafkaConsumer082<br>
+        FlinkKafkaProducer</td>
+        <td>0.8.x</td>
+        <td>Uses the <a href="https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example">SimpleConsumer</a> API of Kafka internally. Offsets are committed to ZK by Flink.</td>
+    </tr>
+     <tr>
+        <td>flink-connector-kafka-0.8{{ site.scala_version_suffix }}</td>
+        <td>1.0.0</td>
+        <td>FlinkKafkaConsumer08<br>
+        FlinkKafkaProducer08</td>
+        <td>0.8.x</td>
+        <td>Uses the <a href="https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example">SimpleConsumer</a> API of Kafka internally. Offsets are committed to ZK by Flink.</td>
+    </tr>
+     <tr>
+        <td>flink-connector-kafka-0.9{{ site.scala_version_suffix }}</td>
+        <td>1.0.0</td>
+        <td>FlinkKafkaConsumer09<br>
+        FlinkKafkaProducer09</td>
+        <td>0.9.x</td>
+        <td>Uses the new <a href="http://kafka.apache.org/documentation.html#newconsumerapi">Consumer API</a> of Kafka.</td>
+    </tr>
+  </tbody>
+</table>
+
+Then, import the connector in your maven project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-kafka-0.8{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+Note that the streaming connectors are currently not part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+
+### Installing Apache Kafka
+
+* Follow the instructions from [Kafka's quickstart](https://kafka.apache.org/documentation.html#quickstart) to download the code and launch a server (launching a Zookeeper and a Kafka server is required every time before starting the application).
+* On 32 bit computers [this](http://stackoverflow.com/questions/22325364/unrecognized-vm-option-usecompressedoops-when-running-kafka-from-my-ubuntu-in) problem may occur.
+* If the Kafka and Zookeeper servers are running on a remote machine, then the `advertised.host.name` setting in the `config/server.properties` file must be set to the machine's IP address.
+
+### Kafka Consumer
+
+Flink's Kafka consumer is called `FlinkKafkaConsumer08` (or `09` for Kafka 0.9.0.x versions). It provides access to one or more Kafka topics.
+
+The constructor accepts the following arguments:
+
+1. The topic name / list of topic names
+2. A DeserializationSchema / KeyedDeserializationSchema for deserializing the data from Kafka
+3. Properties for the Kafka consumer.
+  The following properties are required:
+  - "bootstrap.servers" (comma separated list of Kafka brokers)
+  - "zookeeper.connect" (comma separated list of Zookeeper servers) (**only required for Kafka 0.8**)
+  - "group.id" the id of the consumer group
+
+Example:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Properties properties = new Properties();
+properties.setProperty("bootstrap.servers", "localhost:9092");
+// only required for Kafka 0.8
+properties.setProperty("zookeeper.connect", "localhost:2181");
+properties.setProperty("group.id", "test");
+DataStream<String> stream = env
+	.addSource(new FlinkKafkaConsumer08<>("topic", new SimpleStringSchema(), properties))
+	.print();
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val properties = new Properties();
+properties.setProperty("bootstrap.servers", "localhost:9092");
+// only required for Kafka 0.8
+properties.setProperty("zookeeper.connect", "localhost:2181");
+properties.setProperty("group.id", "test");
+stream = env
+    .addSource(new FlinkKafkaConsumer08[String]("topic", new SimpleStringSchema(), properties))
+    .print
+{% endhighlight %}
+</div>
+</div>
+
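+Since the constructor also accepts a list of topic names, a single consumer instance can
+subscribe to several topics. A minimal sketch (the topic names are hypothetical):
+
+{% highlight java %}
+List<String> topics = Arrays.asList("topic-a", "topic-b");
+
+DataStream<String> stream = env
+	.addSource(new FlinkKafkaConsumer08<>(topics, new SimpleStringSchema(), properties));
+{% endhighlight %}
+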
+The current FlinkKafkaConsumer implementation will establish a connection from the client (when calling the constructor)
+for querying the list of topics and partitions.
+
+For this to work, the machine submitting the job to the Flink cluster needs to be able to access the Kafka brokers.
+If you experience any issues with the Kafka consumer on the client side, the client log might contain information about failed requests, etc.
+
+#### The `DeserializationSchema`
+
+The Flink Kafka Consumer needs to know how to turn the binary data in Kafka into Java/Scala objects. The
+`DeserializationSchema` allows users to specify such a schema. The `T deserialize(byte[] message)`
+method gets called for each Kafka message, passing the value from Kafka.
+
+It is usually helpful to start from the `AbstractDeserializationSchema`, which takes care of describing the
+produced Java/Scala type to Flink's type system. Users that implement a vanilla `DeserializationSchema` need
+to implement the `getProducedType(...)` method themselves.
+
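+A minimal sketch of such a schema, assuming the Kafka messages are UTF-8 encoded strings:
+
+{% highlight java %}
+public class UTF8StringSchema extends AbstractDeserializationSchema<String> {
+
+	@Override
+	public String deserialize(byte[] message) throws IOException {
+		// interpret the raw Kafka message bytes as a UTF-8 string
+		return new String(message, StandardCharsets.UTF_8);
+	}
+}
+{% endhighlight %}
+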
+For accessing both the key and value of the Kafka message, the `KeyedDeserializationSchema` has
+the following deserialize method `T deserialize(byte[] messageKey, byte[] message, String topic, int partition, long offset)`.
+
+For convenience, Flink provides the following schemas:
+
+1. `TypeInformationSerializationSchema` (and `TypeInformationKeyValueSerializationSchema`) which creates
+    a schema based on a Flink's `TypeInformation`. This is useful if the data is both written and read by Flink.
+    This schema is a performant Flink-specific alternative to other generic serialization approaches.
+
+2. `JsonDeserializationSchema` (and `JSONKeyValueDeserializationSchema`) which turns the serialized JSON
+    into an ObjectNode object, from which fields can be accessed using objectNode.get("field").as(Int/String/...)().
+    The KeyValue objectNode contains a "key" and "value" field which contain all fields, as well as
+    an optional "metadata" field that exposes the offset/partition/topic for this message.
+
+#### Kafka Consumers and Fault Tolerance
+
+With Flink's checkpointing enabled, the Flink Kafka Consumer will consume records from a topic and periodically checkpoint all
+its Kafka offsets, together with the state of other operations, in a consistent manner. In case of a job failure, Flink will restore
+the streaming program to the state of the latest checkpoint and re-consume the records from Kafka, starting from the offsets that were
+stored in the checkpoint.
+
+The interval of drawing checkpoints therefore defines how far back the program may have to go, at most, in case of a failure.
+
+To use fault tolerant Kafka Consumers, checkpointing of the topology needs to be enabled at the execution environment:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.enableCheckpointing(5000); // checkpoint every 5000 msecs
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+env.enableCheckpointing(5000) // checkpoint every 5000 msecs
+{% endhighlight %}
+</div>
+</div>
+
+Also note that Flink can only restart the topology if enough processing slots are available.
+So if the topology fails due to loss of a TaskManager, there must still be enough slots available afterwards.
+Flink on YARN supports automatic restart of lost YARN containers.
+
+If checkpointing is not enabled, the Kafka consumer will periodically commit the offsets to Zookeeper.
+
+#### Kafka Consumers and Timestamp Extraction/Watermark Emission
+
+In many scenarios, the timestamp of a record is embedded (explicitly or implicitly) in the record itself.
+In addition, the user may want to emit watermarks either periodically, or in an irregular fashion, e.g. based on
+special records in the Kafka stream that contain the current event-time watermark. For these cases, the Flink Kafka
+Consumer allows the specification of an `AssignerWithPeriodicWatermarks` or an `AssignerWithPunctuatedWatermarks`.
+
+You can specify your custom timestamp extractor/watermark emitter as described
+[here]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html), or use one from the
+[predefined ones]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html). After doing so, you
+can pass it to your consumer in the following way:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Properties properties = new Properties();
+properties.setProperty("bootstrap.servers", "localhost:9092");
+// only required for Kafka 0.8
+properties.setProperty("zookeeper.connect", "localhost:2181");
+properties.setProperty("group.id", "test");
+
+FlinkKafkaConsumer08<String> myConsumer =
+    new FlinkKafkaConsumer08<>("topic", new SimpleStringSchema(), properties);
+myConsumer.assignTimestampsAndWatermarks(new CustomWatermarkEmitter());
+
+DataStream<String> stream = env
+	.addSource(myConsumer)
+	.print();
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val properties = new Properties();
+properties.setProperty("bootstrap.servers", "localhost:9092");
+// only required for Kafka 0.8
+properties.setProperty("zookeeper.connect", "localhost:2181");
+properties.setProperty("group.id", "test");
+
+val myConsumer = new FlinkKafkaConsumer08[String]("topic", new SimpleStringSchema(), properties);
+myConsumer.assignTimestampsAndWatermarks(new CustomWatermarkEmitter());
+stream = env
+    .addSource(myConsumer)
+    .print
+{% endhighlight %}
+</div>
+</div>
+
+Internally, an instance of the assigner is executed per Kafka partition.
+When such an assigner is specified, for each record read from Kafka, the
+`extractTimestamp(T element, long previousElementTimestamp)` method is called to assign a timestamp to the record, and
+the `Watermark getCurrentWatermark()` (for periodic) or the
+`Watermark checkAndGetNextWatermark(T lastElement, long extractedTimestamp)` (for punctuated) method is called to determine
+if a new watermark should be emitted and with which timestamp.
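+
+For illustration, a periodic assigner might look like the following sketch. The element type
+`MyRecord` and its `getTimestamp()` accessor are hypothetical, and the one-second bound on
+out-of-orderness is an assumption:
+
+{% highlight java %}
+public class MyTimestampAssigner implements AssignerWithPeriodicWatermarks<MyRecord> {
+
+	private long currentMaxTimestamp;
+
+	@Override
+	public long extractTimestamp(MyRecord element, long previousElementTimestamp) {
+		// hypothetical accessor: the record carries its event-time timestamp in milliseconds
+		long timestamp = element.getTimestamp();
+		currentMaxTimestamp = Math.max(currentMaxTimestamp, timestamp);
+		return timestamp;
+	}
+
+	@Override
+	public Watermark getCurrentWatermark() {
+		// allow records to be up to one second out of order
+		return new Watermark(currentMaxTimestamp - 1000);
+	}
+}
+{% endhighlight %}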
+
+### Kafka Producer
+
+The `FlinkKafkaProducer08` writes data to a Kafka topic. The producer can specify a custom partitioner that assigns
+records to partitions.
+
+Example:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+stream.addSink(new FlinkKafkaProducer08<String>("localhost:9092", "my-topic", new SimpleStringSchema()));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+stream.addSink(new FlinkKafkaProducer08[String]("localhost:9092", "my-topic", new SimpleStringSchema()))
+{% endhighlight %}
+</div>
+</div>
+
+You can also define a custom Kafka producer configuration and pass it to the producer's constructor. Please refer to
+the [Apache Kafka documentation](https://kafka.apache.org/documentation.html) for details on how to configure
+Kafka Producers.
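+
+A sketch of passing such a configuration (the property values are illustrative; see also the note
+on retries below):
+
+{% highlight java %}
+Properties producerConfig = new Properties();
+producerConfig.setProperty("bootstrap.servers", "localhost:9092");
+producerConfig.setProperty("retries", "3"); // see the note on retries below
+
+stream.addSink(new FlinkKafkaProducer08<String>("my-topic", new SimpleStringSchema(), producerConfig));
+{% endhighlight %}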
+
+Similar to the consumer, the producer also allows using an advanced serialization schema which
+serializes the key and value separately. It also allows overriding the target topic id, so that
+one producer instance can send data to multiple topics.
+
+The interface of the serialization schema is called `KeyedSerializationSchema`.
+
+
+**Note**: By default, the number of retries is set to "0". This means that the producer fails immediately on errors,
+including leader changes. The value is set to "0" by default to avoid duplicate messages in the target topic.
+For most production environments with frequent broker changes, we recommend setting the number of retries to a
+higher value.
+
+There is currently no transactional producer for Kafka, so Flink can not guarantee exactly-once delivery
+into a Kafka topic.


[53/89] [abbrv] flink git commit: [FLINK-4253] [config] Rename 'recovery.mode' key to 'high-availability'

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java
index 1c0290a..048fbee 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java
@@ -89,8 +89,8 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	@Test
 	public void testZooKeeperLeaderElectionRetrieval() throws Exception {
 		Configuration configuration = new Configuration();
-		configuration.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
 
 		ZooKeeperLeaderElectionService leaderElectionService = null;
 		ZooKeeperLeaderRetrievalService leaderRetrievalService = null;
@@ -134,8 +134,8 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	@Test
 	public void testZooKeeperReelection() throws Exception {
 		Configuration configuration = new Configuration();
-		configuration.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
 
 		Deadline deadline = new FiniteDuration(5, TimeUnit.MINUTES).fromNow();
 
@@ -217,8 +217,8 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	@Test
 	public void testZooKeeperReelectionWithReplacement() throws Exception {
 		Configuration configuration = new Configuration();
-		configuration.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
 
 		int num = 3;
 		int numTries = 30;
@@ -295,9 +295,9 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 		final String leaderPath = "/leader";
 
 		Configuration configuration = new Configuration();
-		configuration.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
-		configuration.setString(ConfigConstants.ZOOKEEPER_LEADER_PATH, leaderPath);
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_LEADER_PATH, leaderPath);
 
 		ZooKeeperLeaderElectionService leaderElectionService = null;
 		ZooKeeperLeaderRetrievalService leaderRetrievalService = null;
@@ -379,8 +379,8 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	@Test
 	public void testExceptionForwarding() throws Exception {
 		Configuration configuration = new Configuration();
-		configuration.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
 
 		ZooKeeperLeaderElectionService leaderElectionService = null;
 		ZooKeeperLeaderRetrievalService leaderRetrievalService = null;
@@ -448,8 +448,8 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	@Test
 	public void testEphemeralZooKeeperNodes() throws Exception {
 		Configuration configuration = new Configuration();
-		configuration.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
 
 		ZooKeeperLeaderElectionService leaderElectionService;
 		ZooKeeperLeaderRetrievalService leaderRetrievalService = null;
@@ -466,7 +466,7 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 			listener = new TestingListener();
 
 			client = ZooKeeperUtils.startCuratorFramework(configuration);
-			final String leaderPath = configuration.getString(ConfigConstants.ZOOKEEPER_LEADER_PATH,
+			final String leaderPath = configuration.getString(ConfigConstants.HA_ZOOKEEPER_LEADER_PATH,
 					ConfigConstants.DEFAULT_ZOOKEEPER_LEADER_PATH);
 			cache = new NodeCache(client, leaderPath);
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java
index aae1840..5aace34 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java
@@ -23,6 +23,7 @@ import org.apache.curator.test.TestingServer;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.runtime.jobmanager.JobManager;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
 import org.apache.flink.runtime.util.LeaderRetrievalUtils;
 import org.apache.flink.runtime.util.ZooKeeperUtils;
@@ -43,6 +44,7 @@ import java.net.UnknownHostException;
 import java.util.concurrent.TimeUnit;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 
 public class ZooKeeperLeaderRetrievalTest extends TestLogger{
 
@@ -82,8 +84,8 @@ public class ZooKeeperLeaderRetrievalTest extends TestLogger{
 
 		long sleepingTime = 1000;
 
-		config.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
-		config.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+		config.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
 
 		LeaderElectionService leaderElectionService = null;
 		LeaderElectionService faultyLeaderElectionService;
@@ -179,8 +181,8 @@ public class ZooKeeperLeaderRetrievalTest extends TestLogger{
 	@Test
 	public void testTimeoutOfFindConnectingAddress() throws Exception {
 		Configuration config = new Configuration();
-		config.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
-		config.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+		config.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
 
 		FiniteDuration timeout = new FiniteDuration(10, TimeUnit.SECONDS);
 
@@ -190,6 +192,46 @@ public class ZooKeeperLeaderRetrievalTest extends TestLogger{
 		assertEquals(InetAddress.getLocalHost(), result);
 	}
 
+	@Test
+	public void testConnectionToZookeeperOverridingOldConfig() throws Exception {
+		Configuration config = new Configuration();
+		// The new config will be taken into effect
+		config.setString(ConfigConstants.RECOVERY_MODE, "standalone");
+		config.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+
+		FiniteDuration timeout = new FiniteDuration(10, TimeUnit.SECONDS);
+
+		LeaderRetrievalService leaderRetrievalService =
+			LeaderRetrievalUtils.createLeaderRetrievalService(config);
+		InetAddress result = LeaderRetrievalUtils.findConnectingAddress(leaderRetrievalService, timeout);
+
+		assertEquals(InetAddress.getLocalHost(), result);
+	}
+
+	@Test
+	public void testConnectionToStandAloneLeaderOverridingOldConfig() throws Exception {
+		Configuration config = new Configuration();
+		// The new config will be taken into effect
+		config.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		config.setString(ConfigConstants.HIGH_AVAILABILITY, "none");
+		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+
+		HighAvailabilityMode mode = HighAvailabilityMode.fromConfig(config);
+		assertTrue(mode == HighAvailabilityMode.NONE);
+	}
+
+	@Test
+	public void testConnectionToZookeeperUsingOldConfig() throws Exception {
+		Configuration config = new Configuration();
+		// Only the old config key is set; it should still take effect
+		config.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
+
+		HighAvailabilityMode mode = HighAvailabilityMode.fromConfig(config);
+		assertTrue(mode == HighAvailabilityMode.ZOOKEEPER);
+	}
+
 	class FindConnectingAddress implements Runnable {
 
 		private final Configuration config;

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java
index fac6162..66d523f 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java
@@ -212,7 +212,7 @@ public class JobManagerProcess extends TestJvmProcess {
 		 * <code>--port PORT</code>.
 		 *
 		 * <p>Other arguments are parsed to a {@link Configuration} and passed to the
-		 * JobManager, for instance: <code>--recovery.mode ZOOKEEPER --recovery.zookeeper.quorum
+		 * JobManager, for instance: <code>--high-availability ZOOKEEPER --recovery.zookeeper.quorum
 		 * "xyz:123:456"</code>.
 		 */
 		public static void main(String[] args) {

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java
index 97e7cca..417dc88 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java
@@ -96,7 +96,7 @@ public class TaskManagerProcess extends TestJvmProcess {
 
 		/**
 		 * All arguments are parsed to a {@link Configuration} and passed to the Taskmanager,
-		 * for instance: <code>--recovery.mode ZOOKEEPER --recovery.zookeeper.quorum "xyz:123:456"</code>.
+		 * for instance: <code>--high-availability ZOOKEEPER --recovery.zookeeper.quorum "xyz:123:456"</code>.
 		 */
 		public static void main(String[] args) throws Exception {
 			try {

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java
index 2796337..c94842f 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java
@@ -20,7 +20,7 @@ package org.apache.flink.runtime.testutils;
 
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.runtime.state.filesystem.FsStateBackendFactory;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
@@ -31,12 +31,12 @@ import static org.apache.flink.util.Preconditions.checkNotNull;
 public class ZooKeeperTestUtils {
 
 	/**
-	 * Creates a configuration to operate in {@link RecoveryMode#ZOOKEEPER}.
+	 * Creates a configuration to operate in {@link HighAvailabilityMode#ZOOKEEPER}.
 	 *
 	 * @param zooKeeperQuorum   ZooKeeper quorum to connect to
 	 * @param fsStateHandlePath Base path for file system state backend (for checkpoints and
 	 *                          recovery)
-	 * @return A new configuration to operate in {@link RecoveryMode#ZOOKEEPER}.
+	 * @return A new configuration to operate in {@link HighAvailabilityMode#ZOOKEEPER}.
 	 */
 	public static Configuration createZooKeeperRecoveryModeConfig(
 			String zooKeeperQuorum, String fsStateHandlePath) {
@@ -45,13 +45,13 @@ public class ZooKeeperTestUtils {
 	}
 
 	/**
-	 * Sets all necessary configuration keys to operate in {@link RecoveryMode#ZOOKEEPER}.
+	 * Sets all necessary configuration keys to operate in {@link HighAvailabilityMode#ZOOKEEPER}.
 	 *
 	 * @param config            Configuration to use
 	 * @param zooKeeperQuorum   ZooKeeper quorum to connect to
 	 * @param fsStateHandlePath Base path for file system state backend (for checkpoints and
 	 *                          recovery)
-	 * @return The modified configuration to operate in {@link RecoveryMode#ZOOKEEPER}.
+	 * @return The modified configuration to operate in {@link HighAvailabilityMode#ZOOKEEPER}.
 	 */
 	public static Configuration setZooKeeperRecoveryMode(
 			Configuration config,
@@ -66,8 +66,8 @@ public class ZooKeeperTestUtils {
 		config.setInteger(ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, -1);
 
 		// ZooKeeper recovery mode
-		config.setString(ConfigConstants.RECOVERY_MODE, "ZOOKEEPER");
-		config.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, zooKeeperQuorum);
+		config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
+		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, zooKeeperQuorum);
 
 		int connTimeout = 5000;
 		if (System.getenv().containsKey("CI")) {
@@ -75,20 +75,20 @@ public class ZooKeeperTestUtils {
 			connTimeout = 30000;
 		}
 
-		config.setInteger(ConfigConstants.ZOOKEEPER_CONNECTION_TIMEOUT, connTimeout);
-		config.setInteger(ConfigConstants.ZOOKEEPER_SESSION_TIMEOUT, connTimeout);
+		config.setInteger(ConfigConstants.HA_ZOOKEEPER_CONNECTION_TIMEOUT, connTimeout);
+		config.setInteger(ConfigConstants.HA_ZOOKEEPER_SESSION_TIMEOUT, connTimeout);
 
 		// File system state backend
 		config.setString(ConfigConstants.STATE_BACKEND, "FILESYSTEM");
 		config.setString(FsStateBackendFactory.CHECKPOINT_DIRECTORY_URI_CONF_KEY, fsStateHandlePath + "/checkpoints");
-		config.setString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, fsStateHandlePath + "/recovery");
+		config.setString(ConfigConstants.ZOOKEEPER_HA_PATH, fsStateHandlePath + "/recovery");
 
 		// Akka failure detection and execution retries
 		config.setString(ConfigConstants.AKKA_WATCH_HEARTBEAT_INTERVAL, "1000 ms");
 		config.setString(ConfigConstants.AKKA_WATCH_HEARTBEAT_PAUSE, "6 s");
 		config.setInteger(ConfigConstants.AKKA_WATCH_THRESHOLD, 9);
 		config.setString(ConfigConstants.AKKA_ASK_TIMEOUT, "100 s");
-		config.setString(ConfigConstants.RECOVERY_JOB_DELAY, "10 s");
+		config.setString(ConfigConstants.HA_JOB_DELAY, "10 s");
 
 		return config;
 	}

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/util/ZooKeeperUtilTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/util/ZooKeeperUtilTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/util/ZooKeeperUtilTest.java
index 0d01f65..daed4a4 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/util/ZooKeeperUtilTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/util/ZooKeeperUtilTest.java
@@ -71,7 +71,7 @@ public class ZooKeeperUtilTest extends TestLogger {
 	}
 
 	private Configuration setQuorum(Configuration conf, String quorum) {
-		conf.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, quorum);
+		conf.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, quorum);
 		return conf;
 	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/java/org/apache/flink/runtime/zookeeper/ZooKeeperTestEnvironment.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/zookeeper/ZooKeeperTestEnvironment.java b/flink-runtime/src/test/java/org/apache/flink/runtime/zookeeper/ZooKeeperTestEnvironment.java
index 8fc80e0..467706f 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/zookeeper/ZooKeeperTestEnvironment.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/zookeeper/ZooKeeperTestEnvironment.java
@@ -58,7 +58,7 @@ public class ZooKeeperTestEnvironment {
 				zooKeeperServer = new TestingServer(true);
 				zooKeeperCluster = null;
 
-				conf.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY,
+				conf.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY,
 						zooKeeperServer.getConnectString());
 			}
 			else {
@@ -67,7 +67,7 @@ public class ZooKeeperTestEnvironment {
 
 				zooKeeperCluster.start();
 
-				conf.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY,
+				conf.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY,
 						zooKeeperCluster.getConnectString());
 			}
 
@@ -127,7 +127,7 @@ public class ZooKeeperTestEnvironment {
 	 */
 	public CuratorFramework createClient() {
 		Configuration config = new Configuration();
-		config.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, getConnectString());
+		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, getConnectString());
 		return ZooKeeperUtils.startCuratorFramework(config);
 	}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala b/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala
index 02a0fec..7d2b86c 100644
--- a/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala
+++ b/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala
@@ -423,7 +423,8 @@ object TestingUtils {
       prefix: String)
     : ActorGateway = {
 
-    configuration.setString(ConfigConstants.RECOVERY_MODE, ConfigConstants.DEFAULT_RECOVERY_MODE)
+    configuration.setString(ConfigConstants.HIGH_AVAILABILITY,
+      ConfigConstants.DEFAULT_HIGH_AVAILABILTY)
 
       val (actor, _) = JobManager.startJobManagerActors(
         configuration,
@@ -502,7 +503,8 @@ object TestingUtils {
       configuration: Configuration)
   : ActorGateway = {
 
-    configuration.setString(ConfigConstants.RECOVERY_MODE, ConfigConstants.DEFAULT_RECOVERY_MODE)
+    configuration.setString(ConfigConstants.HIGH_AVAILABILITY,
+      ConfigConstants.DEFAULT_HIGH_AVAILABILTY)
 
     val actor = FlinkResourceManager.startResourceManagerActors(
       configuration,

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java
----------------------------------------------------------------------
diff --git a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java
index c51e666..7e5acee 100644
--- a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java
+++ b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java
@@ -120,7 +120,7 @@ public class TestBaseUtils extends TestLogger {
 
 		if (startZooKeeper) {
 			config.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, 3);
-			config.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+			config.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
 		}
 
 		return startCluster(config, singleActorSystem);

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala
----------------------------------------------------------------------
diff --git a/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala b/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala
index 42c0a6a..5dd4188 100644
--- a/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala
+++ b/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala
@@ -29,7 +29,7 @@ import org.apache.flink.configuration.{ConfigConstants, Configuration}
 import org.apache.flink.runtime.akka.AkkaUtils
 import org.apache.flink.runtime.clusterframework.FlinkResourceManager
 import org.apache.flink.runtime.clusterframework.types.ResourceID
-import org.apache.flink.runtime.jobmanager.{JobManager, RecoveryMode}
+import org.apache.flink.runtime.jobmanager.{JobManager, HighAvailabilityMode}
 import org.apache.flink.runtime.minicluster.LocalFlinkMiniCluster
 import org.apache.flink.runtime.taskmanager.TaskManager
 import org.apache.flink.runtime.testingUtils.TestingTaskManagerMessages.NotifyWhenRegisteredAtJobManager
@@ -261,14 +261,16 @@ class ForkableFlinkMiniCluster(
   }
 
   override def start(): Unit = {
-    val zookeeperURL = configuration.getString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, "")
+    val zookeeperURL = configuration.getString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, "")
 
-    zookeeperCluster = if(recoveryMode == RecoveryMode.ZOOKEEPER && zookeeperURL.equals("")) {
+    zookeeperCluster = if (recoveryMode == HighAvailabilityMode.ZOOKEEPER &&
+      zookeeperURL.equals("")) {
       LOG.info("Starting ZooKeeper cluster.")
 
       val testingCluster = new TestingCluster(1)
 
-      configuration.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, testingCluster.getConnectString)
+      configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY,
+        testingCluster.getConnectString)
 
       testingCluster.start()
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java b/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java
index e97532c..22bf62a 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java
@@ -564,7 +564,7 @@ public class ChaosMonkeyITCase extends TestLogger {
 			fail(fsCheckpoints + " does not exist: " + Arrays.toString(FileStateBackendBasePath.listFiles()));
 		}
 
-		File fsRecovery = new File(new URI(config.getString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, "")).getPath());
+		File fsRecovery = new File(new URI(config.getString(ConfigConstants.ZOOKEEPER_HA_PATH, "")).getPath());
 
 		LOG.info("Checking " + fsRecovery);
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java
index eccf971..e0e165d 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java
@@ -482,7 +482,7 @@ public class JobManagerHAJobGraphRecoveryITCase extends TestLogger {
 
 		// ZooKeeper
 		String currentJobsPath = config.getString(
-				ConfigConstants.ZOOKEEPER_JOBGRAPHS_PATH,
+				ConfigConstants.HA_ZOOKEEPER_JOBGRAPHS_PATH,
 				ConfigConstants.DEFAULT_ZOOKEEPER_JOBGRAPHS_PATH);
 
 		Stat stat = ZooKeeper.getClient().checkExists().forPath(currentJobsPath);
@@ -514,7 +514,7 @@ public class JobManagerHAJobGraphRecoveryITCase extends TestLogger {
 
 		// ZooKeeper
 		String currentJobsPath = config.getString(
-			ConfigConstants.ZOOKEEPER_JOBGRAPHS_PATH,
+			ConfigConstants.HA_ZOOKEEPER_JOBGRAPHS_PATH,
 			ConfigConstants.DEFAULT_ZOOKEEPER_JOBGRAPHS_PATH);
 
 		Stat stat = ZooKeeper.getClient().checkExists().forPath(currentJobsPath);

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java
index 88aeb09..0c52204 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java
@@ -149,8 +149,8 @@ public class JobManagerHAProcessFailureBatchRecoveryITCase extends TestLogger {
 	 */
 	public void testJobManagerFailure(String zkQuorum, final File coordinateDir) throws Exception {
 		Configuration config = new Configuration();
-		config.setString(ConfigConstants.RECOVERY_MODE, "ZOOKEEPER");
-		config.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, zkQuorum);
+		config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
+		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, zkQuorum);
 
 		ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
 				"leader", 1, config);

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java b/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java
index 45ee839..7091339 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java
@@ -91,11 +91,11 @@ public class ZooKeeperLeaderElectionITCase extends TestLogger {
 		int numJMs = 10;
 		int numTMs = 3;
 
-		configuration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
 		configuration.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, numJMs);
 		configuration.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, numTMs);
 		configuration.setString(ConfigConstants.STATE_BACKEND, "filesystem");
-		configuration.setString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, tempDirectory.getAbsoluteFile().toURI().toString());
+		configuration.setString(ConfigConstants.ZOOKEEPER_HA_PATH, tempDirectory.getAbsoluteFile().toURI().toString());
 
 		ForkableFlinkMiniCluster cluster = new ForkableFlinkMiniCluster(configuration);
 
@@ -139,12 +139,12 @@ public class ZooKeeperLeaderElectionITCase extends TestLogger {
 
 		Configuration configuration = new Configuration();
 
-		configuration.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
+		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
 		configuration.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, numJMs);
 		configuration.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, numTMs);
 		configuration.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, numSlotsPerTM);
 		configuration.setString(ConfigConstants.STATE_BACKEND, "filesystem");
-		configuration.setString(ConfigConstants.ZOOKEEPER_RECOVERY_PATH, tempDirectory.getAbsoluteFile().toURI().toString());
+		configuration.setString(ConfigConstants.ZOOKEEPER_HA_PATH, tempDirectory.getAbsoluteFile().toURI().toString());
 
 		// we "effectively" disable the automatic RecoverAllJobs message and send it manually to make
 		// sure that all TMs have registered to the JM prior to issuing the RecoverAllJobs message

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-yarn-tests/src/test/java/org/apache/flink/yarn/CliFrontendYarnAddressConfigurationTest.java
----------------------------------------------------------------------
diff --git a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/CliFrontendYarnAddressConfigurationTest.java b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/CliFrontendYarnAddressConfigurationTest.java
index 60ae2ef..48ad7f5 100644
--- a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/CliFrontendYarnAddressConfigurationTest.java
+++ b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/CliFrontendYarnAddressConfigurationTest.java
@@ -202,7 +202,7 @@ public class CliFrontendYarnAddressConfigurationTest {
 				CliFrontendParser.parseRunCommand(new String[] {"-yid", TEST_YARN_APPLICATION_ID.toString()});
 
 		frontend.retrieveClient(options);
-		String zkNs = frontend.getConfiguration().getString(ConfigConstants.ZOOKEEPER_NAMESPACE_KEY, "error");
+		String zkNs = frontend.getConfiguration().getString(ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY, "error");
 		Assert.assertTrue(zkNs.matches("application_\\d+_0042"));
 	}
 
@@ -216,7 +216,7 @@ public class CliFrontendYarnAddressConfigurationTest {
 				CliFrontendParser.parseRunCommand(new String[] {"-yid", TEST_YARN_APPLICATION_ID.toString(), "-yz", overrideZkNamespace});
 
 		frontend.retrieveClient(options);
-		String zkNs = frontend.getConfiguration().getString(ConfigConstants.ZOOKEEPER_NAMESPACE_KEY, "error");
+		String zkNs = frontend.getConfiguration().getString(ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY, "error");
 		Assert.assertEquals(overrideZkNamespace, zkNs);
 	}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java
----------------------------------------------------------------------
diff --git a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java
index 9b52975..25dbe53 100644
--- a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java
+++ b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java
@@ -115,7 +115,7 @@ public class YARNHighAvailabilityITCase extends YarnTestBase {
 			zkServer.getConnectString() + "@@yarn.application-attempts=" + numberApplicationAttempts +
 			"@@" + ConfigConstants.STATE_BACKEND + "=FILESYSTEM" +
 			"@@" + FsStateBackendFactory.CHECKPOINT_DIRECTORY_URI_CONF_KEY + "=" + fsStateHandlePath + "/checkpoints" +
-			"@@" + ConfigConstants.ZOOKEEPER_RECOVERY_PATH + "=" + fsStateHandlePath + "/recovery");
+			"@@" + ConfigConstants.ZOOKEEPER_HA_PATH + "=" + fsStateHandlePath + "/recovery");
 		flinkYarnClient.setConfigurationFilePath(new Path(confDirPath + File.separator + "flink-conf.yaml"));
 
 		ClusterClient yarnCluster = null;

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java
----------------------------------------------------------------------
diff --git a/flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java b/flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java
index ba07af1..f4c2032 100644
--- a/flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java
+++ b/flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java
@@ -23,7 +23,7 @@ import org.apache.flink.client.deployment.ClusterDescriptor;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.GlobalConfiguration;
 import org.apache.flink.runtime.akka.AkkaUtils;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -553,13 +553,13 @@ public abstract class AbstractYarnClusterDescriptor implements ClusterDescriptor
 		// no user specified cli argument for namespace?
 		if (zkNamespace == null || zkNamespace.isEmpty()) {
 			// namespace defined in config? else use applicationId as default.
-			zkNamespace = flinkConfiguration.getString(ConfigConstants.ZOOKEEPER_NAMESPACE_KEY, String.valueOf(appId));
+			zkNamespace = flinkConfiguration.getString(ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY, String.valueOf(appId));
 			setZookeeperNamespace(zkNamespace);
 		}
 
-		flinkConfiguration.setString(ConfigConstants.ZOOKEEPER_NAMESPACE_KEY, zkNamespace);
+		flinkConfiguration.setString(ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY, zkNamespace);
 
-		if (RecoveryMode.isHighAvailabilityModeActivated(flinkConfiguration)) {
+		if (HighAvailabilityMode.isHighAvailabilityModeActivated(flinkConfiguration)) {
 			// activate re-execution of failed applications
 			appContext.setMaxAppAttempts(
 				flinkConfiguration.getInteger(

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java
----------------------------------------------------------------------
diff --git a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java
index 39b2510..87a2c98 100644
--- a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java
+++ b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java
@@ -430,7 +430,7 @@ public class YarnApplicationMasterRunner {
 		// override zookeeper namespace with user cli argument (if provided)
 		String cliZKNamespace = ENV.get(YarnConfigKeys.ENV_ZOOKEEPER_NAMESPACE);
 		if (cliZKNamespace != null && !cliZKNamespace.isEmpty()) {
-			configuration.setString(ConfigConstants.ZOOKEEPER_NAMESPACE_KEY, cliZKNamespace);
+			configuration.setString(ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY, cliZKNamespace);
 		}
 
 		// if a web monitor shall be started, set the port to random binding

http://git-wip-us.apache.org/repos/asf/flink/blob/01ffe34c/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java
----------------------------------------------------------------------
diff --git a/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java b/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java
index bee6a7a..3c93e34 100644
--- a/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java
+++ b/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java
@@ -58,7 +58,7 @@ import java.util.Map;
 import java.util.Properties;
 
 import static org.apache.flink.client.cli.CliFrontendParser.ADDRESS_OPTION;
-import static org.apache.flink.configuration.ConfigConstants.ZOOKEEPER_NAMESPACE_KEY;
+import static org.apache.flink.configuration.ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY;
 
 /**
  * Class handling the command line interface to the YARN session.
@@ -503,8 +503,8 @@ public class FlinkYarnSessionCli implements CustomCommandLine<YarnClusterClient>
 		if(null != applicationID) {
 			String zkNamespace = cmdLine.hasOption(ZOOKEEPER_NAMESPACE.getOpt()) ?
 					cmdLine.getOptionValue(ZOOKEEPER_NAMESPACE.getOpt())
-					: config.getString(ZOOKEEPER_NAMESPACE_KEY, applicationID);
-			config.setString(ZOOKEEPER_NAMESPACE_KEY, zkNamespace);
+					: config.getString(HA_ZOOKEEPER_NAMESPACE_KEY, applicationID);
+			config.setString(HA_ZOOKEEPER_NAMESPACE_KEY, zkNamespace);
 
 			AbstractYarnClusterDescriptor yarnDescriptor = getClusterDescriptor();
 			yarnDescriptor.setFlinkConfiguration(config);


[40/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fig/LICENSE.txt
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fig/LICENSE.txt b/docs/apis/streaming/fig/LICENSE.txt
deleted file mode 100644
index 35b8673..0000000
--- a/docs/apis/streaming/fig/LICENSE.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-All image files in the folder and its subfolders are
-licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fig/parallel_streams_watermarks.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fig/parallel_streams_watermarks.svg b/docs/apis/streaming/fig/parallel_streams_watermarks.svg
deleted file mode 100644
index f6a4c4b..0000000
--- a/docs/apis/streaming/fig/parallel_streams_watermarks.svg
+++ /dev/null
@@ -1,516 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="468.91"
-   height="285.20001"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(-355.61783,-283.04674)"
-     id="layer1">
-    <g
-       transform="translate(229.75524,151.68574)"
-       id="g2989">
-      <path
-         d="m 127.90999,194.24654 c 0,-13.41733 10.88576,-24.29371 24.30309,-24.29371 13.41733,0 24.30308,10.87638 24.30308,24.29371 0,13.42671 -10.88575,24.30309 -24.30308,24.30309 -13.41733,0 -24.30309,-10.87638 -24.30309,-24.30309"
-         id="path2991"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="134.8311"
-         y="192.20834"
-         id="text2993"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="144.43231"
-         y="204.20988"
-         id="text2995"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(1)</text>
-      <path
-         d="m 127.29116,327.47283 c 0,-13.37044 10.83888,-24.22807 24.22808,-24.22807 13.37045,0 24.20932,10.85763 24.20932,24.22807 0,13.3892 -10.83887,24.22808 -24.20932,24.22808 -13.3892,0 -24.22808,-10.83888 -24.22808,-24.22808"
-         id="path2997"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="134.18349"
-         y="325.44901"
-         id="text2999"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="143.7847"
-         y="337.45053"
-         id="text3001"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(2)</text>
-      <path
-         d="m 266.05878,194.25592 c 0,-13.42671 10.83888,-24.30309 24.22808,-24.30309 13.37045,0 24.20933,10.87638 24.20933,24.30309 0,13.4267 -10.83888,24.30308 -24.20933,24.30308 -13.3892,0 -24.22808,-10.87638 -24.22808,-24.30308"
-         id="path3003"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="279.25809"
-         y="192.20834"
-         id="text3005"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map</text>
-      <text
-         x="282.55853"
-         y="204.20988"
-         id="text3007"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(1)</text>
-      <path
-         d="m 266.05878,327.47283 c 0,-13.37044 10.83888,-24.22807 24.22808,-24.22807 13.37045,0 24.20933,10.85763 24.20933,24.22807 0,13.3892 -10.83888,24.22808 -24.20933,24.22808 -13.3892,0 -24.22808,-10.83888 -24.22808,-24.22808"
-         id="path3009"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="279.25809"
-         y="325.44901"
-         id="text3011"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map</text>
-      <text
-         x="282.55853"
-         y="337.45053"
-         id="text3013"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(2)</text>
-      <path
-         d="m 473.2726,194.25592 c 0,-13.42671 10.83887,-24.30309 24.22807,-24.30309 13.37045,0 24.20933,10.87638 24.20933,24.30309 0,13.4267 -10.83888,24.30308 -24.20933,24.30308 -13.3892,0 -24.22807,-10.87638 -24.22807,-24.30308"
-         id="path3015"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="478.6647"
-         y="192.20834"
-         id="text3017"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window</text>
-      <text
-         x="489.76611"
-         y="204.20988"
-         id="text3019"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(1)</text>
-      <path
-         d="m 473.2726,327.47283 c 0,-13.37044 10.83887,-24.22807 24.22807,-24.22807 13.37045,0 24.20933,10.85763 24.20933,24.22807 0,13.3892 -10.83888,24.22808 -24.20933,24.22808 -13.3892,0 -24.22807,-10.83888 -24.22807,-24.22808"
-         id="path3021"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="478.6647"
-         y="325.44901"
-         id="text3023"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window</text>
-      <text
-         x="489.76611"
-         y="337.45053"
-         id="text3025"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(2)</text>
-      <path
-         d="m 159.32023,167.68379 c 0,-1.67834 1.36892,-3.04726 3.04726,-3.04726 l 12.18905,0 c 1.68771,0 3.04726,1.36892 3.04726,3.04726 l 0,12.18905 c 0,1.68771 -1.35955,3.04726 -3.04726,3.04726 l -12.18905,0 c -1.67834,0 -3.04726,-1.35955 -3.04726,-3.04726 z"
-         id="path3027"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="161.9245"
-         y="177.71732"
-         id="text3029"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">33</text>
-      <path
-         d="m 159.32023,302.70094 c 0,-1.66896 1.36892,-3.03789 3.05664,-3.03789 l 12.18905,0 c 1.68771,0 3.03788,1.36893 3.03788,3.03789 l 0,12.18905 c 0,1.68771 -1.35017,3.05663 -3.03788,3.05663 l -12.18905,0 c -1.68772,0 -3.05664,-1.36892 -3.05664,-3.05663 z"
-         id="path3031"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="161.9245"
-         y="312.73444"
-         id="text3033"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">17</text>
-      <path
-         d="m 184.95474,189.95225 64.2269,0 0,-4.21929 8.43857,8.43857 -8.43857,8.43857 0,-4.21928 -64.2269,0 z"
-         id="path3035"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 321.22829,189.96162 137.19242,0 0,-4.21928 8.43857,8.43857 -8.43857,8.43857 0,-4.21929 -137.19242,0 z"
-         id="path3037"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 184.95474,322.16591 64.2269,0 0,-4.21929 8.43857,8.43857 -8.43857,8.43858 0,-4.21929 -64.2269,0 z"
-         id="path3039"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 321.22829,322.16591 137.19242,0 0,-4.21929 8.43857,8.43857 -8.43857,8.43858 0,-4.21929 -137.19242,0 z"
-         id="path3041"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 324.97877,206.16368 136.91113,94.32448 2.40031,-3.48795 2.15652,11.73899 -11.73899,2.15653 2.4003,-3.46919 -136.91113,-94.32448 z"
-         id="path3043"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 325.99139,314.43993 136.89239,-94.32448 2.4003,3.46919 2.15653,-11.73899 -11.73899,-2.15652 2.4003,3.46919 -136.91113,94.32447 z"
-         id="path3045"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 302.62593,167.68379 c 0,-1.66896 1.36892,-3.03788 3.05664,-3.03788 l 12.18904,0 c 1.66897,0 3.03789,1.36892 3.03789,3.03788 l 0,12.18905 c 0,1.68771 -1.36892,3.05664 -3.03789,3.05664 l -12.18904,0 c -1.68772,0 -3.05664,-1.36893 -3.05664,-3.05664 z"
-         id="path3047"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="305.31848"
-         y="177.71732"
-         id="text3049"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">29</text>
-      <path
-         d="m 448.57571,176.59117 c 0,-1.66896 1.36892,-3.03788 3.05664,-3.03788 l 12.18905,0 c 1.68771,0 3.03788,1.36892 3.03788,3.03788 l 0,12.18905 c 0,1.68772 -1.35017,3.05664 -3.03788,3.05664 l -12.18905,0 c -1.68772,0 -3.05664,-1.36892 -3.05664,-3.05664 z"
-         id="path3051"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="451.24167"
-         y="186.67395"
-         id="text3053"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">29</text>
-      <path
-         d="m 302.77595,302.70094 c 0,-1.66896 1.36892,-3.03789 3.05664,-3.03789 l 12.18904,0 c 1.68772,0 3.03789,1.36893 3.03789,3.03789 l 0,12.18905 c 0,1.68771 -1.35017,3.05663 -3.03789,3.05663 l -12.18904,0 c -1.68772,0 -3.05664,-1.36892 -3.05664,-3.05663 z"
-         id="path3055"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="305.4187"
-         y="312.73444"
-         id="text3057"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">17</text>
-      <path
-         d="m 454.98903,216.43998 c 0,-1.66896 1.36892,-3.03788 3.05663,-3.03788 l 12.18905,0 c 1.66896,0 3.03789,1.36892 3.03789,3.03788 l 0,12.18905 c 0,1.68772 -1.36893,3.05664 -3.03789,3.05664 l -12.18905,0 c -1.68771,0 -3.05663,-1.36892 -3.05663,-3.05664 z"
-         id="path3059"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="457.60751"
-         y="226.43639"
-         id="text3061"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
-      <path
-         d="m 454.98903,334.42997 c 0,-1.68772 1.36892,-3.05664 3.05663,-3.05664 l 12.18905,0 c 1.66896,0 3.03789,1.36892 3.03789,3.05664 l 0,12.18904 c 0,1.68772 -1.36893,3.03789 -3.03789,3.03789 l -12.18905,0 c -1.68771,0 -3.05663,-1.35017 -3.05663,-3.03789 z"
-         id="path3063"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="457.60751"
-         y="344.52374"
-         id="text3065"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
-      <path
-         d="m 463.10881,287.54901 c 0,-1.68771 1.36892,-3.05663 3.05663,-3.05663 l 12.18905,0 c 1.68772,0 3.03789,1.36892 3.03789,3.05663 l 0,12.18905 c 0,1.68772 -1.35017,3.03789 -3.03789,3.03789 l -12.18905,0 c -1.68771,0 -3.05663,-1.35017 -3.05663,-3.03789 z"
-         id="path3067"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="465.84369"
-         y="297.68774"
-         id="text3069"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">29</text>
-      <path
-         d="m 509.83974,302.70094 c 0,-1.66896 1.36892,-3.03789 3.05664,-3.03789 l 12.18905,0 c 1.66896,0 3.03788,1.36893 3.03788,3.03789 l 0,12.18905 c 0,1.68771 -1.36892,3.05663 -3.03788,3.05663 l -12.18905,0 c -1.68772,0 -3.05664,-1.36892 -3.05664,-3.05663 z"
-         id="path3071"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="512.54437"
-         y="312.73444"
-         id="text3073"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
-      <path
-         d="m 509.83974,167.68379 c 0,-1.66896 1.36892,-3.03788 3.05664,-3.03788 l 12.18905,0 c 1.66896,0 3.03788,1.36892 3.03788,3.03788 l 0,12.18905 c 0,1.68771 -1.36892,3.05664 -3.03788,3.05664 l -12.18905,0 c -1.68772,0 -3.05664,-1.36893 -3.05664,-3.05664 z"
-         id="path3075"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="512.55664"
-         y="177.71732"
-         id="text3077"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
-      <path
-         d="m 234.32976,180.73545 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87523 -1.87524,0 0,-1.87523 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87523 -1.87524,0 0,-1.87523 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z"
-         id="path3079"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937619px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="219.60442"
-         y="218.16707"
-         id="text3081"
-         xml:space="preserve"
-         style="font-size:8.70110512px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">W(33)</text>
-      <path
-         d="m 355.11384,273.31596 1.10639,1.51894 -1.51894,1.10639 -1.10639,-1.51894 1.51894,-1.10639 z m 2.21278,3.03788 1.10639,1.51894 -1.51894,1.1064 -1.10639,-1.51895 1.51894,-1.10639 z m 2.21278,3.01914 1.1064,1.51894 -1.51895,1.10639 -1.10639,-1.51894 1.51894,-1.10639 z m 2.21279,3.03788 1.10639,1.51894 -1.51895,1.10639 -1.10639,-1.51894 1.51895,-1.10639 z m 2.19402,3.03789 1.10639,1.50019 -1.50019,1.10639 -1.10639,-1.51895 1.50019,-1.08763 z m 2.21279,3.01913 1.10639,1.51894 -1.50019,1.10639 -1.1064,-1.51894 1.5002,-1.10639 z m 2.21278,3.03789 0.99387,1.35017 -1.50019,1.10639 -1.01263,-1.35017 1.51895,-1.10639 z"
-         id="path3083"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 374.8226,312.15214 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87523 -1.87524,0 0,-1.87523 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z"
-         id="path3085"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="337.06772"
-         y="270.31641"
-         id="text3087"
-         xml:space="preserve"
-         style="font-size:8.70110512px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">W(17)</text>
-      <text
-         x="359.68753"
-         y="351.43448"
-         id="text3089"
-         xml:space="preserve"
-         style="font-size:8.70110512px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">W(17)</text>
-      <path
-         d="m 414.9902,187.14876 c 0,-1.55644 1.25641,-2.81285 2.81285,-2.81285 l 16.08955,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
-         id="path3091"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 414.9902,187.14876 c 0,-1.55644 1.25641,-2.81285 2.81285,-2.81285 l 16.08955,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
-         id="path3093"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="416.82651"
-         y="195.85332"
-         id="text3095"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">A|30</text>
-      <path
-         d="m 526.8669,189.96162 19.07117,0 0,-4.21928 8.43858,8.43857 -8.43858,8.43857 0,-4.21929 -19.07117,0 z"
-         id="path3097"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 526.8669,322.16591 19.07117,0 0,-4.21929 8.43858,8.43857 -8.43858,8.43858 0,-4.21929 -19.07117,0 z"
-         id="path3099"
-         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 359.82069,187.14876 c 0,-1.55644 1.25641,-2.81285 2.81286,-2.81285 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
-         id="path3101"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 359.82069,187.14876 c 0,-1.55644 1.25641,-2.81285 2.81286,-2.81285 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
-         id="path3103"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="361.68771"
-         y="195.85332"
-         id="text3105"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">B|31</text>
-      <path
-         d="m 334.03617,219.02781 c 0,-1.55645 1.25641,-2.81285 2.81285,-2.81285 l 16.08955,0 c 1.55644,0 2.81285,1.2564 2.81285,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
-         id="path3107"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 334.03617,219.02781 c 0,-1.55645 1.25641,-2.81285 2.81285,-2.81285 l 16.08955,0 c 1.55644,0 2.81285,1.2564 2.81285,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
-         id="path3109"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="335.68719"
-         y="227.65872"
-         id="text3111"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">C|30</text>
-      <path
-         d="m 402.48236,241.3619 c 0,-1.5377 1.25641,-2.81286 2.81285,-2.81286 l 16.08955,0 c 1.55645,0 2.81286,1.27516 2.81286,2.81286 l 0,11.25143 c 0,1.55644 -1.25641,2.81285 -2.81286,2.81285 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81285 z"
-         id="path3113"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 402.48236,241.3619 c 0,-1.5377 1.25641,-2.81286 2.81285,-2.81286 l 16.08955,0 c 1.55645,0 2.81286,1.27516 2.81286,2.81286 l 0,11.25143 c 0,1.55644 -1.25641,2.81285 -2.81286,2.81285 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81285 z"
-         id="path3115"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="404.02338"
-         y="250.06659"
-         id="text3117"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">D|15</text>
-      <path
-         d="m 432.01736,286.06758 c 0,-1.55645 1.25641,-2.81286 2.81286,-2.81286 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81286 l 0,11.25142 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
-         id="path3119"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 432.01736,286.06758 c 0,-1.55645 1.25641,-2.81286 2.81286,-2.81286 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81286 l 0,11.25142 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
-         id="path3121"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="434.04199"
-         y="294.68573"
-         id="text3123"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">E|30</text>
-      <path
-         d="m 391.69974,321.37831 c 0,-1.55645 1.25641,-2.81286 2.81286,-2.81286 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81286 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
-         id="path3125"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 391.69974,321.37831 c 0,-1.55645 1.25641,-2.81286 2.81286,-2.81286 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81286 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
-         id="path3127"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="393.75818"
-         y="330.10825"
-         id="text3129"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">F|30</text>
-      <path
-         d="m 325.74761,321.37831 c 0,-1.55645 1.27517,-2.81286 2.81286,-2.81286 l 16.1083,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.1083,0 c -1.53769,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
-         id="path3131"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 325.74761,321.37831 c 0,-1.55645 1.27517,-2.81286 2.81286,-2.81286 l 16.1083,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.1083,0 c -1.53769,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
-         id="path3133"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="327.10162"
-         y="330.10825"
-         id="text3135"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">G|18</text>
-      <path
-         d="m 204.02591,321.37831 c 0,-1.55645 1.25641,-2.81286 2.81285,-2.81286 l 16.08955,0 c 1.55645,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55645 -1.2564,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
-         id="path3137"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 204.02591,321.37831 c 0,-1.55645 1.25641,-2.81286 2.81285,-2.81286 l 16.08955,0 c 1.55645,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55645 -1.2564,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
-         id="path3139"
-         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="205.55592"
-         y="330.10825"
-         id="text3141"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">H|20</text>
-      <path
-         d="m 189.79285,187.13939 c 0,-1.55645 1.26579,-2.81286 2.81286,-2.81286 l 16.09892,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55644 -1.25641,2.81285 -2.81285,2.81285 l -16.09892,0 c -1.54707,0 -2.81286,-1.25641 -2.81286,-2.81285 z"
-         id="path3143"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 189.79285,187.13939 c 0,-1.55645 1.26579,-2.81286 2.81286,-2.81286 l 16.09892,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55644 -1.25641,2.81285 -2.81285,2.81285 l -16.09892,0 c -1.54707,0 -2.81286,-1.25641 -2.81286,-2.81285 z"
-         id="path3145"
-         style="fill:none;stroke:#000000;stroke-width:0.62820476px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="191.56601"
-         y="195.85332"
-         id="text3147"
-         xml:space="preserve"
-         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">B|35</text>
-      <text
-         x="195.19138"
-         y="151.27718"
-         id="text3149"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Watermark</text>
-      <path
-         d="m 227.26948,158.18571 5.54133,16.22081 -1.1814,0.40318 -5.54133,-16.22081 1.1814,-0.40318 z m 6.91026,14.42996 -0.7501,5.54133 -3.98488,-3.92863 4.73498,-1.6127 z"
-         id="path3151"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937619px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="517.86865"
-         y="400.08151"
-         id="text3153"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event Time</text>
-      <text
-         x="506.91727"
-         y="413.58322"
-         id="text3155"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">at the operator</text>
-      <text
-         x="375.68878"
-         y="140.82939"
-         id="text3157"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event</text>
-      <text
-         x="353.63599"
-         y="153.13097"
-         id="text3159"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[</text>
-      <text
-         x="358.13657"
-         y="153.13097"
-         id="text3161"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">id|timestamp</text>
-      <text
-         x="425.19507"
-         y="153.13097"
-         id="text3163"
-         xml:space="preserve"
-         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">]</text>
-      <path
-         d="m 375.29141,161.458 -1.65021,16.40834 1.23765,0.13126 1.65021,-16.42708 -1.23765,-0.11252 z m -3.39419,14.98315 1.98776,5.21317 2.98163,-4.7256 -4.96939,-0.48757 z"
-         id="path3165"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 549.12598,384.68635 -22.37159,-63.15802 1.1814,-0.41255 22.37159,63.13926 -1.1814,0.43131 z m -23.72176,-61.35779 0.69383,-5.55071 4.03177,3.88174 -4.7256,1.66897 z"
-         id="path3167"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 505.15164,404.65763 -180.37915,-49.78757 -8.96364,-33.77304 1.20015,-0.31879 8.88863,33.41675 -0.45005,-0.45006 180.02286,49.69381 -0.3188,1.2189 z m -190.84298,-81.87289 1.12514,-5.4757 3.71298,4.20054 -4.83812,1.27516 z"
-         id="path3169"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="502.02127"
-         y="254.94814"
-         id="text3171"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event Time</text>
-      <text
-         x="487.3194"
-         y="268.44983"
-         id="text3173"
-         xml:space="preserve"
-         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">at input streams</text>
-      <path
-         d="m 513.4777,274.59112 -39.69879,53.01298 0.99388,0.75009 39.69879,-53.01298 -0.99388,-0.75009 z m -40.44888,50.87521 -1.01263,5.5132 5.00688,-2.51282 -3.99425,-3.00038 z"
-         id="path3175"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 510.42106,270.05304 -26.85341,15.3207 0.61883,1.08764 26.85341,-15.3207 -0.61883,-1.08764 z m -26.70339,13.07041 -3.09414,4.65059 5.56946,-0.30004 -2.47532,-4.35055 z"
-         id="path3177"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-    </g>
-  </g>
-</svg>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fig/rescale.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fig/rescale.svg b/docs/apis/streaming/fig/rescale.svg
deleted file mode 100644
index 43eeae9..0000000
--- a/docs/apis/streaming/fig/rescale.svg
+++ /dev/null
@@ -1,472 +0,0 @@
- [SVG markup omitted (472 deleted lines, including the Apache license header). Recoverable figure content: two identical groups, each with one "Src" node, three "Map" nodes, and one "Snk" node joined by arrows, illustrating rescale partitioning.]

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fig/savepoints-overview.png
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fig/savepoints-overview.png b/docs/apis/streaming/fig/savepoints-overview.png
deleted file mode 100644
index c0e7563..0000000
Binary files a/docs/apis/streaming/fig/savepoints-overview.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fig/savepoints-program_ids.png
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fig/savepoints-program_ids.png b/docs/apis/streaming/fig/savepoints-program_ids.png
deleted file mode 100644
index cc161ef..0000000
Binary files a/docs/apis/streaming/fig/savepoints-program_ids.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/fig/stream_watermark_in_order.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/fig/stream_watermark_in_order.svg b/docs/apis/streaming/fig/stream_watermark_in_order.svg
deleted file mode 100644
index dcdbbc6..0000000
--- a/docs/apis/streaming/fig/stream_watermark_in_order.svg
+++ /dev/null
@@ -1,314 +0,0 @@
- [SVG markup omitted (314 deleted lines, including the Apache license header). Recoverable figure content: heading "Stream (in order)"; event boxes with timestamps 23, 21, 20, 19, 18, 17, 15, 14, 11, 10, 9, 9, 7; watermarks "W(20)" and "W(11)"; annotations "Watermark", "Event", and "Event timestamp".]


[84/89] [abbrv] flink git commit: [FLINK-4434] [rpc] Add a testing RPC service.

Posted by se...@apache.org.
[FLINK-4434] [rpc] Add a testing RPC service.

This closes #2394.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/728f2661
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/728f2661
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/728f2661

Branch: refs/heads/flip-6
Commit: 728f2661c1b5e24dba74c9061fc57431e58e4ede
Parents: 2452014
Author: Stephan Ewen <se...@apache.org>
Authored: Fri Aug 19 23:29:45 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:04 2016 +0200

----------------------------------------------------------------------
 .../flink/runtime/rpc/RpcCompletenessTest.java  |   3 +
 .../flink/runtime/rpc/TestingGatewayBase.java   |  85 ++++++++++++++
 .../flink/runtime/rpc/TestingRpcService.java    | 115 +++++++++++++++++++
 3 files changed, 203 insertions(+)
----------------------------------------------------------------------
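
For context, here is a sketch of how the TestingGatewayBase added in this
commit (diff below) might be used from a test. The gateway name and the
requestValue() method are hypothetical, invented for illustration; only
futureWithTimeout() and stop() are taken from the committed code, and the
body of the RpcGateway interface is not part of this excerpt.

    import org.apache.flink.runtime.rpc.TestingGatewayBase;

    import scala.concurrent.Future;

    // Hypothetical test stub simulating an RPC endpoint that never answers.
    // (If RpcGateway declares methods of its own, a real subclass would have
    // to implement them as well; its definition is not shown here.)
    class UnresponsiveGateway extends TestingGatewayBase {

        // futureWithTimeout() returns a future that nothing ever completes;
        // the base class schedules it to fail with a TimeoutException after
        // the given delay on its single-threaded scheduled executor.
        public Future<Integer> requestValue() {
            return this.<Integer>futureWithTimeout(10000L);
        }
    }

A test would hand such a gateway to the component under test, assert that the
eventual TimeoutException is handled gracefully, and finally call stop() to
shut down the gateway's internal scheduled executor.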


http://git-wip-us.apache.org/repos/asf/flink/blob/728f2661/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
index 97cf0cb..b8aad62 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
@@ -41,9 +41,11 @@ import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 public class RpcCompletenessTest extends TestLogger {
+
 	private static final Class<?> futureClass = Future.class;
 
 	@Test
+	@SuppressWarnings({"rawtypes", "unchecked"})
 	public void testRpcCompleteness() {
 		Reflections reflections = new Reflections("org.apache.flink");
 
@@ -64,6 +66,7 @@ public class RpcCompletenessTest extends TestLogger {
 		}
 	}
 
+	@SuppressWarnings("rawtypes")
 	private void checkCompleteness(Class<? extends RpcEndpoint> rpcEndpoint, Class<? extends RpcGateway> rpcGateway) {
 		Method[] gatewayMethods = rpcGateway.getDeclaredMethods();
 		Method[] serverMethods = rpcEndpoint.getDeclaredMethods();

http://git-wip-us.apache.org/repos/asf/flink/blob/728f2661/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java
new file mode 100644
index 0000000..4256135
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import akka.dispatch.Futures;
+import scala.concurrent.Future;
+import scala.concurrent.Promise;
+
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Utility base class for testing gateways
+ */
+public abstract class TestingGatewayBase implements RpcGateway {
+
+	private final ScheduledExecutorService executor;
+
+	protected TestingGatewayBase() {
+		this.executor = Executors.newSingleThreadScheduledExecutor();
+	}
+
+	// ------------------------------------------------------------------------
+	//  shutdown
+	// ------------------------------------------------------------------------
+
+	public void stop() {
+		executor.shutdownNow();
+	}
+
+	@Override
+	protected void finalize() throws Throwable {
+		super.finalize();
+		executor.shutdownNow();
+	}
+
+	// ------------------------------------------------------------------------
+	//  utilities
+	// ------------------------------------------------------------------------
+
+	public <T> Future<T> futureWithTimeout(long timeoutMillis) {
+		Promise<T> promise = Futures.<T>promise();
+		executor.schedule(new FutureTimeout(promise), timeoutMillis, TimeUnit.MILLISECONDS);
+		return promise.future();
+	}
+
+	// ------------------------------------------------------------------------
+	
+	private static final class FutureTimeout implements Runnable {
+
+		private final Promise<?> promise;
+
+		private FutureTimeout(Promise<?> promise) {
+			this.promise = promise;
+		}
+
+		@Override
+		public void run() {
+			try {
+				promise.failure(new TimeoutException());
+			} catch (Throwable t) {
+				System.err.println("CAUGHT AN ERROR IN THE TEST: " + t.getMessage());
+				t.printStackTrace();
+			}
+		}
+	}
+}
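
As a quick illustration of how this base class is meant to be used, a
hypothetical concrete test gateway could look like the following sketch.
The UnresponsiveTaskGateway name and its submitTask() method are invented
here; only TestingGatewayBase and futureWithTimeout() come from the file
above, and the class is kept abstract so that whatever the RpcGateway
interface itself requires can stay unimplemented.

    package org.apache.flink.runtime.rpc;

    import scala.concurrent.Future;

    // Hypothetical test gateway: the RPC-style method below is never answered,
    // so callers observe the TimeoutException that TestingGatewayBase schedules.
    public abstract class UnresponsiveTaskGateway extends TestingGatewayBase {

        public Future<String> submitTask(String taskName) {
            // the returned future fails with a TimeoutException after 100 ms
            return futureWithTimeout(100);
        }
    }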

http://git-wip-us.apache.org/repos/asf/flink/blob/728f2661/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingRpcService.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingRpcService.java
new file mode 100644
index 0000000..7e92e8d
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingRpcService.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import akka.dispatch.Futures;
+import akka.util.Timeout;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.rpc.akka.AkkaRpcService;
+
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * An RPC Service implementation for testing. This RPC service acts as a replacement for
+ * the regular RPC service for cases where tests need to return prepared mock gateways instead of
+ * proper RPC gateways.
+ * 
+ * <p>The TestingRpcService can be used for example in the following fashion,
+ * using <i>Mockito</i> for mocks and verification:
+ * 
+ * <pre>{@code
+ * TestingRpcService rpc = new TestingRpcService();
+ *
+ * ResourceManagerGateway testGateway = mock(ResourceManagerGateway.class);
+ * rpc.registerGateway("myAddress", testGateway);
+ * 
+ * MyComponentToTest component = new MyComponentToTest();
+ * component.triggerSomethingThatCallsTheGateway();
+ * 
+ * verify(testGateway, timeout(1000)).theTestMethod(any(UUID.class), anyString());
+ * }</pre>
+ */
+public class TestingRpcService extends AkkaRpcService {
+
+	/** Map of pre-registered connections */
+	private final ConcurrentHashMap<String, RpcGateway> registeredConnections;
+
+	/**
+	 * Creates a new {@code TestingRpcService}. 
+	 */
+	public TestingRpcService() {
+		this(new Configuration());
+	}
+
+	/**
+	 * Creates a new {@code TestingRpcService}, using the given configuration. 
+	 */
+	public TestingRpcService(Configuration configuration) {
+		super(AkkaUtils.createLocalActorSystem(configuration), new Timeout(new FiniteDuration(10, TimeUnit.SECONDS)));
+
+		this.registeredConnections = new ConcurrentHashMap<>();
+	}
+
+	// ------------------------------------------------------------------------
+
+	@Override
+	public void stopService() {
+		super.stopService();
+		registeredConnections.clear();
+	}
+
+	// ------------------------------------------------------------------------
+	// connections
+	// ------------------------------------------------------------------------
+
+	public void registerGateway(String address, RpcGateway gateway) {
+		checkNotNull(address);
+		checkNotNull(gateway);
+
+		if (registeredConnections.putIfAbsent(address, gateway) != null) {
+			throw new IllegalStateException("a gateway is already registered under " + address);
+		}
+	}
+
+	@Override
+	public <C extends RpcGateway> Future<C> connect(String address, Class<C> clazz) {
+		RpcGateway gateway = registeredConnections.get(address);
+
+		if (gateway != null) {
+			if (clazz.isAssignableFrom(gateway.getClass())) {
+				@SuppressWarnings("unchecked")
+				C typedGateway = (C) gateway;
+				return Futures.successful(typedGateway);
+			} else {
+				return Futures.failed(
+						new Exception("Gateway registered under " + address + " is not of type " + clazz));
+			}
+		} else {
+			return Futures.failed(new Exception("No gateway registered under that name"));
+		}
+	}
+}
\ No newline at end of file
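
A minimal end-to-end sketch of the pattern described in the class comment,
assuming Mockito and JUnit on the test classpath and using
ResourceManagerGateway as a stand-in for whichever gateway interface a test
needs:

    import org.junit.Test;

    import scala.concurrent.Await;
    import scala.concurrent.Future;
    import scala.concurrent.duration.FiniteDuration;

    import java.util.concurrent.TimeUnit;

    import static org.junit.Assert.assertSame;
    import static org.mockito.Mockito.mock;

    public class TestingRpcServiceSketch {

        @Test
        public void connectReturnsRegisteredGateway() throws Exception {
            TestingRpcService rpc = new TestingRpcService();
            try {
                ResourceManagerGateway testGateway = mock(ResourceManagerGateway.class);
                rpc.registerGateway("rm", testGateway);

                // connect() resolves against the registered map; no actual RPC happens
                Future<ResourceManagerGateway> future =
                        rpc.connect("rm", ResourceManagerGateway.class);
                ResourceManagerGateway gateway =
                        Await.result(future, new FiniteDuration(10, TimeUnit.SECONDS));

                assertSame(testGateway, gateway);
            } finally {
                rpc.stopService(); // also clears the registered connections
            }
        }
    }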


[55/89] [abbrv] flink git commit: [FLINK-4253] [config] Clean up renaming of 'recovery.mode'

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala
----------------------------------------------------------------------
diff --git a/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala b/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala
index 5dd4188..fa3135a 100644
--- a/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala
+++ b/flink-test-utils-parent/flink-test-utils/src/main/scala/org/apache/flink/test/util/ForkableFlinkMiniCluster.scala
@@ -263,7 +263,7 @@ class ForkableFlinkMiniCluster(
   override def start(): Unit = {
     val zookeeperURL = configuration.getString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, "")
 
-    zookeeperCluster = if (recoveryMode == HighAvailabilityMode.ZOOKEEPER &&
+    zookeeperCluster = if (haMode == HighAvailabilityMode.ZOOKEEPER &&
       zookeeperURL.equals("")) {
       LOG.info("Starting ZooKeeper cluster.")
 

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java b/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java
index 22bf62a..cc8ab80 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/recovery/ChaosMonkeyITCase.java
@@ -148,7 +148,7 @@ public class ChaosMonkeyITCase extends TestLogger {
 		// -----------------------------------------------------------------------------------------
 
 		// Setup
-		Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(
+		Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(
 				ZooKeeper.getConnectString(), FileStateBackendBasePath.toURI().toString());
 
 		// Akka and restart timeouts
@@ -564,7 +564,7 @@ public class ChaosMonkeyITCase extends TestLogger {
 			fail(fsCheckpoints + " does not exist: " + Arrays.toString(FileStateBackendBasePath.listFiles()));
 		}
 
-		File fsRecovery = new File(new URI(config.getString(ConfigConstants.ZOOKEEPER_HA_PATH, "")).getPath());
+		File fsRecovery = new File(new URI(config.getString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, "")).getPath());
 
 		LOG.info("Checking " + fsRecovery);
 

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHACheckpointRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHACheckpointRecoveryITCase.java b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHACheckpointRecoveryITCase.java
index f66e52c..49eaeb7 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHACheckpointRecoveryITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHACheckpointRecoveryITCase.java
@@ -160,7 +160,7 @@ public class JobManagerHACheckpointRecoveryITCase extends TestLogger {
 
 		JobGraph jobGraph = env.getStreamGraph().getJobGraph();
 
-		Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(ZooKeeper
+		Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(ZooKeeper
 				.getConnectString(), FileStateBackendBasePath.getAbsoluteFile().toURI().toString());
 		config.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, Parallelism);
 
@@ -311,7 +311,7 @@ public class JobManagerHACheckpointRecoveryITCase extends TestLogger {
 		final String zooKeeperQuorum = ZooKeeper.getConnectString();
 		final String fileStateBackendPath = FileStateBackendBasePath.getAbsoluteFile().toString();
 
-		Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(
+		Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(
 				zooKeeperQuorum,
 				fileStateBackendPath);
 

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java
index e0e165d..bf39c4b 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAJobGraphRecoveryITCase.java
@@ -125,7 +125,7 @@ public class JobManagerHAJobGraphRecoveryITCase extends TestLogger {
 	 */
 	@Test
 	public void testJobPersistencyWhenJobManagerShutdown() throws Exception {
-		Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(
+		Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(
 				ZooKeeper.getConnectString(), FileStateBackendBasePath.getPath());
 
 		// Configure the cluster
@@ -172,7 +172,7 @@ public class JobManagerHAJobGraphRecoveryITCase extends TestLogger {
 	 */
 	@Test
 	public void testSubmitJobToNonLeader() throws Exception {
-		Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(
+		Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(
 				ZooKeeper.getConnectString(), FileStateBackendBasePath.getPath());
 
 		// Configure the cluster
@@ -257,7 +257,7 @@ public class JobManagerHAJobGraphRecoveryITCase extends TestLogger {
 	 */
 	@Test
 	public void testClientNonDetachedListeningBehaviour() throws Exception {
-		Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(
+		Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(
 				ZooKeeper.getConnectString(), FileStateBackendBasePath.getPath());
 
 		// Test actor system

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java
index 0c52204..9b0d9de 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureBatchRecoveryITCase.java
@@ -149,7 +149,7 @@ public class JobManagerHAProcessFailureBatchRecoveryITCase extends TestLogger {
 	 */
 	public void testJobManagerFailure(String zkQuorum, final File coordinateDir) throws Exception {
 		Configuration config = new Configuration();
-		config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
+		config.setString(ConfigConstants.HA_MODE, "ZOOKEEPER");
 		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, zkQuorum);
 
 		ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
@@ -249,7 +249,7 @@ public class JobManagerHAProcessFailureBatchRecoveryITCase extends TestLogger {
 			coordinateTempDir = createTempDirectory();
 
 			// Job Managers
-			Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(
+			Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(
 					ZooKeeper.getConnectString(), FileStateBackendBasePath.getPath());
 
 			// Start first process

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java b/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java
index 7091339..9bd8cc3 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/runtime/leaderelection/ZooKeeperLeaderElectionITCase.java
@@ -91,11 +91,11 @@ public class ZooKeeperLeaderElectionITCase extends TestLogger {
 		int numJMs = 10;
 		int numTMs = 3;
 
-		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_MODE, "zookeeper");
 		configuration.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, numJMs);
 		configuration.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, numTMs);
 		configuration.setString(ConfigConstants.STATE_BACKEND, "filesystem");
-		configuration.setString(ConfigConstants.ZOOKEEPER_HA_PATH, tempDirectory.getAbsoluteFile().toURI().toString());
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, tempDirectory.getAbsoluteFile().toURI().toString());
 
 		ForkableFlinkMiniCluster cluster = new ForkableFlinkMiniCluster(configuration);
 
@@ -139,12 +139,12 @@ public class ZooKeeperLeaderElectionITCase extends TestLogger {
 
 		Configuration configuration = new Configuration();
 
-		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_MODE, "zookeeper");
 		configuration.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, numJMs);
 		configuration.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, numTMs);
 		configuration.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, numSlotsPerTM);
 		configuration.setString(ConfigConstants.STATE_BACKEND, "filesystem");
-		configuration.setString(ConfigConstants.ZOOKEEPER_HA_PATH, tempDirectory.getAbsoluteFile().toURI().toString());
+		configuration.setString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, tempDirectory.getAbsoluteFile().toURI().toString());
 
 		// we "effectively" disable the automatic RecoverAllJobs message and sent it manually to make
 		// sure that all TMs have registered to the JM prior to issueing the RecoverAllJobs message

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java
----------------------------------------------------------------------
diff --git a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java
index 25dbe53..a293348 100644
--- a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java
+++ b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNHighAvailabilityITCase.java
@@ -115,7 +115,7 @@ public class YARNHighAvailabilityITCase extends YarnTestBase {
 			zkServer.getConnectString() + "@@yarn.application-attempts=" + numberApplicationAttempts +
 			"@@" + ConfigConstants.STATE_BACKEND + "=FILESYSTEM" +
 			"@@" + FsStateBackendFactory.CHECKPOINT_DIRECTORY_URI_CONF_KEY + "=" + fsStateHandlePath + "/checkpoints" +
-			"@@" + ConfigConstants.ZOOKEEPER_HA_PATH + "=" + fsStateHandlePath + "/recovery");
+			"@@" + ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH + "=" + fsStateHandlePath + "/recovery");
 		flinkYarnClient.setConfigurationFilePath(new Path(confDirPath + File.separator + "flink-conf.yaml"));
 
 		ClusterClient yarnCluster = null;
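
For reference, the renamed constants fit together as in the sketch below, a
plausible shape for what ZooKeeperTestUtils.createZooKeeperHAConfig sets (the
actual utility may set further keys; the quorum and storage path are supplied
by the caller):

    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;

    public final class ZooKeeperHAConfigSketch {

        // ZooKeeper HA configuration using the renamed keys.
        public static Configuration createZooKeeperHAConfig(String zkQuorum, String storagePath) {
            Configuration config = new Configuration();
            config.setString(ConfigConstants.HA_MODE, "zookeeper");
            config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, zkQuorum);
            config.setString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, storagePath);
            config.setString(ConfigConstants.STATE_BACKEND, "filesystem");
            return config;
        }
    }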


[66/89] [abbrv] flink git commit: [FLINK-3899] [docs] Add examples for incremental window computation.

Posted by se...@apache.org.
[FLINK-3899] [docs] Add examples for incremental window computation.

This closes #2368


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/d16dcd20
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/d16dcd20
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/d16dcd20

Branch: refs/heads/flip-6
Commit: d16dcd205691664b55aee324ced61b324ee3e28d
Parents: addad1a
Author: danielblazevski <da...@gmail.com>
Authored: Sat Aug 13 18:57:18 2016 -0400
Committer: Fabian Hueske <fh...@apache.org>
Committed: Thu Aug 25 14:44:54 2016 +0200

----------------------------------------------------------------------
 docs/dev/windows.md | 147 ++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 120 insertions(+), 27 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/d16dcd20/docs/dev/windows.md
----------------------------------------------------------------------
diff --git a/docs/dev/windows.md b/docs/dev/windows.md
index 63505c3..67280dd 100644
--- a/docs/dev/windows.md
+++ b/docs/dev/windows.md
@@ -486,49 +486,142 @@ class MyWindowFunction extends WindowFunction[(String, Long), String, String, Ti
 
 ### WindowFunction with Incremental Aggregation
 
-A `WindowFunction` can be combined with either a `ReduceFunction` or a `FoldFunction`. When doing
-this, the `ReduceFunction`/`FoldFunction` will be used to incrementally aggregate elements as they
-arrive while the `WindowFunction` will be provided with the aggregated result when the window is
-ready for processing. This allows to get the benefit of incremental window computation and also have
-the additional meta information that writing a `WindowFunction` provides.
+A `WindowFunction` can be combined with either a `ReduceFunction` or a `FoldFunction` to 
+incrementally aggregate elements as they arrive in the window. 
+When the window is closed, the `WindowFunction` will be provided with the aggregated result. 
+This makes it possible to compute windows incrementally while still having
+access to the additional window meta information of the `WindowFunction`.
 
-This is an example that shows how incremental aggregation functions can be combined with
-a `WindowFunction`.
+#### Incremental Window Aggregation with FoldFunction
+
+The following example shows how an incremental `FoldFunction` can be combined with
+a `WindowFunction` to extract the number of events in the window and to return
+the key and end time of the window as well. 
+
+Please note that the use of a `FoldFunction` in combination with `WindowFunction` is
+restricted in that the types of the `Iterable` and `Collector` arguments in
+`WindowFunction` must both correspond to the type of the accumulator in the `FoldFunction`.
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 {% highlight java %}
-DataStream<Tuple2<String, Long>> input = ...;
+DataStream<SensorReading> input = ...;
 
-// for folding incremental computation
 input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(<initial value>, new MyFoldFunction(), new MyWindowFunction());
+  .keyBy(<key selector>)
+  .timeWindow(<window assigner>)
+  .apply(new Tuple3<String, Long, Integer>("",0L, 0), new MyFoldFunction(), new MyWindowFunction())
+
+// Function definitions
+
+private static class MyFoldFunction
+    implements FoldFunction<SensorReading, Tuple3<String, Long, Integer> > {
+
+  public Tuple3<String, Long, Integer> fold(Tuple3<String, Long, Integer> acc, SensorReading s) {
+      Integer cur = acc.getField(2);
+      acc.setField(2, cur + 1);
+      return acc;
+  }
+}
+
+private static class MyWindowFunction 
+    implements WindowFunction<Tuple3<String, Long, Integer>, Tuple3<String, Long, Integer>, String, TimeWindow> {
+  
+  public void apply(String key,
+                    TimeWindow window,
+                    Iterable<Tuple3<String, Long, Integer>> counts,
+                    Collector<Tuple3<String, Long, Integer>> out) {
+    Integer count = counts.iterator().next().getField(2);
+    out.collect(new Tuple3<String, Long, Integer>(key, window.getEnd(),count));
+  }
+}
 
-// for reducing incremental computation
-input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(new MyReduceFunction(), new MyWindowFunction());
 {% endhighlight %}
 </div>
-
 <div data-lang="scala" markdown="1">
 {% highlight scala %}
-val input: DataStream[(String, Long)] = ...
 
-// for folding incremental computation
+val input: DataStream[SensorReading] = ...
+
+input
+ .keyBy(<key selector>)
+ .timeWindow(<window assigner>)
+ .apply (
+    ("", 0L, 0), 
+    (acc: (String, Long, Int), r: SensorReading) => { ("", 0L, acc._3 + 1) },
+    ( key: String, 
+      window: TimeWindow, 
+      counts: Iterable[(String, Long, Int)],
+      out: Collector[(String, Long, Int)] ) => 
+      {
+        val count = counts.iterator.next()
+        out.collect((key, window.getEnd, count._3))
+      }
+  )
+
+{% endhighlight %}
+</div>
+</div>
+
+#### Incremental Window Aggregation with ReduceFunction
+
+The following example shows how an incremental `ReduceFunction` can be combined with
+a `WindowFunction` to return the smallest event in a window along 
+with the start time of the window.  
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<SensorReading> input = ...;
+
 input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(<initial value>, new MyFoldFunction(), new MyWindowFunction())
+  .keyBy(<key selector>)
+  .timeWindow(<window assigner>)
+  .apply(new MyReduceFunction(), new MyWindowFunction());
+
+// Function definitions
+
+private static class MyReduceFunction implements ReduceFunction<SensorReading> {
+
+  public SensorReading reduce(SensorReading r1, SensorReading r2) {
+      return r1.value() > r2.value() ? r2 : r1;
+  }
+}
+
+private static class MyWindowFunction 
+    implements WindowFunction<SensorReading, Tuple2<Long, SensorReading>, String, TimeWindow> {
+  
+  public void apply(String key,
+                    TimeWindow window,
+                    Iterable<SensorReading> minReadings,
+                    Collector<Tuple2<Long, SensorReading>> out) {
+      SensorReading min = minReadings.iterator().next();
+      out.collect(new Tuple2<Long, SensorReading>(window.getStart(), min));
+  }
+}
+
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+
+val input: DataStream[SensorReading] = ...
 
-// for reducing incremental computation
 input
-    .keyBy(<key selector>)
-    .window(<window assigner>)
-    .apply(new MyReduceFunction(), new MyWindowFunction())
+  .keyBy(<key selector>)
+  .timeWindow(<window assigner>)
+  .apply(
+    (r1: SensorReading, r2: SensorReading) => { if (r1.value > r2.value) r2 else r1 },
+    ( key: String, 
+      window: TimeWindow, 
+      minReadings: Iterable[SensorReading], 
+      out: Collector[(Long, SensorReading)] ) => 
+      {
+        val min = minReadings.iterator.next()
+        out.collect((window.getStart, min))
+      }
+  )
+  
 {% endhighlight %}
 </div>
 </div>


[87/89] [abbrv] flink git commit: [FLINK-4443] [rpc] Add support for rpc gateway and rpc endpoint inheritance

Posted by se...@apache.org.
[FLINK-4443] [rpc] Add support for rpc gateway and rpc endpoint inheritance

This commit extends the RpcCompletenessTest such that it can now check for inherited
remote procedure calls. All methods defined on the RpcGateway interface itself are
considered native. This means that they need no RpcEndpoint counterpart because they
are implemented by the RpcGateway implementation.

This closes #2401.

update comments

remove native method annotation

add line break


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/4515c856
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/4515c856
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/4515c856

Branch: refs/heads/flip-6
Commit: 4515c8563c21d3e8705f95dc195630efe4162d53
Parents: 9923b5e
Author: wenlong.lwl <we...@alibaba-inc.com>
Authored: Sun Aug 21 00:46:51 2016 +0800
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:05 2016 +0200

----------------------------------------------------------------------
 .../org/apache/flink/runtime/rpc/RpcMethod.java |  2 ++
 .../TestingHighAvailabilityServices.java        | 19 +++++++++++
 .../flink/runtime/rpc/RpcCompletenessTest.java  | 33 ++++++++++++++++++--
 3 files changed, 52 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/4515c856/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java
index 875e557..e4b0e94 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcMethod.java
@@ -19,6 +19,7 @@
 package org.apache.flink.runtime.rpc;
 
 import java.lang.annotation.ElementType;
+import java.lang.annotation.Inherited;
 import java.lang.annotation.Retention;
 import java.lang.annotation.RetentionPolicy;
 import java.lang.annotation.Target;
@@ -29,6 +30,7 @@ import java.lang.annotation.Target;
  * RpcCompletenessTest makes sure that the set of rpc methods in a rpc server and the set of
  * gateway methods in the corresponding gateway implementation are identical.
  */
+@Inherited
 @Target(ElementType.METHOD)
 @Retention(RetentionPolicy.RUNTIME)
 public @interface RpcMethod {

http://git-wip-us.apache.org/repos/asf/flink/blob/4515c856/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java b/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java
index 3a9f943..4d654a3 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java
@@ -18,6 +18,8 @@
 
 package org.apache.flink.runtime.highavailability;
 
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.leaderelection.LeaderElectionService;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
 
 /**
@@ -28,6 +30,8 @@ public class TestingHighAvailabilityServices implements HighAvailabilityServices
 
 	private volatile LeaderRetrievalService resourceManagerLeaderRetriever;
 
+	private volatile LeaderElectionService jobMasterLeaderElectionService;
+
 
 	// ------------------------------------------------------------------------
 	//  Setters for mock / testing implementations
@@ -36,6 +40,10 @@ public class TestingHighAvailabilityServices implements HighAvailabilityServices
 	public void setResourceManagerLeaderRetriever(LeaderRetrievalService resourceManagerLeaderRetriever) {
 		this.resourceManagerLeaderRetriever = resourceManagerLeaderRetriever;
 	}
+
+	public void setJobMasterLeaderElectionService(LeaderElectionService leaderElectionService) {
+		this.jobMasterLeaderElectionService = leaderElectionService;
+	}
 	
 	// ------------------------------------------------------------------------
 	//  HA Services Methods
@@ -50,4 +58,15 @@ public class TestingHighAvailabilityServices implements HighAvailabilityServices
 			throw new IllegalStateException("ResourceManagerLeaderRetriever has not been set");
 		}
 	}
+
+	@Override
+	public LeaderElectionService getJobMasterLeaderElectionService(JobID jobID) throws Exception {
+		LeaderElectionService service = jobMasterLeaderElectionService;
+
+		if (service != null) {
+			return service;
+		} else {
+			throw new IllegalStateException("JobMasterLeaderElectionService has not been set");
+		}
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/4515c856/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
index b8aad62..b431eb9 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
@@ -68,8 +68,8 @@ public class RpcCompletenessTest extends TestLogger {
 
 	@SuppressWarnings("rawtypes")
 	private void checkCompleteness(Class<? extends RpcEndpoint> rpcEndpoint, Class<? extends RpcGateway> rpcGateway) {
-		Method[] gatewayMethods = rpcGateway.getDeclaredMethods();
-		Method[] serverMethods = rpcEndpoint.getDeclaredMethods();
+		Method[] gatewayMethods = getRpcMethodsFromGateway(rpcGateway).toArray(new Method[0]);
+		Method[] serverMethods = rpcEndpoint.getMethods();
 
 		Map<String, Set<Method>> rpcMethods = new HashMap<>();
 		Set<Method> unmatchedRpcMethods = new HashSet<>();
@@ -340,4 +340,33 @@ public class RpcCompletenessTest extends TestLogger {
 			throw new RuntimeException("Could not retrieve basic type information for primitive type " + primitveType + '.');
 		}
 	}
+
+	/**
+	 * Extract all rpc methods defined by the gateway interface
+	 *
+	 * @param interfaceClass the given rpc gateway interface
+	 * @return all methods defined by the given interface
+	 */
+	private List<Method> getRpcMethodsFromGateway(Class<? extends RpcGateway> interfaceClass) {
+		if(!interfaceClass.isInterface()) {
+			fail(interfaceClass.getName() + " is not an interface");
+		}
+
+		ArrayList<Method> allMethods = new ArrayList<>();
+		// Methods defined in RpcGateway are native method
+		if(interfaceClass.equals(RpcGateway.class)) {
+			return allMethods;
+		}
+
+		// Get all methods declared in current interface
+		for(Method method : interfaceClass.getDeclaredMethods()) {
+			allMethods.add(method);
+		}
+
+		// Get all method inherited from super interface
+		for(Class superClass : interfaceClass.getInterfaces()) {
+			allMethods.addAll(getRpcMethodsFromGateway(superClass));
+		}
+		return allMethods;
+	}
 }


[60/89] [abbrv] flink git commit: [FLINK-4453] [docs] Scala code example in Window documentation shows Java

Posted by se...@apache.org.
[FLINK-4453] [docs] Scala code example in Window documentation shows Java

This closes #2411


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/42f65e4b
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/42f65e4b
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/42f65e4b

Branch: refs/heads/flip-6
Commit: 42f65e4b93ef7f71b6252bc9c664bee727fd4278
Parents: 58850f2
Author: Jark Wu <wu...@alibaba-inc.com>
Authored: Wed Aug 24 11:04:25 2016 +0800
Committer: Stephan Ewen <se...@apache.org>
Committed: Wed Aug 24 19:27:28 2016 +0200

----------------------------------------------------------------------
 docs/dev/windows.md | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/42f65e4b/docs/dev/windows.md
----------------------------------------------------------------------
diff --git a/docs/dev/windows.md b/docs/dev/windows.md
index 084a2ee..63505c3 100644
--- a/docs/dev/windows.md
+++ b/docs/dev/windows.md
@@ -409,19 +409,18 @@ public interface WindowFunction<IN, OUT, KEY, W extends Window> extends Function
 
 <div data-lang="scala" markdown="1">
 {% highlight scala %}
-public interface WindowFunction<IN, OUT, KEY, W extends Window> extends Function, Serializable {
+trait WindowFunction[IN, OUT, KEY, W <: Window] extends Function with Serializable {
 
   /**
-   * Evaluates the window and outputs none or several elements.
-   *
-   * @param key The key for which this window is evaluated.
-   * @param window The window that is being evaluated.
-   * @param input The elements in the window being evaluated.
-   * @param out A collector for emitting elements.
-   *
-   * @throws Exception The function may throw exceptions to fail the program and trigger recovery.
-   */
-  void apply(KEY key, W window, Iterable<IN> input, Collector<OUT> out) throws Exception;
+    * Evaluates the window and outputs none or several elements.
+    *
+    * @param key    The key for which this window is evaluated.
+    * @param window The window that is being evaluated.
+    * @param input  The elements in the window being evaluated.
+    * @param out    A collector for emitting elements.
+    * @throws Exception The function may throw exceptions to fail the program and trigger recovery.
+    */
+  def apply(key: KEY, window: W, input: Iterable[IN], out: Collector[OUT])
 }
 {% endhighlight %}
 </div>


[29/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/processes.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/processes.svg b/docs/concepts/fig/processes.svg
deleted file mode 100644
index fe83a9d..0000000
--- a/docs/concepts/fig/processes.svg
+++ /dev/null
@@ -1,749 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="851.09106"
-   height="613.16156"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(50.54889,-225.78139)"
-     id="layer1">
-    <g
-       transform="translate(-391.17389,218.44297)"
-       id="g2989">
-      <path
-         d="m 341.26002,269.37336 0,209.43342 264.44088,0 0,-209.43342 -264.44088,0 z"
-         id="path2991"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 341.26002,269.37336 264.44088,0 0,209.43342 -264.44088,0 z"
-         id="path2993"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="350.33728"
-         y="291.3476"
-         id="text2995"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Flink Program</text>
-      <path
-         d="m 495.68599,390.9599 0,81.43278 105.02616,0 0,-81.43278 -105.02616,0 z"
-         id="path2997"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 495.68599,390.9599 105.02616,0 0,81.43278 -105.02616,0 z"
-         id="path2999"
-         style="fill:none;stroke:#898c92;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="504.73251"
-         y="413.00705"
-         id="text3001"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Client</text>
-      <path
-         d="m 943.285,29.932457 0,251.950263 204.1258,0 0,-251.950263 -204.1258,0 z"
-         id="path3003"
-         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 943.285,29.932457 204.1258,0 0,251.950263 -204.1258,0 z"
-         id="path3005"
-         style="fill:none;stroke:#935f1c;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="952.29791"
-         y="51.877296"
-         id="text3007"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">TaskManager</text>
-      <path
-         d="m 1018.6413,77.306759 0,88.297001 53.9009,0 0,-88.297001 -53.9009,0 z"
-         id="path3009"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 1018.6413,77.306759 53.9009,0 0,88.297001 -53.9009,0 z"
-         id="path3011"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="1029.1053"
-         y="96.706863"
-         id="text3013"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
-      <text
-         x="1031.0559"
-         y="114.71135"
-         id="text3015"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
-      <path
-         d="m 1083.0073,77.306759 0,88.297001 53.9384,0 0,-88.297001 -53.9384,0 z"
-         id="path3017"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 1083.0073,77.306759 53.9384,0 0,88.297001 -53.9384,0 z"
-         id="path3019"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="1093.4702"
-         y="96.706863"
-         id="text3021"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
-      <text
-         x="1095.4207"
-         y="114.71135"
-         id="text3023"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
-      <path
-         d="m 1026.5933,139.90986 c 0,-10.50262 8.5146,-19.01724 19.0172,-19.01724 10.4651,0 18.9797,8.51462 18.9797,19.01724 0,10.4651 -8.5146,18.97972 -18.9797,18.97972 -10.5026,0 -19.0172,-8.51462 -19.0172,-18.97972"
-         id="path3025"
-         style="fill:#be73f1;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 1026.5933,139.90986 c 0,-10.50262 8.5146,-19.01724 19.0172,-19.01724 10.4651,0 18.9797,8.51462 18.9797,19.01724 0,10.4651 -8.5146,18.97972 -18.9797,18.97972 -10.5026,0 -19.0172,-8.51462 -19.0172,-18.97972"
-         id="path3027"
-         style="fill:none;stroke:#724591;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="1032.5719"
-         y="144.36874"
-         id="text3029"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task</text>
-      <path
-         d="m 953.78761,77.306759 0,88.297001 53.75089,0 0,-88.297001 -53.75089,0 z"
-         id="path3031"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 953.78761,77.306759 53.75089,0 0,88.297001 -53.75089,0 z"
-         id="path3033"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="964.16815"
-         y="96.706863"
-         id="text3035"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
-      <text
-         x="966.11865"
-         y="114.71135"
-         id="text3037"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
-      <path
-         d="m 961.58956,139.90986 c 0,-10.50262 8.55213,-19.01724 19.05474,-19.01724 10.54013,0 19.09226,8.51462 19.09226,19.01724 0,10.4651 -8.55213,18.97972 -19.09226,18.97972 -10.50261,0 -19.05474,-8.51462 -19.05474,-18.97972"
-         id="path3039"
-         style="fill:#be73f1;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 961.58956,139.90986 c 0,-10.50262 8.55213,-19.01724 19.05474,-19.01724 10.54013,0 19.09226,8.51462 19.09226,19.01724 0,10.4651 -8.55213,18.97972 -19.09226,18.97972 -10.50261,0 -19.05474,-8.51462 -19.05474,-18.97972"
-         id="path3041"
-         style="fill:none;stroke:#724591;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="967.63464"
-         y="144.36874"
-         id="text3043"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task</text>
-      <path
-         d="m 951.27449,206.714 0,31.7329 188.48441,0 0,-31.7329 -188.48441,0 z"
-         id="path3045"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 951.27449,206.714 188.48441,0 0,31.7329 -188.48441,0 z"
-         id="path3047"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="987.11847"
-         y="227.79962"
-         id="text3049"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Network Manager</text>
-      <path
-         d="m 951.27449,243.28561 0,32.5206 188.48441,0 0,-32.5206 -188.48441,0 z"
-         id="path3051"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 951.27449,243.28561 188.48441,0 0,32.5206 -188.48441,0 z"
-         id="path3053"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="1001.0752"
-         y="264.77148"
-         id="text3055"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Actor System</text>
-      <path
-         d="m 951.27449,170.44246 0,31.73291 188.48441,0 0,-31.73291 -188.48441,0 z"
-         id="path3057"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 951.27449,170.44246 188.48441,0 0,31.73291 -188.48441,0 z"
-         id="path3059"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="967.76367"
-         y="191.52837"
-         id="text3061"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Memory &amp; I/O Manager</text>
-      <path
-         d="m 804.98804,438.48424 0,158.1769 200.52496,0 0,-158.1769 -200.52496,0 z"
-         id="path3063"
-         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 804.98804,438.48424 200.52496,0 0,158.1769 -200.52496,0 z"
-         id="path3065"
-         style="fill:none;stroke:#935f1c;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="814.04663"
-         y="460.43439"
-         id="text3067"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">JobManager</text>
-      <text
-         x="1006.6214"
-         y="17.8258"
-         id="text3069"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(Worker)</text>
-      <text
-         x="782.64081"
-         y="617.72314"
-         id="text3071"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(Master / YARN Application Master)</text>
-      <path
-         d="m 811.2521,517.55394 0,56.45156 89.0847,0 0,-56.45156 -89.0847,0 z"
-         id="path3073"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 811.2521,517.55394 89.0847,0 0,56.45156 -89.0847,0 z"
-         id="path3075"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="816.61139"
-         y="532.03253"
-         id="text3077"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Dataflow Graph</text>
-      <path
-         d="m 820.32936,554.91324 c 0.93774,-2.47561 3.67592,-3.75093 6.15154,-2.85071 2.51312,0.90023 3.78844,3.67592 2.85071,6.15154 -0.90023,2.47561 -3.63841,3.75093 -6.15154,2.85071 -2.47561,-0.90023 -3.75093,-3.67592 -2.85071,-6.15154"
-         id="path3079"
-         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 820.32936,554.91324 c 0.93774,-2.47561 3.67592,-3.75093 6.15154,-2.85071 2.51312,0.90023 3.78844,3.67592 2.85071,6.15154 -0.90023,2.47561 -3.63841,3.75093 -6.15154,2.85071 -2.47561,-0.90023 -3.75093,-3.67592 -2.85071,-6.15154"
-         id="path3081"
-         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 847.97375,550.82472 c 0.4126,-2.58814 2.88822,-4.38859 5.51388,-3.93848 2.62565,0.41261 4.38859,2.88822 3.93848,5.51388 -0.41261,2.58814 -2.88822,4.38859 -5.51388,3.93848 -2.58814,-0.4126 -4.35108,-2.88822 -3.93848,-5.51388"
-         id="path3083"
-         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 847.97375,550.82472 c 0.4126,-2.58814 2.88822,-4.38859 5.51388,-3.93848 2.62565,0.41261 4.38859,2.88822 3.93848,5.51388 -0.41261,2.58814 -2.88822,4.38859 -5.51388,3.93848 -2.58814,-0.4126 -4.35108,-2.88822 -3.93848,-5.51388"
-         id="path3085"
-         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 858.55139,564.47813 c 0.90022,-2.47562 3.67591,-3.75094 6.15153,-2.81321 2.47562,0.90023 3.75093,3.67592 2.85071,6.15154 -0.93773,2.47561 -3.67592,3.75093 -6.18904,2.8132 -2.47562,-0.90023 -3.75094,-3.67592 -2.8132,-6.15153"
-         id="path3087"
-         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 858.55139,564.47813 c 0.90022,-2.47562 3.67591,-3.75094 6.15153,-2.81321 2.47562,0.90023 3.75093,3.67592 2.85071,6.15154 -0.93773,2.47561 -3.67592,3.75093 -6.18904,2.8132 -2.47562,-0.90023 -3.75094,-3.67592 -2.8132,-6.15153"
-         id="path3089"
-         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 882.2948,554.46313 c 0.82521,-2.51313 3.52588,-3.90097 6.07652,-3.07577 2.51312,0.86272 3.86346,3.56339 3.03825,6.07652 -0.8252,2.51312 -3.56339,3.86346 -6.07651,3.03826 -2.51313,-0.82521 -3.86346,-3.52588 -3.03826,-6.03901"
-         id="path3091"
-         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 882.2948,554.46313 c 0.82521,-2.51313 3.52588,-3.90097 6.07652,-3.07577 2.51312,0.86272 3.86346,3.56339 3.03825,6.07652 -0.8252,2.51312 -3.56339,3.86346 -6.07651,3.03826 -2.51313,-0.82521 -3.86346,-3.52588 -3.03826,-6.03901"
-         id="path3093"
-         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 882.06975,538.52166 c 0.86271,-2.51313 3.6384,-3.82595 6.11402,-2.92573 2.51312,0.90022 3.78844,3.63841 2.92573,6.11402 -0.90023,2.51313 -3.63841,3.82596 -6.11403,2.92573 -2.51312,-0.90022 -3.82595,-3.6384 -2.92572,-6.11402"
-         id="path3095"
-         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 882.06975,538.52166 c 0.86271,-2.51313 3.6384,-3.82595 6.11402,-2.92573 2.51312,0.90022 3.78844,3.63841 2.92573,6.11402 -0.90023,2.51313 -3.63841,3.82596 -6.11403,2.92573 -2.51312,-0.90022 -3.82595,-3.6384 -2.92572,-6.11402"
-         id="path3097"
-         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 829.33161,555.17581 15.07875,-2.96324 -0.22505,-1.23781 -15.07876,2.96324 0.22506,1.23781 z m 14.21604,-0.90023 4.4261,-3.41335 -5.36383,-1.50037 0.93773,4.91372 z"
-         id="path3099"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 829.48164,557.61392 25.54387,5.47636 -0.26257,1.2003 -25.54386,-5.47637 0.26256,-1.20029 z m 24.71866,3.37584 4.35109,3.48837 -5.40135,1.38784 1.05026,-4.87621 z"
-         id="path3101"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 867.59114,564.66567 11.6654,-4.65116 -0.45011,-1.16279 -11.70291,4.65116 0.48762,1.16279 z m 11.21529,-2.43811 3.71343,-4.16353 -5.58889,-0.48762 1.87546,4.65115 z"
-         id="path3103"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 857.53863,551.79997 21.11777,1.76294 -0.075,1.2378 -21.11776,-1.76293 0.075,-1.23781 z m 20.02999,-0.22506 4.8012,2.92573 -5.2138,2.06301 0.4126,-4.98874 z"
-         id="path3105"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 856.71343,549.36186 22.01798,-5.70142 -0.30007,-1.2003 -22.01799,5.70142 0.30008,1.2003 z m 21.2678,-3.56339 4.23855,-3.67591 -5.47636,-1.16279 1.23781,4.8387 z"
-         id="path3107"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 811.2521,479.59448 0,32.5206 188.48446,0 0,-32.5206 -188.48446,0 z"
-         id="path3109"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 811.2521,479.59448 188.48446,0 0,32.5206 -188.48446,0 z"
-         id="path3111"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="860.96033"
-         y="501.12317"
-         id="text3113"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Actor System</text>
-      <path
-         d="m 518.8105,428.46924 0,38.76591 76.89415,0 0,-38.76591 -76.89415,0 z"
-         id="path3115"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 518.8105,428.46924 76.89415,0 0,38.76591 -76.89415,0 z"
-         id="path3117"
-         style="fill:none;stroke:#6e7277;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="539.96814"
-         y="445.22284"
-         id="text3119"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Actor</text>
-      <text
-         x="532.46631"
-         y="460.97678"
-         id="text3121"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">System</text>
-      <path
-         d="m 986.57078,487.17137 2.43811,-2.88822 0.97524,0.8252 -2.43811,2.85071 -0.97524,-0.78769 z m 3.26331,-3.82596 2.4006,-2.85071 0.97524,0.7877 -2.4381,2.88822 -0.93774,-0.82521 z m 3.22581,-3.82595 2.4381,-2.85071 0.93774,0.82521 -2.43811,2.85071 -0.93773,-0.82521 z m 3.2258,-3.78844 2.43811,-2.88822 0.93773,0.8252 -2.4006,2.85071 -0.97524,-0.78769 z m 3.26331,-3.82596 1.23779,-1.50037 1.1628,-1.35034 0.9753,0.7877 -1.1628,1.35034 -1.2754,1.53788 -0.93769,-0.82521 z m 3.22579,-3.82595 2.4006,-2.85071 0.9753,0.7877 -2.4382,2.88822 -0.9377,-0.82521 z m 3.2258,-3.82595 1.838,-2.21305 0,0.0375 0.5626,-0.67517 0.9753,0.78769 -0.5627,0.67517 -1.8754,2.21305 -0.9378,-0.8252 z m 3.2258,-3.82596 2.4006,-2.85071 0.9378,0.7877 -2.4006,2.88822 -0.9378,-0.82521 z m 3.1883,-3.82595 2.4006,-2.85071 0,0 0,0 0.9753,0.7877 0,0 -2.4006,2.88822 -0.9753,-0.82521 z m 3.2258,-3.82595 2.4006,-2.88822 0.9378,0.8252 -2.4006,2.85071 -0.9378,-0.78769 z m 3.1883,-3.82596 2.4006,-2.88821 0.9753,0.78769
  -2.4006,2.88822 -0.9753,-0.7877 z m 3.1883,-3.86346 2.4006,-2.88822 0.9753,0.7877 -2.4006,2.88822 -0.9753,-0.7877 z m 3.1883,-3.86346 2.4006,-2.88822 0.9377,0.7877 -2.3631,2.88822 -0.9752,-0.7877 z m 3.1883,-3.86346 0.075,-0.075 0,0 2.2881,-2.8132 0.9752,0.78769 -2.3255,2.8132 -0.038,0.11253 -0.9752,-0.8252 z m 3.1508,-3.86347 2.3631,-2.92572 0.9752,0.78769 -2.3631,2.92573 -0.9752,-0.7877 z m 3.1508,-3.90097 0.3751,-0.45011 0,0 1.9504,-2.43811 0.9753,0.75019 -1.9505,2.47562 -0.3751,0.45011 -0.9752,-0.7877 z m 3.1132,-3.86346 2.3631,-2.96324 0.9753,0.7877 -2.3631,2.92573 -0.9753,-0.75019 z m 3.1508,-3.93848 0.5252,-0.67517 0,0 1.7629,-2.25056 1.0128,0.7877 -1.763,2.25056 -0.5626,0.67517 -0.9753,-0.7877 z m 3.0758,-3.90097 2.3256,-2.96324 0.9752,0.7877 -2.3256,2.92572 -0.9752,-0.75018 z m 3.0758,-3.93848 0.6376,-0.7877 0,0 1.6504,-2.17554 1.0128,0.75018 -1.6879,2.17555 -0.6002,0.8252 -1.0127,-0.78769 z m 3.0757,-3.93849 0.6002,-0.78769 0,0 1.6879,-2.21305 0.9752,0.78769 -1.6879,2.175
 55 -0.6001,0.78769 -0.9753,-0.75019 z m 3.0383,-3.97599 0.5251,-0.71267 0,0 1.7254,-2.28807 1.0128,0.75018 -1.7254,2.28807 -0.5627,0.71268 -0.9752,-0.75019 z m 3.0007,-3.97599 0.4501,-0.60015 0,0 1.8005,-2.4381 1.0127,0.75018 -1.8004,2.43811 -0.4501,0.60015 -1.0128,-0.75019 z m 3.0008,-4.0135 0.3001,-0.45011 0,0 1.9129,-2.58814 1.0128,0.75018 -1.913,2.58815 -0.3376,0.45011 -0.9752,-0.75019 z m 2.9257,-4.05101 0.1876,-0.18754 -0.038,0 2.0631,-2.85071 1.0127,0.75018 -2.063,2.81321 -0.15,0.22505 -1.0128,-0.75019 z m 2.9257,-4.051 2.1756,-3.03826 1.0127,0.71268 -2.1755,3.03825 -1.0128,-0.71267 z m 2.8883,-4.08852 2.138,-3.07577 1.0127,0.71268 -2.138,3.07576 -1.0127,-0.71267 z m 2.8507,-4.08852 1.9129,-2.85071 0,0 0.1876,-0.26257 1.0127,0.71268 -0.15,0.22506 -1.9505,2.88822 -1.0127,-0.71268 z m 2.7757,-4.16354 1.5754,-2.32558 -0.038,0 0.5252,-0.78769 1.0502,0.67516 -0.5251,0.82521 -1.5379,2.32558 -1.0502,-0.71268 z m 2.7381,-4.16354 1.1253,-1.72543 0,0 0.9002,-1.42535 1.0503,0.67517 -0.9
 002,1.42535 -1.1253,1.72543 -1.0503,-0.67517 z m 2.7007,-4.20104 0.6752,-1.08777 0,0.0375 1.2753,-2.13804 1.0878,0.67517 -1.3129,2.10052 -0.6751,1.08778 -1.0503,-0.67517 z m 2.6257,-4.23856 0.1875,-0.33758 0,0 1.7254,-2.88822 1.0878,0.63766 -1.7629,2.92573 -0.1876,0.30007 -1.0502,-0.63766 z m 2.5506,-4.31357 1.6129,-2.85071 0,0 0.2251,-0.3751 1.0877,0.60015 -0.225,0.3751 -1.6129,2.88822 -1.0878,-0.63766 z m 2.4381,-4.31358 1.0503,-1.87546 0,0 0.7502,-1.42536 1.0877,0.60015 -0.7502,1.42535 -1.0502,1.87547 -1.0878,-0.60015 z m 2.4006,-4.38859 0.4126,-0.82521 0,0.0375 1.3128,-2.55063 1.0878,0.56264 -1.3128,2.55063 -0.4126,0.7877 -1.0878,-0.56264 z m 2.2881,-4.4261 1.3878,-2.85071 0,0 0.2251,-0.52513 1.1628,0.52513 -0.2626,0.52513 -1.4254,2.85071 -1.0877,-0.52513 z m 2.1755,-4.50113 0.7127,-1.53788 0,0 0.8627,-1.83796 1.1253,0.48763 -0.8627,1.87546 -0.7127,1.53789 -1.1253,-0.52514 z m 2.063,-4.53863 0.075,-0.15003 0,0 1.3503,-3.11328 0,0.0375 0.075,-0.18755 1.1628,0.48763 -0.075,0.18754
  -1.3504,3.11328 -0.075,0.15004 -1.1628,-0.52514 z m 1.988,-4.57614 0.7502,-1.72543 -0.037,0 0.7127,-1.72543 1.1628,0.45012 -0.7127,1.72543 -0.7127,1.76294 -1.1628,-0.48763 z m 1.8755,-4.61365 0.075,-0.15003 0,0 1.1253,-3.00075 0,0 0.15,-0.33758 1.1628,0.4126 -0.1125,0.37509 -1.1628,3.00075 -0.075,0.15004 -1.1628,-0.45012 z m 1.7629,-4.65115 0.5252,-1.46287 0,0 0.7126,-2.06301 1.2003,0.4126 -0.7501,2.06301 -0.5252,1.46287 -1.1628,-0.4126 z m 1.6504,-4.72618 0.8628,-2.62566 0,0.0375 0.3,-0.93773 1.1628,0.33758 -0.2625,0.97525 -0.9003,2.62565 -1.1628,-0.4126 z m 1.5004,-4.72618 1.1253,-3.6009 1.1628,0.3751 -1.0878,3.56339 -1.2003,-0.33759 z m 1.4254,-4.80119 1.0127,-3.6009 1.2003,0.33758 -0.9752,3.6009 -1.2378,-0.33758 z m 1.3503,-4.8012 0.9002,-3.63841 1.2003,0.30008 -0.9002,3.6384 -1.2003,-0.30007 z m 1.2003,-4.83871 0.1125,-0.4126 0,0.0375 0.7502,-3.26331 1.2003,0.26256 -0.7502,3.26332 -0.075,0.4126 -1.2378,-0.30008 z m 1.1253,-4.87621 0.225,-1.01275 0,0 0.5627,-2.62566 1.2003,0.26
 257 -0.5251,2.62565 -0.2626,1.05026 -1.2003,-0.30007 z m 1.0503,-4.87622 0.3,-1.57539 0,0 0.4126,-2.10052 1.2378,0.26256 -0.4126,2.06302 -0.3375,1.6129 -1.2003,-0.26257 z m 0.9377,-4.87621 0.4126,-2.10052 0,0 0.2626,-1.61291 1.2378,0.22506 -0.2626,1.6129 -0.4126,2.10053 -1.2378,-0.22506 z m 0.9002,-4.91372 0.4876,-2.55064 0,0 0.1876,-1.16279 1.2378,0.22506 -0.2251,1.16279 -0.4501,2.55063 -1.2378,-0.22505 z m 0.8627,-4.91373 0.3751,-2.13803 1.2378,0.22505 -0.3751,2.10053 -1.2378,-0.18755 z m -2.9257,-1.42535 4.9512,-6.7892 2.4381,8.027 -7.3893,-1.2378 z"
-         id="path3123"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 1010.1642,268.942 -1.0503,3.6009 -1.2003,-0.3751 1.0503,-3.60089 1.2003,0.37509 z m -1.4254,4.8012 -1.0503,3.60089 -1.2003,-0.37509 1.0503,-3.6009 1.2003,0.3751 z m -1.4254,4.80119 -1.0502,3.56339 -1.2003,-0.33758 1.0502,-3.6009 1.2003,0.37509 z m -1.4253,4.76369 -0.4126,1.46286 -0.6377,2.13804 -1.2003,-0.33759 0.6377,-2.17554 0.4126,-1.42535 1.2003,0.33758 z m -1.3879,4.8012 -1.0877,3.60089 -1.1628,-0.33758 1.0502,-3.6009 1.2003,0.33759 z m -1.4253,4.80119 -1.0503,3.6009 -1.2003,-0.33759 1.0503,-3.60089 1.2003,0.33758 z m -1.3879,4.8012 -0.8627,2.8132 0,0 -0.225,0.78769 -1.20032,-0.33758 0.22505,-0.7877 0.86267,-2.8132 1.2003,0.33759 z m -1.4253,4.80119 -1.05027,3.6009 -1.2003,-0.33758 1.05026,-3.6009 1.20031,0.33758 z m -1.38786,4.8012 -0.30007,1.05026 0,0 -0.71268,2.55064 -1.2003,-0.33759 0.71268,-2.55063 0.30008,-1.05026 1.20029,0.33758 z m -1.35033,4.8012 -1.05026,3.6384 -1.2003,-0.37509 1.05026,-3.6009 1.2003,0.33759 z m -1.38785,4.8387 -1.01275,3.6009 -1.2003,-0
 .33759 1.01275,-3.60089 1.2003,0.33758 z m -1.31282,4.8012 -0.60015,2.10052 0,0 -0.41261,1.50037 -1.2003,-0.33758 0.41261,-1.46286 0.56264,-2.13804 1.23781,0.33759 z m -1.35034,4.80119 -0.97524,3.63841 -1.2003,-0.33758 0.97524,-3.6009 1.2003,0.30007 z m -1.31283,4.83871 -0.0375,0.15004 0,0 -0.93773,3.48837 -1.2003,-0.33759 0.93773,-3.48837 0.0375,-0.15004 1.2003,0.33759 z m -1.27532,4.8387 -0.78769,2.92573 0,-0.0375 -0.18755,0.71268 -1.2003,-0.30007 0.18755,-0.71268 0.7877,-2.92573 1.20029,0.33758 z m -1.27531,4.83871 -0.93774,3.6009 -1.2003,-0.30008 0.93774,-3.6384 1.2003,0.33758 z m -1.23781,4.83871 -0.18755,0.75018 0,-0.0375 -0.71268,2.92573 -1.2378,-0.30007 0.71267,-2.92573 0.18755,-0.75019 1.23781,0.33759 z m -1.2003,4.8387 -0.82521,3.33833 0,0 -0.075,0.30008 -1.2378,-0.30008 0.075,-0.30007 0.8252,-3.33833 1.23781,0.30007 z m -1.2003,4.83871 -0.86271,3.67591 -1.2003,-0.30007 0.86271,-3.63841 1.2003,0.26257 z m -1.16279,4.87621 -0.22506,0.93773 0,0 -0.60015,2.70068 -1.20029,-0.2
 6257 0.60015,-2.73818 0.22505,-0.93773 1.2003,0.30007 z m -1.08777,4.87622 -0.75019,3.26331 0,0 -0.075,0.37509 -1.2003,-0.26256 0.075,-0.3751 0.71268,-3.26331 1.23781,0.26257 z m -1.08777,4.87621 -0.75019,3.67592 -1.23781,-0.26257 0.7877,-3.67592 1.2003,0.26257 z m -1.01275,4.87622 -0.15004,0.63765 0,-0.0375 -0.60015,3.07577 -1.2003,-0.22506 0.60015,-3.07576 0.11253,-0.60015 1.23781,0.22506 z m -0.97525,4.91372 -0.52513,2.70067 0,-0.0375 -0.18754,1.01276 -1.23781,-0.22506 0.18754,-1.01275 0.52513,-2.66317 1.23781,0.22506 z m -0.93773,4.91372 -0.63766,3.67592 -1.23781,-0.22506 0.63766,-3.67591 1.23781,0.22505 z m -0.86272,4.91373 -0.60014,3.67591 -1.23781,-0.18754 0.60015,-3.71343 1.2378,0.22506 z m -0.78769,4.91372 -0.26257,1.46287 0,0 -0.30007,2.25056 -1.23781,-0.18755 0.30007,-2.25056 0.26257,-1.46287 1.23781,0.18755 z m -0.75019,4.95124 -0.45011,3.07576 0,0 -0.075,0.63766 -1.23781,-0.15004 0.075,-0.63766 0.45011,-3.11327 1.23781,0.18755 z m -0.67517,4.95123 -0.45011,3.71342 -1.23
 781,-0.15003 0.45011,-3.71343 1.23781,0.15004 z m -0.60015,4.95123 -0.4126,3.71343 -1.23781,-0.11253 0.41261,-3.75094 1.2378,0.15004 z m -0.52513,4.95124 -0.11253,1.01275 0,0 -0.26256,2.73818 -1.23781,-0.11253 0.26257,-2.77569 0.075,-0.97524 1.27531,0.11253 z m -0.48762,4.98874 -0.22505,2.25056 0,-0.0375 -0.11253,1.50037 -1.23781,-0.075 0.11253,-1.50037 0.22505,-2.25056 1.23781,0.11253 z m -0.4126,4.95123 -0.26257,3.37584 0,0 -0.0375,0.3751 -1.2378,-0.075 0.0375,-0.3751 0.22506,-3.37584 1.27532,0.075 z m -0.37509,4.98874 -0.22506,3.75094 -1.23781,-0.075 0.22506,-3.75093 1.23781,0.075 z m -0.30008,4.98875 -0.18755,3.75093 -1.2378,-0.075 0.18754,-3.75093 1.23781,0.075 z m -0.22506,4.98874 -0.0375,0.4126 0,0 -0.11253,3.33833 -1.23781,-0.0375 0.11253,-3.33834 0,-0.4126 1.27531,0.0375 z m -0.18754,4.98874 -0.075,1.31283 0,0 -0.0375,2.43811 -1.27532,-0.0375 0.075,-2.43811 0.0375,-1.31283 1.27532,0.0375 z m -0.15004,4.98875 -0.075,2.13803 0,0 0,1.6129 -1.23781,0 0,-1.65041 0.075,-2.13803 1
 .23781,0.0375 z m -0.11253,4.98874 -0.0375,3.75093 -1.23781,0 0.0375,-3.75093 1.23781,0 z m -0.0375,5.02625 -0.0375,3.6009 0,0 0,0.15004 -1.27531,0 0,-0.15004 0.0375,-3.63841 1.27531,0.0375 z m -0.0375,4.98874 0,3.75094 -1.27531,0 0,-3.75094 1.27531,0 z m 0,4.98875 0,3.75093 -1.2378,0 -0.0375,-3.75093 1.27531,0 z m 0,4.98874 0.0375,3.75093 -1.27531,0.0375 0,-3.75093 1.2378,-0.0375 z m 3.15079,3.75093 -3.71343,7.50187 -3.78844,-7.46436 7.50187,-0.0375 z"
-         id="path3125"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="1098.799"
-         y="347.50183"
-         id="text3127"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Deploy/Stop/</text>
-      <text
-         x="1098.799"
-         y="364.00595"
-         id="text3129"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Cancel Tasks</text>
-      <text
-         x="1077.9341"
-         y="393.53741"
-         id="text3131"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Trigger</text>
-      <text
-         x="1060.6798"
-         y="410.04153"
-         id="text3133"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Checkpoints</text>
-      <text
-         x="906.68597"
-         y="312.69434"
-         id="text3135"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task Status</text>
-      <text
-         x="905.30804"
-         y="341.08121"
-         id="text3137"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Heartbeats</text>
-      <text
-         x="912.66595"
-         y="368.71213"
-         id="text3139"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Statistics</text>
-      <text
-         x="1045.498"
-         y="439.573"
-         id="text3141"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">\u2026</text>
-      <text
-         x="936.66449"
-         y="397.45572"
-         id="text3143"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">\u2026</text>
-      <path
-         d="m 661.96491,29.932457 0,251.950263 204.27589,0 0,-251.950263 -204.27589,0 z"
-         id="path3145"
-         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 661.96491,29.932457 204.27589,0 0,251.950263 -204.27589,0 z"
-         id="path3147"
-         style="fill:none;stroke:#935f1c;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="671.09534"
-         y="51.877296"
-         id="text3149"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">TaskManager</text>
-      <path
-         d="m 737.47122,77.306759 0,88.297001 53.90093,0 0,-88.297001 -53.90093,0 z"
-         id="path3151"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 737.47122,77.306759 53.90093,0 0,88.297001 -53.90093,0 z"
-         id="path3153"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="747.90277"
-         y="96.706863"
-         id="text3155"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
-      <text
-         x="749.85327"
-         y="114.71135"
-         id="text3157"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
-      <path
-         d="m 801.87477,77.306759 0,88.297001 53.75089,0 0,-88.297001 -53.75089,0 z"
-         id="path3159"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 801.87477,77.306759 53.75089,0 0,88.297001 -53.75089,0 z"
-         id="path3161"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="812.26758"
-         y="96.706863"
-         id="text3163"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
-      <text
-         x="814.21808"
-         y="114.71135"
-         id="text3165"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
-      <path
-         d="m 745.4232,139.90986 c 0,-10.50262 8.51462,-19.01724 19.01724,-19.01724 10.46511,0 18.97973,8.51462 18.97973,19.01724 0,10.4651 -8.51462,18.97972 -18.97973,18.97972 -10.50262,0 -19.01724,-8.51462 -19.01724,-18.97972"
-         id="path3167"
-         style="fill:#be73f1;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 745.4232,139.90986 c 0,-10.50262 8.51462,-19.01724 19.01724,-19.01724 10.46511,0 18.97973,8.51462 18.97973,19.01724 0,10.4651 -8.51462,18.97972 -18.97973,18.97972 -10.50262,0 -19.01724,-8.51462 -19.01724,-18.97972"
-         id="path3169"
-         style="fill:none;stroke:#724591;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="751.36926"
-         y="144.36874"
-         id="text3171"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task</text>
-      <path
-         d="m 672.46753,77.306759 0,88.297001 53.90093,0 0,-88.297001 -53.90093,0 z"
-         id="path3173"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 672.46753,77.306759 53.90093,0 0,88.297001 -53.90093,0 z"
-         id="path3175"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="682.96558"
-         y="96.706863"
-         id="text3177"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
-      <text
-         x="684.91608"
-         y="114.71135"
-         id="text3179"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
-      <path
-         d="m 680.41951,139.90986 c 0,-10.50262 8.51462,-19.01724 18.97973,-19.01724 10.50261,0 19.01724,8.51462 19.01724,19.01724 0,10.4651 -8.51463,18.97972 -19.01724,18.97972 -10.46511,0 -18.97973,-8.51462 -18.97973,-18.97972"
-         id="path3181"
-         style="fill:#be73f1;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 680.41951,139.90986 c 0,-10.50262 8.51462,-19.01724 18.97973,-19.01724 10.50261,0 19.01724,8.51462 19.01724,19.01724 0,10.4651 -8.51463,18.97972 -19.01724,18.97972 -10.46511,0 -18.97973,-8.51462 -18.97973,-18.97972"
-         id="path3183"
-         style="fill:none;stroke:#724591;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="686.43207"
-         y="144.36874"
-         id="text3185"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task</text>
-      <path
-         d="m 670.10444,206.714 0,31.7329 188.48446,0 0,-31.7329 -188.48446,0 z"
-         id="path3187"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 670.10444,206.714 188.48446,0 0,31.7329 -188.48446,0 z"
-         id="path3189"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="705.91583"
-         y="227.79962"
-         id="text3191"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Network Manager</text>
-      <path
-         d="m 670.10444,243.28561 0,32.5206 188.48446,0 0,-32.5206 -188.48446,0 z"
-         id="path3193"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 670.10444,243.28561 188.48446,0 0,32.5206 -188.48446,0 z"
-         id="path3195"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="719.87256"
-         y="264.77148"
-         id="text3197"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Actor System</text>
-      <path
-         d="m 670.10444,170.44246 0,31.73291 188.48446,0 0,-31.73291 -188.48446,0 z"
-         id="path3199"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 670.10444,170.44246 188.48446,0 0,31.73291 -188.48446,0 z"
-         id="path3201"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="686.56104"
-         y="191.52837"
-         id="text3203"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Memory &amp; I/O Manager</text>
-      <text
-         x="725.4187"
-         y="17.8258"
-         id="text3205"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(Worker)</text>
-      <path
-         d="m 844.22282,223.29313 24.23103,-24.23104 0,12.11552 69.8424,0 0,-12.11552 24.23104,24.23104 -24.23104,24.19353 0,-12.07801 -69.8424,0 0,12.07801 z"
-         id="path3207"
-         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 844.22282,223.29313 24.23103,-24.23104 0,12.11552 69.8424,0 0,-12.11552 24.23104,24.23104 -24.23104,24.19353 0,-12.07801 -69.8424,0 0,12.07801 z"
-         id="path3209"
-         style="fill:none;stroke:#000000;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="857.99353"
-         y="228.51564"
-         id="text3211"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Data Streams</text>
-      <path
-         d="m 961.02692,479.89455 -1.72543,-3.33833 1.12528,-0.60015 1.72543,3.33833 -1.12528,0.60015 z m -2.28807,-4.46361 -1.72543,-3.33833 1.12528,-0.56264 1.72543,3.33833 -1.12528,0.56264 z m -2.28807,-4.4261 -1.72543,-3.33833 1.12528,-0.60015 1.72543,3.33833 -1.12528,0.60015 z m -2.28807,-4.46361 -1.72543,-3.33833 1.12528,-0.56264 1.72543,3.33833 -1.12528,0.56264 z m -2.28807,-4.46361 -1.12528,-2.17555 -0.60015,-1.12528 1.12528,-0.60015 0.60015,1.16279 1.08777,2.17554 -1.08777,0.56265 z m -2.28807,-4.42611 -1.72543,-3.33833 1.08777,-0.56264 1.72543,3.33833 -1.08777,0.56264 z m -2.32558,-4.4261 -1.65041,-3.2258 -0.075,-0.11253 1.12528,-0.60015 0.075,0.15004 1.65041,3.18829 -1.12528,0.60015 z m -2.28807,-4.46361 -1.72543,-3.30082 1.12528,-0.60015 1.72543,3.33833 -1.12528,0.56264 z m -2.28807,-4.4261 -1.72543,-3.33834 1.08777,-0.56264 1.72543,3.33833 -1.08777,0.56265 z m -2.32558,-4.42611 -1.72543,-3.33833 1.08777,-0.56264 1.76294,3.30082 -1.12528,0.60015 z m -2.32558,-4.4261 -1.7
 2543,-3.33833 1.08777,-0.56264 1.76294,3.30082 -1.12528,0.60015 z m -2.32558,-4.46361 -0.30008,-0.56264 -1.42535,-2.73818 1.08777,-0.60015 1.46286,2.73818 0.30008,0.60015 -1.12528,0.56264 z m -2.32558,-4.3886 -1.72543,-3.33833 1.08777,-0.56264 1.76294,3.30083 -1.12528,0.60014 z m -2.32558,-4.4261 -0.71268,-1.35033 0,0 -1.05026,-1.988 1.12528,-0.56264 1.05026,1.95049 0.71268,1.35033 -1.12528,0.60015 z m -2.32558,-4.4261 -1.76294,-3.30082 1.08777,-0.60015 1.76294,3.30082 -1.08777,0.60015 z m -2.36309,-4.4261 -1.05026,-1.95049 0,0 -0.71268,-1.35034 1.08777,-0.60015 0.75019,1.35034 1.01275,1.988 -1.08777,0.56264 z m -2.36309,-4.3886 -1.76294,-3.30082 1.08777,-0.60015 1.76294,3.30082 -1.08777,0.60015 z m -2.36309,-4.4261 -1.27531,-2.4006 0,0 -0.52513,-0.90022 1.12528,-0.60015 0.48762,0.93773 1.27532,2.36309 -1.08778,0.60015 z m -2.36308,-4.38859 -1.38785,-2.55064 -0.4126,-0.75018 1.08777,-0.60015 0.4126,0.75018 1.38785,2.55064 -1.08777,0.60015 z m -2.4006,-4.3886 -1.46287,-2.66316 -0.337
 58,-0.63766 1.08777,-0.60015 0.33758,0.63766 1.46287,2.66316 -1.08777,0.60015 z m -2.43811,-4.38859 -1.46286,-2.70067 0,0 -0.33759,-0.56264 1.08777,-0.60015 0.33759,0.56264 1.50037,2.70067 -1.12528,0.60015 z m -2.4006,-4.35108 -1.50037,-2.70068 -0.33759,-0.60015 1.08778,-0.60015 0.33758,0.56265 1.50037,2.70067 -1.08777,0.63766 z m -2.43811,-4.3886 -1.50037,-2.62565 0,0 -0.33758,-0.63766 1.08777,-0.60015 0.33758,0.60015 1.50038,2.66316 -1.08778,0.60015 z m -2.47561,-4.35108 -1.42536,-2.51313 0,0 -0.4126,-0.75018 1.08777,-0.60015 0.4126,0.71267 1.42536,2.55064 -1.08777,0.60015 z m -2.43811,-4.35108 -1.35034,-2.32558 0,0 -0.52513,-0.90023 1.05026,-0.63766 0.52514,0.90023 1.35033,2.36309 -1.05026,0.60015 z m -2.51313,-4.31358 -1.2378,-2.10052 0,0 -0.63766,-1.12528 1.05026,-0.63766 0.67517,1.12528 1.23781,2.10052 -1.08778,0.63766 z m -2.51312,-4.31357 -1.05026,-1.80045 0,0 -0.86272,-1.42536 1.08777,-0.63766 0.82521,1.42536 1.08777,1.80045 -1.08777,0.63766 z m -2.55064,-4.31358 -0.86271,-
 1.42535 0,0 -1.05026,-1.76294 1.05026,-0.67517 1.08777,1.80045 0.86271,1.42535 -1.08777,0.63766 z m -2.58814,-4.27606 -0.60015,-0.97525 0,0 -1.35034,-2.21305 1.08777,-0.67517 1.35034,2.25056 0.60015,0.97525 -1.08777,0.63766 z m -2.58815,-4.27607 -0.30007,-0.45011 0,0 -1.68792,-2.73818 1.08777,-0.63766 1.68792,2.70067 0.26256,0.48762 -1.05026,0.63766 z m -2.62565,-4.23855 -2.02551,-3.15079 1.05027,-0.67517 2.0255,3.15079 -1.05026,0.67517 z m -2.70067,-4.20105 -1.72543,-2.70067 0,0 -0.30008,-0.45012 1.05026,-0.67516 0.30008,0.45011 1.72543,2.70067 -1.05026,0.67517 z m -2.70068,-4.20105 -1.2378,-1.87547 0,0 -0.82521,-1.2378 1.05026,-0.71268 0.82521,1.27532 1.23781,1.87546 -1.05027,0.67517 z m -2.77569,-4.16354 -0.63766,-0.97524 0,0.0375 -1.46286,-2.13803 1.05026,-0.71268 1.46286,2.13803 0.63766,0.97525 -1.05026,0.67516 z m -2.8132,-4.12602 -2.13803,-3.03826 0,0 0,0 1.01275,-0.75019 0.0375,0.0375 2.13803,3.03826 -1.05026,0.71268 z m -2.85071,-4.08852 -1.38784,-1.91298 0,0 -0.82521,-1.08
 777 1.01275,-0.75019 0.82521,1.12528 1.38784,1.91298 -1.01275,0.71268 z m -2.96324,-4.0135 -0.52513,-0.75019 0,0 -1.68792,-2.25056 1.01275,-0.75019 1.68792,2.25056 0.52514,0.75019 -1.01276,0.75019 z m -2.96324,-4.0135 -1.72543,-2.21305 0.0375,0 -0.60015,-0.75019 0.97525,-0.7877 0.60015,0.7877 1.68792,2.21305 -0.97525,0.75019 z m -3.07576,-3.93848 -0.67517,-0.90023 0,0 -1.65041,-2.06301 1.01275,-0.75019 1.6129,2.06302 0.71268,0.86271 -1.01275,0.7877 z m -3.07577,-3.90097 -1.6129,-1.988 0,0 -0.7877,-0.90022 0.97525,-0.82521 0.75018,0.93773 1.61291,1.988 -0.93774,0.7877 z m -3.18829,-3.86347 -0.45011,-0.48762 0.0375,0 -1.988,-2.32558 0,0 -0.0375,-0.0375 0.93773,-0.8252 0.0375,0.0375 1.988,2.32558 0.4126,0.52513 -0.93773,0.78769 z m -3.26332,-3.78844 -1.08777,-1.27532 0,0 -1.38784,-1.53788 0.93773,-0.82521 1.38785,1.53789 1.08777,1.27531 -0.93774,0.82521 z m -3.30082,-3.75094 -1.68792,-1.83795 0,0 -0.8252,-0.90023 0.90022,-0.86271 0.86272,0.90022 1.68792,1.87547 -0.93774,0.8252 z m -3.3
 7584,-3.67591 -0.22505,-0.22506 0,0 -2.36309,-2.47561 0.90022,-0.86272 2.36309,2.47562 0.26257,0.26256 -0.93774,0.82521 z m -3.45086,-3.6009 -0.60015,-0.63766 0.0375,0 -2.06301,-2.0255 0.90022,-0.90023 2.02551,2.06302 0.60015,0.63766 -0.90023,0.86271 z m -3.48837,-3.56339 -0.86271,-0.86271 0,0 -1.80045,-1.76294 0.86271,-0.90022 1.80045,1.76294 0.90023,0.90022 -0.90023,0.86271 z m -3.56338,-3.48836 -1.05027,-1.01276 0,0 -1.65041,-1.57539 0.86272,-0.90022 1.68792,1.57539 1.01275,1.01275 -0.86271,0.90023 z m -3.6009,-3.45086 -1.16279,-1.08778 0.0375,0 -1.6129,-1.46286 0.86271,-0.93773 1.57539,1.50037 1.16279,1.08777 -0.86271,0.90023 z m -3.63841,-3.41335 -1.2003,-1.08778 0,0 -1.57539,-1.46286 0.86272,-0.90022 1.57539,1.42535 1.16279,1.08777 -0.82521,0.93774 z m -3.67591,-3.37585 -1.16279,-1.05026 0,0 -1.65042,-1.46286 0.86272,-0.93774 1.6129,1.46287 1.16279,1.05026 -0.8252,0.93773 z m -3.71343,-3.33833 -0.11253,-0.075 0.82521,-0.93773 0.11253,0.075 -0.82521,0.93773 z m -1.27532,3.07577
  -3.07576,-7.80194 8.10202,2.21305 -5.02626,5.58889 z"
-         id="path3213"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 783.12009,269.01702 2.36309,2.92573 -0.97524,0.75018 -2.36309,-2.92572 0.97524,-0.75019 z m 3.15079,3.90097 2.32558,2.88822 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.75019 z m 3.11328,3.86346 2.36308,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.15078,3.90098 2.32558,2.92572 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.78769 z m 3.11328,3.90097 2.36308,2.92573 -0.97524,0.78769 -2.36309,-2.92573 0.97525,-0.78769 z m 3.15078,3.90097 2.32558,2.92573 -0.97524,0.78769 -2.32558,-2.92573 0.97524,-0.78769 z m 3.11328,3.90097 2.36309,2.92573 -0.97525,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.15078,3.90097 2.32558,2.92573 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.7877 z m 3.11328,3.90097 2.36309,2.92573 -0.97525,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.11327,3.90098 2.36309,2.92572 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.78769 z m 3.15079,3.90097 2.36309,2.92573 -0.97525,0.78769 -2.36309,-2.92573 0.97525,-0.78769 z m 3.11327,3.90097 2.3630
 9,2.92573 -0.97524,0.78769 -2.36309,-2.92572 0.97524,-0.7877 z m 3.15079,3.90097 2.32558,2.92573 -0.97525,0.7877 -2.32557,-2.92573 0.97524,-0.7877 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.7877 z m 3.15079,3.90098 2.32558,2.92572 -0.97525,0.7877 -2.32557,-2.92573 0.97524,-0.78769 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.78769 -2.36309,-2.92573 0.97524,-0.78769 z m 3.15079,3.90097 2.32558,2.92573 -0.97524,0.78769 -2.32558,-2.92572 0.97524,-0.7877 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.7877 z m 3.15079,3.90097 2.32558,2.92573 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.7877 z m 3.11328,3.90097 2.36308,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.15078,3.90098 2.32558,2.92573 -0.97524,0.78769 -2.32558,-2.92573 0.97524,-0.78769 z m 3.11328,3.90097 2.36308,2.92573 -0.97524,0.78769 -2.36309,-2.92573 0.97525,-0.78769 z m 3.15078,3.90097 2.32558,2.92573 -0.97524,0.78769 -2.32558,-2.925
 72 0.97524,-0.7877 z m 3.11328,3.90097 2.36309,2.92573 -0.97525,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.15078,3.90097 2.32558,2.92573 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.7877 z m 3.11328,3.90098 2.36309,2.92572 -0.97525,0.7877 -2.36309,-2.92573 0.97525,-0.78769 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.78769 -2.36309,-2.92573 0.97524,-0.78769 z m 3.15079,3.90097 2.36309,2.92573 -0.97525,0.78769 -2.36309,-2.92572 0.97525,-0.7877 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.7877 z m 3.15079,3.90097 2.32558,2.92573 -0.97525,0.7877 -2.32558,-2.92573 0.97525,-0.7877 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.7877 z m 3.15079,3.90098 2.32558,2.92573 -0.97525,0.78769 -2.32557,-2.92573 0.97524,-0.78769 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.75018 -2.36309,-2.88822 0.97524,-0.78769 z m 3.15079,3.90097 2.32558,2.92573 -0.97524,0.75019 -2.32558,-2.92573 0.97524,-0.75019 z m 3.11327,3.90097 1.
 12528,1.35034 1.23781,1.57539 -0.97524,0.75019 -1.23781,-1.53789 -1.12528,-1.38784 0.97524,-0.75019 z m 3.15079,3.90097 0.63766,0.7877 1.68792,2.13803 -0.97524,0.7877 -1.68792,-2.13803 -0.67517,-0.7877 1.01275,-0.7877 z m 3.11328,3.90098 0.11252,0.15003 2.21305,2.77569 -0.97524,0.7877 -2.21305,-2.77569 -0.11253,-0.15004 0.97525,-0.78769 z m 3.11327,3.90097 2.02551,2.58814 0.30007,0.3751 -0.97524,0.75018 -0.30008,-0.37509 -2.0255,-2.55064 0.97524,-0.78769 z m 3.11328,3.93848 1.35033,1.68792 0.97525,1.23781 -0.97525,0.78769 -1.01275,-1.2378 -1.31283,-1.72543 0.97525,-0.75019 z m 3.11327,3.90097 0.60015,0.7877 1.72543,2.17554 -0.97524,0.75019 -1.72543,-2.13804 -0.63766,-0.78769 1.01275,-0.7877 z m 3.07577,3.93848 2.13803,2.66316 0.18755,0.26257 -0.97525,0.7877 -0.22505,-0.30008 -2.10052,-2.66316 0.97524,-0.75019 z m 3.11327,3.90097 1.23781,1.5754 1.08777,1.38784 -0.97524,0.75019 -1.08777,-1.38785 -1.23781,-1.53788 0.97524,-0.7877 z m 3.11328,3.93848 0.30007,0.3751 2.02551,2.55063 -0.97
 524,0.7877 -2.02551,-2.55064 -0.30007,-0.4126 0.97524,-0.75019 z m 3.11327,3.90098 1.42536,1.83795 0.90022,1.12528 -1.01275,0.7877 -0.86271,-1.12528 -1.46287,-1.83796 1.01275,-0.78769 z m 3.07577,3.93848 2.32558,2.96324 -0.97524,0.75018 -2.32558,-2.92573 0.97524,-0.78769 z m 3.11328,3.93848 2.32558,2.92573 -1.01276,0.78769 -2.28807,-2.96323 0.97525,-0.75019 z m 3.07576,3.93848 0.18755,0.22506 2.13803,2.70067 -0.97524,0.78769 -2.13803,-2.73818 -0.18755,-0.22505 0.97524,-0.75019 z m 3.11328,3.90097 0.90022,1.16279 1.42536,1.80045 -1.01275,0.7877 -1.38785,-1.80045 -0.90022,-1.16279 0.97524,-0.7877 z m 3.07576,3.93848 1.53789,1.95049 0.78769,1.01275 -0.97524,0.75019 -0.8252,-1.01275 -1.50038,-1.91298 0.97524,-0.7877 z m 3.07577,3.93848 2.06302,2.58815 0.26256,0.37509 -0.97524,0.75019 -0.26257,-0.33759 -2.06301,-2.58814 0.97524,-0.7877 z m 3.11328,3.93849 2.32558,2.96323 -1.01276,0.75019 -2.28807,-2.92573 0.97525,-0.78769 z m 3.07576,3.93848 2.32558,2.96323 -1.01275,0.75019 -2.28807,-2.9
 6324 0.97524,-0.75018 z m 3.07577,3.93848 2.32558,2.96324 -1.01275,0.75018 -2.28807,-2.92573 0.97524,-0.78769 z m 3.07577,3.93848 0.15003,0.15004 2.17554,2.8132 -0.97524,0.75018 -2.21305,-2.77569 -0.11253,-0.15004 0.97525,-0.78769 z m 4.76368,0.97524 1.65041,8.21455 -7.57688,-3.6009 5.92647,-4.61365 z"
-         id="path3215"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 586.8712,457.53898 3.71343,0.52514 0.18755,-1.23781 -3.71343,-0.52513 -0.18755,1.2378 z m 4.95124,0.71268 3.71342,0.52513 0.18755,-1.23781 -3.71343,-0.52513 -0.18754,1.23781 z m 4.95123,0.67517 3.71343,0.52513 0.18754,-1.23781 -3.71342,-0.52513 -0.18755,1.23781 z m 4.95123,0.71268 3.71343,0.52513 0.18755,-1.23781 -3.71343,-0.52513 -0.18755,1.23781 z m 4.95124,0.67517 3.71342,0.52513 0.18755,-1.23781 -3.71343,-0.52513 -0.18754,1.23781 z m 4.95123,0.71267 3.71343,0.52513 0.18754,-1.2378 -3.71342,-0.52513 -0.18755,1.2378 z m 4.95124,0.67517 1.20029,0.18755 2.51313,0.33758 0.18755,-1.23781 -2.51313,-0.33758 -1.2003,-0.18755 -0.18754,1.23781 z m 4.95123,0.71268 3.71342,0.52513 0.18755,-1.23781 -3.71342,-0.52513 -0.18755,1.23781 z m 4.95123,0.71268 3.71343,0.52513 0.18754,-1.23781 -3.71342,-0.52513 -0.18755,1.23781 z m 4.95124,0.71267 1.76293,0.22506 0,0 1.95049,0.30008 0.18755,-1.23781 -1.95049,-0.30008 -1.76294,-0.22505 -0.18754,1.2378 z m 4.95123,0.71268 3.71342,0.56264 0
 .18755,-1.23781 -3.71342,-0.56264 -0.18755,1.23781 z m 4.95123,0.71268 3.71343,0.56264 0.18754,-1.23781 -3.71342,-0.56264 -0.18755,1.23781 z m 4.95124,0.75019 2.28807,0.33758 0,0 1.42535,0.22506 0.18755,-1.23781 -1.42536,-0.22506 -2.28807,-0.33758 -0.18754,1.23781 z m 4.95123,0.75018 3.71342,0.56264 0.18755,-1.23781 -3.71342,-0.56264 -0.18755,1.23781 z m 4.95123,0.75019 3.71343,0.56264 0.18754,-1.23781 -3.71342,-0.56264 -0.18755,1.23781 z m 4.91373,0.75019 2.8132,0.4126 0,0 0.90022,0.15004 0.18755,-1.23781 -0.90023,-0.15004 -2.77569,-0.4126 -0.22505,1.23781 z m 4.95123,0.75018 3.71343,0.60015 0.18754,-1.23781 -3.71342,-0.60014 -0.18755,1.2378 z m 4.95123,0.7877 3.71343,0.60015 0.18755,-1.23781 -3.71343,-0.60015 -0.18755,1.23781 z m 4.95124,0.7877 3.2258,0.52513 0,0 0.45011,0.075 0.22506,-1.23781 -0.48762,-0.075 -3.22581,-0.52513 -0.18754,1.23781 z m 4.91372,0.78769 3.71343,0.63766 0.18754,-1.23781 -3.71342,-0.63766 -0.18755,1.23781 z m 4.95124,0.82521 3.67591,0.63766 0.22506,-1.2378
 1 -3.71343,-0.63766 -0.18754,1.23781 z m 4.91372,0.8252 3.63841,0.60015 0,0 0.075,0.0375 0.18755,-1.23781 -0.0375,0 -3.6384,-0.63765 -0.22506,1.2378 z m 4.91372,0.82521 3.71343,0.67517 0.22505,-1.23781 -3.71342,-0.63766 -0.22506,1.2003 z m 4.95124,0.90022 3.67591,0.63766 0.22506,-1.23781 -3.71343,-0.63765 -0.18754,1.2378 z m 4.91372,0.86272 3.67592,0.63766 0.22505,-1.23781 -3.67591,-0.63766 -0.22506,1.23781 z m 4.91373,0.86271 3.67591,0.67517 0.22506,-1.2003 -3.67592,-0.71267 -0.22505,1.2378 z m 4.91372,0.90023 3.67592,0.71268 0.26256,-1.23781 -3.71342,-0.67517 -0.22506,1.2003 z m 4.91372,0.93773 3.71343,0.67517 0.22505,-1.23781 -3.71342,-0.67517 -0.22506,1.23781 z m 4.91373,0.90023 3.67591,0.75018 0.26257,-1.23781 -3.67592,-0.71267 -0.26256,1.2003 z m 4.91372,0.97524 3.67592,0.71268 0.22505,-1.2003 -3.67591,-0.75019 -0.22506,1.23781 z m 4.91373,0.97524 3.67591,0.71268 0.22506,-1.23781 -3.67592,-0.71268 -0.22505,1.23781 z m 4.87621,0.97524 3.67592,0.75019 0.26256,-1.23781 -3.67591,-
 0.75018 -0.26257,1.2378 z m 4.91372,0.97525 3.67592,0.75018 0.26257,-1.20029 -3.67592,-0.75019 -0.26257,1.2003 z m 4.91373,1.01275 3.67591,0.75019 0.22506,-1.23781 -3.67592,-0.75019 -0.22505,1.23781 z m 4.87621,1.01275 3.67592,0.75019 0.26256,-1.2003 -3.67591,-0.7877 -0.26257,1.23781 z m 4.91373,1.01275 3.67591,0.7877 0.22506,-1.23781 -3.63841,-0.75018 -0.26256,1.20029 z m 4.87621,1.05027 3.67592,0.78769 0.26256,-1.23781 -3.67591,-0.78769 -0.26257,1.23781 z m 4.87622,1.01275 3.67591,0.8252 0.26257,-1.2378 -3.63841,-0.7877 -0.30007,1.2003 z m 4.91372,1.08777 3.67592,0.7877 0.26256,-1.23781 -3.67591,-0.7877 -0.26257,1.23781 z m 4.87622,1.05026 3.67591,0.7877 0.26257,-1.2003 -3.67592,-0.82521 -0.26256,1.23781 z m 4.87621,1.05026 0.075,0.0375 0,-0.0375 3.6009,0.82521 0.26256,-1.23781 -3.60089,-0.7877 -0.075,0 -0.26257,1.2003 z m 4.91373,1.08777 3.6384,0.82521 0.26257,-1.23781 -3.63841,-0.8252 -0.26256,1.2378 z m 4.87621,1.08777 3.63841,0.7877 0.30007,-1.2003 -3.67591,-0.8252 -0.26257,1.
 2378 z m 4.87621,1.08778 0.15004,0 3.48837,0.8252 0.30008,-1.23781 -3.52588,-0.78769 -0.15004,-0.0375 -0.26257,1.23781 z m 4.87622,1.08777 3.63841,0.8252 0.30007,-1.23781 -3.67592,-0.8252 -0.26256,1.23781 z m 4.87621,1.08777 3.67592,0.8252 0.26257,-1.23781 -3.67592,-0.78769 -0.26257,1.2003 z m 4.87622,1.08777 3.67591,0.8252 0.26257,-1.20029 -3.67592,-0.82521 -0.26256,1.2003 z m 4.87621,1.12528 3.67592,0.7877 0.26257,-1.2003 -3.63841,-0.82521 -0.30008,1.23781 z m 2.06302,3.63841 8.13953,-2.02551 -6.48912,-5.28882 -1.65041,7.31433 z"
-         id="path3217"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="620.69214"
-         y="485.10345"
-         id="text3219"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Submit job</text>
-      <text
-         x="604.18805"
-         y="501.60757"
-         id="text3221"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(send dataflow)</text>
-      <text
-         x="731.01959"
-         y="505.01871"
-         id="text3223"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Cancel /</text>
-      <text
-         x="722.16742"
-         y="521.52283"
-         id="text3225"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">update job</text>
-      <path
-         d="m 819.16657,486.83378 -3.03825,-2.25056 0.75018,-0.97524 3.00075,2.21305 -0.71268,1.01275 z m -4.051,-2.96324 -3.00075,-2.25056 0.71268,-1.01275 3.03825,2.25056 -0.75018,1.01275 z m -4.0135,-2.96323 -3.03826,-2.25056 0.75019,-1.01276 3.03825,2.25056 -0.75018,1.01276 z m -4.0135,-2.96324 -3.07577,-2.21305 0.75019,-1.01276 3.03825,2.21306 -0.71267,1.01275 z m -4.08852,-2.92573 -1.53789,-1.12528 0,0 -1.50037,-1.08777 0.75019,-1.01275 1.50037,1.08777 1.53788,1.12528 -0.75018,1.01275 z m -4.05101,-2.92573 -3.03826,-2.17554 0.71268,-1.01275 3.03826,2.17554 -0.71268,1.01275 z m -4.08852,-2.88822 -3.07577,-2.10052 0.71268,-1.05026 3.07577,2.13803 -0.71268,1.01275 z m -4.08852,-2.8132 -1.23781,-0.86272 0.0375,0.0375 -1.91297,-1.27531 0.71267,-1.05027 1.87547,1.27532 1.23781,0.82521 -0.71268,1.05026 z m -4.16354,-2.8132 -3.07576,-2.06302 0,0 -0.0375,0 0.67517,-1.05026 0.0375,0 3.11327,2.06302 -0.71268,1.05026 z m -4.16353,-2.73818 -3.15079,-2.02551 0.67517,-1.05026 3.15078,2.02551
  -0.67516,1.05026 z m -4.20105,-2.70068 -0.82521,-0.48762 0.0375,0 -2.40059,-1.46286 0.63765,-1.05026 2.4006,1.42535 0.82521,0.52513 -0.67517,1.05026 z m -4.27607,-2.58814 -2.70067,-1.65041 0,0 -0.48762,-0.26257 0.63766,-1.08777 0.48762,0.26257 2.73818,1.65041 -0.67517,1.08777 z m -4.27606,-2.55064 -3.26331,-1.87546 0.63766,-1.05027 3.26331,1.83796 -0.63766,1.08777 z m -4.35108,-2.47561 -0.33759,-0.18755 0.0375,0 -2.96324,-1.57539 0.56264,-1.12528 3.00075,1.6129 0.33758,0.18755 -0.63765,1.08777 z m -4.3886,-2.36309 -2.28807,-1.2003 0.0375,0 -1.05026,-0.52513 0.56264,-1.12528 1.05026,0.52513 2.28807,1.23781 -0.60015,1.08777 z m -4.4261,-2.28807 -1.05026,-0.52513 0,0 -2.28807,-1.12528 0.52513,-1.12528 2.32558,1.12528 1.05026,0.52513 -0.56264,1.12528 z m -4.46361,-2.17554 -3.07577,-1.42536 0,0 -0.33758,-0.11253 0.48762,-1.16279 0.33758,0.15004 3.11328,1.42536 -0.52513,1.12528 z m -4.53863,-2.06302 -1.83796,-0.78769 0,0 -1.6129,-0.63766 0.48762,-1.16279 1.6129,0.67517 1.83796,0.78769 -0
 .48762,1.12528 z m -4.61365,-1.91297 -0.56264,-0.22506 0,0 -2.92573,-1.16279 0.45011,-1.16279 2.96324,1.16279 0.56264,0.22506 -0.48762,1.16279 z m -4.65116,-1.80045 -2.66316,-0.97525 0.0375,0 -0.86271,-0.30007 0.37509,-1.16279 0.90023,0.30007 2.66316,0.97525 -0.45011,1.16279 z m -4.68867,-1.65041 -1.38785,-0.48763 0,0 -2.17554,-0.67516 0.3751,-1.2003 2.21305,0.67517 1.38784,0.48762 -0.4126,1.2003 z m -4.76369,-1.53789 -0.075,-0.0375 0,0 -3.48837,-1.01275 0.33758,-1.2003 3.52588,1.01275 0.075,0.0375 -0.3751,1.2003 z m -4.76368,-1.35033 -2.36309,-0.63766 0,0 -1.27532,-0.30008 0.30008,-1.2003 1.27531,0.30008 2.36309,0.63766 -0.30007,1.2003 z m -4.83871,-1.2003 -1.08777,-0.26257 0,0 -2.55063,-0.56264 0.26256,-1.2003 2.55064,0.52513 1.12528,0.26257 -0.30008,1.23781 z m -4.87621,-1.05026 -3.52588,-0.67517 0.0375,0 -0.18755,-0.0375 0.22506,-1.2003 0.15003,0 3.56339,0.67517 -0.26256,1.23781 z m -4.87622,-0.90023 -2.32558,-0.37509 0,0 -1.38784,-0.18755 0.18754,-1.23781 1.38785,0.18755 2.3255
 8,0.37509 -0.18755,1.23781 z m -4.95123,-0.75019 -1.12528,-0.15003 0,0 -2.58815,-0.30008 0.15004,-1.23781 2.62566,0.30008 1.12528,0.15004 -0.18755,1.2378 z m -4.95123,-0.60014 -3.71343,-0.3751 0.11253,-1.23781 3.75093,0.33759 -0.15003,1.27532 z m -4.95124,-0.48763 -2.66316,-0.22505 0,0 -1.05026,-0.075 0.075,-1.23781 1.08777,0.075 2.66316,0.22506 -0.11253,1.2378 z m -4.98874,-0.37509 -1.53788,-0.075 0,0 -2.17555,-0.11253 0.0375,-1.2378 2.21305,0.075 1.5754,0.11253 -0.11253,1.23781 z m -4.95123,-0.26257 -0.52514,0 0.0375,0 -3.26331,-0.11252 0.0375,-1.23781 3.26331,0.11253 0.48762,0 -0.0375,1.2378 z m -4.98875,-0.15003 -3.45086,-0.0375 0,0 -0.30007,0 0,-1.23781 0.33758,0 3.41335,0.0375 0,1.23781 z m -4.98874,-0.0375 -2.47562,0 0.0375,0 -1.31282,0 -0.0375,-1.23781 1.31282,-0.0375 2.47562,0 0,1.27532 z m -4.98874,0.0375 -3.75094,0.075 -0.0375,-1.27532 3.75094,-0.075 0.0375,1.27532 z m -5.02626,0.075 -0.52513,0 0,0 -3.18829,0.15003 -0.075,-1.2378 3.2258,-0.15004 0.56264,0 0,1.23781 z m -4
 .95123,0.18754 -3.75093,0.15004 0,0 0,0 -0.075,-1.23781 0,0 3.75093,-0.15004 0.075,1.23781 z m -4.98874,0.22506 -3.75094,0.26256 -0.075,-1.27531 3.75093,-0.22506 0.075,1.23781 z m -4.98874,0.33758 -2.06302,0.11253 0.0375,0 -1.72543,0.11253 -0.075,-1.23781 1.68792,-0.11253 2.06302,-0.15003 0.075,1.27531 z m -4.98875,0.33759 -3.75093,0.30007 -0.075,-1.27532 3.71342,-0.26256 0.11253,1.23781 z m -4.98874,0.37509 -0.4126,0.0375 0,0 -3.30083,0.30007 -0.11252,-1.2378 3.30082,-0.30008 0.4126,-0.0375 0.11253,1.23781 z m -4.98874,0.45011 -3.71343,0.3751 -0.11253,-1.27532 3.71343,-0.33759 0.11253,1.23781 z m -4.95124,0.48762 -3.75093,0.3751 -0.11253,-1.23781 3.71343,-0.37509 0.15003,1.2378 z m -4.98874,0.52514 -2.36309,0.26256 0,0 -1.35033,0.15004 -0.15004,-1.23781 1.38784,-0.18755 2.36309,-0.22505 0.11253,1.23781 z m -4.95123,0.52513 -3.71343,0.45011 -0.15003,-1.23781 3.71342,-0.45011 0.15004,1.23781 z m -4.98875,0.60015 -0.90022,0.11252 0,0 -2.8132,0.30008 -0.15004,-1.23781 2.8132,-0.33758 0
 .93774,-0.075 0.11252,1.23781 z m -4.95123,0.56264 -3.71342,0.45011 -0.15004,-1.23781 3.71342,-0.45011 0.15004,1.23781 z m -4.95123,0.63765 -3.71343,0.45012 -0.18754,-1.23781 3.75093,-0.48762 0.15004,1.27531 z m -4.95124,0.60015 -3.75093,0.48763 -0.15004,-1.23781 3.71343,-0.48762 0.18754,1.2378 z m -4.98874,0.63766 -0.56264,0.075 -0.15004,-1.23781 0.56264,-0.075 0.15004,1.23781 z m 1.08777,3.03826 -7.91447,-2.77569 6.97674,-4.68867 0.93773,7.46436 z"
-         id="path3227"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="637.29187"
-         y="402.23114"
-         id="text3229"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Status</text>
-      <text
-         x="632.04059"
-         y="418.73523"
-         id="text3231"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">updates</text>
-      <text
-         x="724.87628"
-         y="415.9035"
-         id="text3233"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Statistics &amp;</text>
-      <text
-         x="740.78027"
-         y="432.40762"
-         id="text3235"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">results</text>
-      <path
-         d="m 421.75507,306.2638 116.27897,0 0,47.24302 c -58.13949,0 -58.13949,18.00448 -116.27897,7.78319 z"
-         id="path3237"
-         style="fill:#e6526e;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 421.75507,306.2638 116.27897,0 0,47.24302 c -58.13949,0 -58.13949,18.00448 -116.27897,7.78319 z"
-         id="path3239"
-         style="fill:none;stroke:#8a3142;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="451.71744"
-         y="327.26007"
-         id="text3241"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Program</text>
-      <text
-         x="464.47064"
-         y="343.01398"
-         id="text3243"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">code</text>
-      <path
-         d="m 904.23777,517.74148 0,32.48309 95.49879,0 0,-32.48309 -95.49879,0 z"
-         id="path3245"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 904.23777,517.74148 95.49879,0 0,32.48309 -95.49879,0 z"
-         id="path3247"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="919.12915"
-         y="539.22919"
-         id="text3249"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Scheduler</text>
-      <path
-         d="m 904.23777,553.82547 0,32.37057 95.49879,0 0,-32.37057 -95.49879,0 z"
-         id="path3251"
-         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 904.23777,553.82547 95.49879,0 0,32.37057 -95.49879,0 z"
-         id="path3253"
-         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="915.22498"
-         y="567.3382"
-         id="text3255"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Checkpoint</text>
-      <text
-         x="913.12445"
-         y="583.0921"
-         id="text3257"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Coordinator</text>
-      <path
-         d="m 352.98169,391.89763 0,43.30454 107.83936,0 0,-43.30454 -107.83936,0 z"
-         id="path3259"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 352.98169,391.89763 107.83936,0 0,43.30454 -107.83936,0 z"
-         id="path3261"
-         style="fill:none;stroke:#898c92;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="367.77243"
-         y="410.74432"
-         id="text3263"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Optimizer /</text>
-      <text
-         x="359.37033"
-         y="427.24844"
-         id="text3265"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Graph Builder</text>
-      <path
-         d="m 400.61855,432.22018 -0.75018,1.36909 0.0187,0 -0.71268,1.36909 0,-0.0188 -0.28132,0.5814 -1.12528,-0.54389 0.28132,-0.58139 0.73144,-1.38785 0.75018,-1.38784 1.08777,0.60015 z m -2.2318,4.38859 -0.4126,1.01275 0.0188,-0.0375 -0.43136,1.31283 0.0188,-0.075 -0.28132,1.25656 -1.21906,-0.26256 0.28132,-1.29407 0.45012,-1.36909 0.43135,-1.01276 1.14404,0.46887 z m -1.16279,4.6324 0,0 0,-0.075 0.0375,0.60015 -0.0188,-0.0563 0.0938,0.5814 0,-0.0563 0.15003,0.56264 -0.0188,-0.0563 0.22506,0.56264 -0.0375,-0.075 0.30007,0.54388 -0.0375,-0.0563 0.37509,0.52513 -0.0375,-0.0563 0.0375,0.0563 -0.93773,0.82521 -0.075,-0.075 -0.39385,-0.5814 -0.33758,-0.6189 -0.26257,-0.6189 -0.16879,-0.63766 -0.11253,-0.65642 -0.0375,-0.67517 0,-0.0563 1.25656,0.0938 z m 1.89422,3.86347 0.0563,0.0375 -0.0375,-0.0375 0.6189,0.48762 -0.0563,-0.0375 0.71267,0.45011 -0.0375,-0.0188 0.80645,0.43135 -0.0563,-0.0188 0.90022,0.4126 -0.0375,-0.0188 0.16879,0.0563 -0.45011,1.18155 -0.1688,-0.075 -0.95649,-0.4
 3136 -0.8252,-0.45011 -0.75019,-0.48762 -0.65641,-0.50638 -0.075,-0.075 0.84396,-0.90022 z m 4.18229,2.17554 0.7877,0.26256 -0.0375,0 1.21905,0.33759 -0.0187,0 1.31283,0.31883 -0.0188,-0.0188 0.30008,0.075 -0.26257,1.21906 -0.30007,-0.075 -1.33159,-0.31883 -1.25656,-0.33759 -0.80645,-0.28132 0.4126,-1.18154 z m 4.74493,1.21905 1.50038,0.24381 -0.0188,0 1.70668,0.22506 -0.0188,0 0.50638,0.075 -0.15004,1.23781 -0.50637,-0.0563 -1.72543,-0.24381 -1.50038,-0.26256 0.2063,-1.21906 z m 4.91373,0.67517 0.0563,0.0188 0,0 1.93173,0.18754 -0.0188,0 1.74418,0.13128 -0.0938,1.23781 -1.74418,-0.13128 -1.95049,-0.18755 -0.075,0 0.15004,-1.25656 z m 4.95123,0.4126 1.21906,0.0938 -0.0188,0 2.26932,0.11252 -0.0188,0 0.28132,0 -0.0563,1.25657 -0.28132,-0.0188 -2.25056,-0.11252 -1.21906,-0.075 0.075,-1.25657 z m 4.98874,0.26257 0.82521,0.0375 0,0 2.45686,0.075 0,0 0.45011,0 -0.0188,1.25656 -0.45011,-0.0188 -2.47562,-0.075 -0.84396,-0.0375 0.0563,-1.23781 z m 4.96999,0.13128 0.86272,0.0188 0,0 2.64441,
 0.0375 0,0 0.24381,0 0,1.25656 -0.26257,0 -2.64441,-0.0375 -0.86271,-0.0188 0.0188,-1.25656 z m 5.0075,0.0563 1.23781,0.0188 -0.0188,0 2.51313,-0.0188 0.0188,1.25656 -2.53188,0 -1.2378,0 0.0188,-1.25656 z m 4.98874,0 1.93173,-0.0188 0,0 1.81921,-0.0187 0.0188,1.25656 -1.8192,0.0188 -1.95049,0 0,-1.2378 z m 4.98875,-0.0563 2.94448,-0.0563 0.80645,-0.0188 0.0375,1.25657 -0.82521,0.0188 -2.94448,0.0563 -0.0188,-1.25656 z m 5.00749,-0.0938 1.03151,-0.0188 0,0 2.70067,-0.075 0.0375,1.23781 -2.70067,0.075 -1.05026,0.0188 -0.0188,-1.23781 z m 4.98875,-0.13128 2.4381,-0.075 0,0 1.31283,-0.0375 0.0375,1.23781 -1.31283,0.0563 -2.4381,0.075 -0.0375,-1.25656 z m 4.98874,-0.1688 3.75093,-0.11252 0.0563,1.2378 -3.75094,0.13129 -0.0563,-1.25657 z m 5.0075,-0.16879 3.75093,-0.15004 0.0375,1.25657 -3.75093,0.15003 -0.0375,-1.25656 z m 4.98874,-0.18755 0.86271,-0.0375 0,0 2.88822,-0.13129 0.0563,1.25657 -2.90698,0.13128 -0.84396,0.0375 -0.0563,-1.25657 z m 5.0075,-0.2063 2.8132,-0.13128 -0.0188,0 0.9
 3774,-0.0375 0.0563,1.23781 -0.93773,0.0563 -2.8132,0.11253 -0.0375,-1.23781 z m 4.98874,-0.22505 3.75093,-0.18755 0.0563,1.25656 -3.75094,0.1688 -0.0563,-1.23781 z m 4.98874,-0.24381 3.75094,-0.1688 0.0563,1.23781 -3.75093,0.18755 -0.0563,-1.25656 z m 4.98875,-0.22506 2.08176,-0.11253 1.66917,-0.075 0.075,1.23781 -1.66917,0.0938 -2.08177,0.0938 -0.075,-1.23781 z m 5.00749,-0.26257 3.73218,-0.18754 0.075,1.25656 -3.75093,0.18755 -0.0563,-1.25657 z m 4.98875,-0.24381 0.35633,-0.0187 0.0563,1.23781 -0.35634,0.0188 -0.0563,-1.23781 z m -1.05027,-3.07576 7.67067,3.35708 -7.29557,4.12603 -0.3751,-7.48311 z"
-         id="path3267"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="363.56607"
-         y="467.15714"
-         id="text3269"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Dataflow graph</text>
-      <path
-         d="m 442.44147,350.206 -3.00074,2.2318 -0.75019,-0.994 3.0195,-2.2318 0.73143,0.994 z m -4.0135,2.98199 -3.00074,2.25056 -0.75019,-0.994 3.00075,-2.25056 0.75018,0.994 z m -3.97599,3.00075 -2.96324,2.30682 -0.76894,-0.97524 2.96324,-2.30683 0.76894,0.97525 z m -3.91972,3.09452 -2.88822,2.38184 -0.7877,-0.95649 2.88822,-2.4006 0.7877,0.97525 z m -3.8072,3.20705 -0.80645,0.69392 0.0188,0 -1.63165,1.48162 0.0188,-0.0188 -0.35634,0.33759 -0.86271,-0.90023 0.35633,-0.33758 1.63166,-1.50038 0.80645,-0.71267 0.82521,0.95649 z m -3.65716,3.35708 -0.30008,0.30008 0,0 -1.48162,1.51912 0.0188,-0.0188 -0.80645,0.88147 -0.91898,-0.84396 0.80645,-0.88147 1.50037,-1.53788 0.31883,-0.31883 0.86272,0.90022 z m -3.3946,3.6009 -1.06902,1.25656 0.0188,-0.0188 -1.2003,1.57539 0.0188,-0.0375 -0.075,0.0938 -1.01276,-0.71267 0.075,-0.0938 1.21906,-1.59414 1.08777,-1.27532 0.93773,0.80645 z m -3.0195,3.90097 -0.31883,0.46887 0.0188,-0.0188 -1.01275,1.6129 0.0188,-0.0187 -0.61891,1.10653 -1.08777,-0
 .60015 0.61891,-1.12528 1.0315,-1.65042 0.33759,-0.48762 1.01275,0.71268 z m -2.45686,4.23856 -0.52513,1.05026 0.0188,-0.0188 -0.71268,1.66917 0.0188,-0.0375 -0.28132,0.71268 -1.16279,-0.43136 0.26256,-0.73143 0.73144,-1.68792 0.52513,-1.06902 1.12528,0.54389 z m -1.89423,4.53863 -0.48762,1.50037 0.0188,-0.0188 -0.48762,1.70667 0,-0.0187 -0.0938,0.37509 -1.21905,-0.30007 0.0938,-0.39385 0.50638,-1.72543 0.48762,-1.51913 1.18154,0.39385 z m -1.35033,4.76368 -0.0375,0.13129 0.0188,-0.0375 -0.69392,3.48837 0,-0.0188 0,0.0375 -1.23781,-0.16879 0,-0.075 0.69393,-3.50712 0.0375,-0.15004 1.21906,0.30007 z m -0.90023,4.83871 -0.43135,3.11328 -1.23781,-0.1688 0.43136,-3.11327 1.2378,0.16879 z m 2.83196,2.21305 -4.57614,7.033 -2.88822,-7.87696 7.46436,0.84396 z"
-         id="path3271"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="347.75757"
-         y="363.58319"
-         id="text3273"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Program</text>
-      <text
-         x="346.85733"
-         y="380.08731"
-         id="text3275"
-         xml:space="preserve"
-         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Dataflow</text>
-    </g>
-  </g>
-</svg>


[73/89] [abbrv] flink git commit: [FLINK-4368] [distributed runtime] Eagerly initialize the RPC endpoint members

Posted by se...@apache.org.
[FLINK-4368] [distributed runtime] Eagerly initialize the RPC endpoint members

This closes #2351


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/ddeee3b5
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/ddeee3b5
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/ddeee3b5

Branch: refs/heads/flip-6
Commit: ddeee3b5490d1e2156b92aeed9c62a7c01db82b6
Parents: b273afa
Author: Stephan Ewen <se...@apache.org>
Authored: Wed Aug 10 18:27:21 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:02 2016 +0200

----------------------------------------------------------------------
 .../flink/runtime/rpc/MainThreadExecutor.java   |   9 +-
 .../apache/flink/runtime/rpc/RpcEndpoint.java   | 156 +++++++++++--------
 .../runtime/rpc/akka/AkkaRpcServiceTest.java    |   4 +-
 3 files changed, 99 insertions(+), 70 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/ddeee3b5/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
index e06711e..14b2997 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
@@ -26,22 +26,23 @@ import java.util.concurrent.TimeoutException;
 
 /**
  * Interface to execute {@link Runnable} and {@link Callable} in the main thread of the underlying
- * rpc server.
+ * RPC endpoint.
  *
- * This interface is intended to be implemented by the self gateway in a {@link RpcEndpoint}
+ * <p>This interface is intended to be implemented by the self gateway in a {@link RpcEndpoint}
  * implementation which allows to dispatch local procedures to the main thread of the underlying
  * rpc server.
  */
 public interface MainThreadExecutor {
+
 	/**
-	 * Execute the runnable in the main thread of the underlying rpc server.
+	 * Execute the runnable in the main thread of the underlying RPC endpoint.
 	 *
 	 * @param runnable Runnable to be executed
 	 */
 	void runAsync(Runnable runnable);
 
 	/**
-	 * Execute the callable in the main thread of the underlying rpc server and return a future for
+	 * Execute the callable in the main thread of the underlying RPC endpoint and return a future for
 	 * the callable result. If the future is not completed within the given timeout, the returned
 	 * future will throw a {@link TimeoutException}.
 	 *

http://git-wip-us.apache.org/repos/asf/flink/blob/ddeee3b5/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
index 3d8757f..0d928a8 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
@@ -19,85 +19,116 @@
 package org.apache.flink.runtime.rpc;
 
 import akka.util.Timeout;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
 import scala.concurrent.ExecutionContext;
 import scala.concurrent.Future;
 
 import java.util.concurrent.Callable;
 
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
- * Base class for rpc endpoints. Distributed components which offer remote procedure calls have to
- * extend the rpc endpoint base class.
+ * Base class for RPC endpoints. Distributed components which offer remote procedure calls have to
+ * extend the RPC endpoint base class. An RPC endpoint is backed by an {@link RpcService}. 
+ * 
+ * <h1>Endpoint and Gateway</h1>
+ * 
+ * To be done...
+ * 
+ * <h1>Single Threaded Endpoint Execution</h1>
+ * 
+ * <p>All RPC calls on the same endpoint are called by the same thread
+ * (referred to as the endpoint's <i>main thread</i>).
+ * Thus, by executing all state changing operations within the main
+ * thread, we don't have to reason about concurrent accesses, in the same way as in the Actor
+ * Model of Erlang or Akka.
  *
- * The main idea is that a rpc endpoint is backed by a rpc server which has a single thread
- * processing the rpc calls. Thus, by executing all state changing operations within the main
- * thread, we don't have to reason about concurrent accesses. The rpc provides provides
- * {@link #runAsync(Runnable)}, {@link #callAsync(Callable, Timeout)} and the
- * {@link #getMainThreadExecutionContext()} to execute code in the rpc server's main thread.
+ * <p>The RPC endpoint provides {@link #runAsync(Runnable)}, {@link #callAsync(Callable, Timeout)}
+ * and the {@link #getMainThreadExecutionContext()} to execute code in the RPC endpoint's main thread.
  *
- * @param <C> Rpc gateway counterpart for the implementing rpc endpoint
+ * @param <C> The RPC gateway counterpart for the implementing RPC endpoint
  */
 public abstract class RpcEndpoint<C extends RpcGateway> {
 
 	protected final Logger log = LoggerFactory.getLogger(getClass());
 
-	/** Rpc service to be used to start the rpc server and to obtain rpc gateways */
+	// ------------------------------------------------------------------------
+
+	/** RPC service to be used to start the RPC server and to obtain RPC gateways */
 	private final RpcService rpcService;
 
 	/** Self gateway which can be used to schedule asynchronous calls on yourself */
-	private C self;
+	private final C self;
+
+	/** The fully qualified address of this RPC endpoint */
+	private final String selfAddress;
+
+	/** The main thread execution context to be used to execute future callbacks in the main thread
+	 * of the executing RPC endpoint. */
+	private final MainThreadExecutionContext mainThreadExecutionContext;
+
 
 	/**
-	 * The main thread execution context to be used to execute future callbacks in the main thread
-	 * of the executing rpc server.
-	 *
-	 * IMPORTANT: The main thread context is only available after the rpc server has been started.
+	 * Initializes the RPC endpoint.
+	 * 
+	 * @param rpcService The RPC service that dispatches calls to this RPC endpoint.
 	 */
-	private MainThreadExecutionContext mainThreadExecutionContext;
-
 	public RpcEndpoint(RpcService rpcService) {
-		this.rpcService = rpcService;
+		this.rpcService = checkNotNull(rpcService, "rpcService");
+		this.self = rpcService.startServer(this);
+		this.selfAddress = rpcService.getAddress(self);
+		this.mainThreadExecutionContext = new MainThreadExecutionContext((MainThreadExecutor) self);
 	}
 
+	// ------------------------------------------------------------------------
+	//  Shutdown
+	// ------------------------------------------------------------------------
+
 	/**
-	 * Get self-gateway which should be used to run asynchronous rpc calls on this endpoint.
-	 *
-	 * IMPORTANT: Always issue local method calls via the self-gateway if the current thread
-	 * is not the main thread of the underlying rpc server, e.g. from within a future callback.
-	 *
-	 * @return Self gateway
+	 * Shuts down the underlying RPC endpoint via the RPC service.
+	 * After this method has been called, the RPC endpoint is no longer reachable, neither remotely
+	 * nor via its {@link #getSelf() self gateway}. It will also no longer accept executions in the
+	 * main thread (via {@link #callAsync(Callable, Timeout)} and {@link #runAsync(Runnable)}).
+	 * 
+	 * <p>This method can be overridden to add RPC endpoint specific shut down code.
+	 * The overridden method should always call the parent shut down method.
 	 */
-	public C getSelf() {
-		return self;
+	public void shutDown() {
+		rpcService.stopServer(self);
 	}
 
+	// ------------------------------------------------------------------------
+	//  Basic RPC endpoint properties
+	// ------------------------------------------------------------------------
+
 	/**
-	 * Execute the runnable in the main thread of the underlying rpc server.
+	 * Get self-gateway which should be used to run asynchronous RPC calls on this endpoint.
+	 *
+	 * <p><b>IMPORTANT</b>: Always issue local method calls via the self-gateway if the current thread
+	 * is not the main thread of the underlying RPC endpoint, e.g. from within a future callback.
 	 *
-	 * @param runnable Runnable to be executed in the main thread of the underlying rpc server
+	 * @return The self gateway
 	 */
-	public void runAsync(Runnable runnable) {
-		((MainThreadExecutor) self).runAsync(runnable);
+	public C getSelf() {
+		return self;
 	}
 
 	/**
-	 * Execute the callable in the main thread of the underlying rpc server returning a future for
-	 * the result of the callable. If the callable is not completed within the given timeout, then
-	 * the future will be failed with a {@link java.util.concurrent.TimeoutException}.
+	 * Gets the address of the underlying RPC endpoint. The address should be fully qualified so that
+	 * a remote system can connect to this RPC endpoint via this address.
 	 *
-	 * @param callable Callable to be executed in the main thread of the underlying rpc server
-	 * @param timeout Timeout for the callable to be completed
-	 * @param <V> Return type of the callable
-	 * @return Future for the result of the callable.
+	 * @return Fully qualified address of the underlying RPC endpoint
 	 */
-	public <V> Future<V> callAsync(Callable<V> callable, Timeout timeout) {
-		return ((MainThreadExecutor) self).callAsync(callable, timeout);
+	public String getAddress() {
+		return selfAddress;
 	}
 
 	/**
 	 * Gets the main thread execution context. The main thread execution context can be used to
-	 * execute tasks in the main thread of the underlying rpc server.
+	 * execute tasks in the main thread of the underlying RPC endpoint.
 	 *
 	 * @return Main thread execution context
 	 */
@@ -106,52 +137,51 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	}
 
 	/**
-	 * Gets the used rpc service.
+	 * Gets the endpoint's RPC service.
 	 *
-	 * @return Rpc service
+	 * @return The endpoint's RPC service
 	 */
 	public RpcService getRpcService() {
 		return rpcService;
 	}
 
-	/**
-	 * Starts the underlying rpc server via the rpc service and creates the main thread execution
-	 * context. This makes the rpc endpoint effectively reachable from the outside.
-	 *
-	 * Can be overriden to add rpc endpoint specific start up code. Should always call the parent
-	 * start method.
-	 */
-	public void start() {
-		self = rpcService.startServer(this);
-		mainThreadExecutionContext = new MainThreadExecutionContext((MainThreadExecutor) self);
-	}
-
+	// ------------------------------------------------------------------------
+	//  Asynchronous executions
+	// ------------------------------------------------------------------------
 
 	/**
-	 * Shuts down the underlying rpc server via the rpc service.
+	 * Execute the runnable in the main thread of the underlying RPC endpoint.
 	 *
-	 * Can be overriden to add rpc endpoint specific shut down code. Should always call the parent
-	 * shut down method.
+	 * @param runnable Runnable to be executed in the main thread of the underlying RPC endpoint
 	 */
-	public void shutDown() {
-		rpcService.stopServer(self);
+	public void runAsync(Runnable runnable) {
+		((MainThreadExecutor) self).runAsync(runnable);
 	}
 
 	/**
-	 * Gets the address of the underlying rpc server. The address should be fully qualified so that
-	 * a remote system can connect to this rpc server via this address.
+	 * Execute the callable in the main thread of the underlying RPC endpoint, returning a future for
+	 * the result of the callable. If the callable is not completed within the given timeout, then
+	 * the future will be failed with a {@link java.util.concurrent.TimeoutException}.
 	 *
-	 * @return Fully qualified address of the underlying rpc server
+	 * @param callable Callable to be executed in the main thread of the underlying RPC endpoint
+	 * @param timeout Timeout for the callable to be completed
+	 * @param <V> Return type of the callable
+	 * @return Future for the result of the callable.
 	 */
-	public String getAddress() {
-		return rpcService.getAddress(self);
+	public <V> Future<V> callAsync(Callable<V> callable, Timeout timeout) {
+		return ((MainThreadExecutor) self).callAsync(callable, timeout);
 	}
 
+	// ------------------------------------------------------------------------
+	//  Utilities
+	// ------------------------------------------------------------------------
+	
 	/**
 	 * Execution context which executes runnables in the main thread context. A reported failure
 	 * will cause the underlying rpc server to shut down.
 	 */
 	private class MainThreadExecutionContext implements ExecutionContext {
+
 		private final MainThreadExecutor gateway;
 
 		MainThreadExecutionContext(MainThreadExecutor gateway) {

http://git-wip-us.apache.org/repos/asf/flink/blob/ddeee3b5/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
index c5bac94..642a380 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
@@ -54,15 +54,13 @@ public class AkkaRpcServiceTest extends TestLogger {
 		ResourceManager resourceManager = new ResourceManager(akkaRpcService, executorService);
 		JobMaster jobMaster = new JobMaster(akkaRpcService2, executorService);
 
-		resourceManager.start();
-
 		ResourceManagerGateway rm = resourceManager.getSelf();
 
 		assertTrue(rm instanceof AkkaGateway);
 
 		AkkaGateway akkaClient = (AkkaGateway) rm;
 
-		jobMaster.start();
+		
 		jobMaster.registerAtResourceManager(AkkaUtils.getAkkaURL(actorSystem, akkaClient.getActorRef()));
 
 		// wait for successful registration

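For illustration, a minimal sketch of how a subclass might use the members that this commit
initializes eagerly in the constructor. CounterEndpoint, CounterGateway and the counting logic are
hypothetical; only runAsync, callAsync, getSelf and the constructor contract come from the diff above.

    import java.util.concurrent.Callable;
    import akka.util.Timeout;
    import scala.concurrent.Future;

    public class CounterEndpoint extends RpcEndpoint<CounterGateway> {

        private long count; // only mutated from the endpoint's main thread

        public CounterEndpoint(RpcService rpcService) {
            super(rpcService); // self gateway and address are available right away
        }

        public void incrementFromAnyThread() {
            // hop onto the endpoint's main thread instead of synchronizing
            runAsync(new Runnable() {
                @Override
                public void run() {
                    count++;
                }
            });
        }

        public Future<Long> getCount(Timeout timeout) {
            // executed in the main thread; the future completes with the result
            return callAsync(new Callable<Long>() {
                @Override
                public Long call() {
                    return count;
                }
            }, timeout);
        }
    }
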

[10/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/windows.md
----------------------------------------------------------------------
diff --git a/docs/dev/windows.md b/docs/dev/windows.md
new file mode 100644
index 0000000..084a2ee
--- /dev/null
+++ b/docs/dev/windows.md
@@ -0,0 +1,677 @@
+---
+title: "Windows"
+nav-parent_id: dev
+nav-id: windows
+nav-pos: 3
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Flink uses a concept called *windows* to divide a (potentially) infinite `DataStream` into finite
+slices based on the timestamps of elements or other criteria. This division is required when working
+with infinite streams of data and performing transformations that aggregate elements.
+
+<span class="label label-info">Info</span> We will mostly talk about *keyed windowing* here, i.e.
+windows that are applied on a `KeyedStream`. Keyed windows have the advantage that elements are
+subdivided based on both window and key before being given to
+a user function. The work can thus be distributed across the cluster
+because the elements for different keys can be processed independently. If you absolutely have to,
+you can check out [non-keyed windowing](#non-keyed-windowing) where we describe how non-keyed
+windows work.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Basics
+
+For a windowed transformation you must at least specify a *key*
+(see [specifying keys]({{ site.baseurl }}/dev/api_concepts.html#specifying-keys)),
+a *window assigner* and a *window function*. The *key* divides the infinite, non-keyed stream
+into logical keyed streams while the *window assigner* assigns elements to finite per-key windows.
+Finally, the *window function* is used to process the elements of each window.
+
+The basic structure of a windowed transformation is thus as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<T> input = ...;
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .<windowed transformation>(<window function>);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[T] = ...
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .<windowed transformation>(<window function>)
+{% endhighlight %}
+</div>
+</div>
+
+We will cover [window assigners](#window-assigners) in a separate section below.
+
+The windowed transformation can be one of `reduce()`, `fold()` or `apply()`, which take a
+`ReduceFunction`, `FoldFunction` or `WindowFunction`, respectively. We describe each of these ways
+of specifying a windowed transformation in detail below: [window functions](#window-functions).
+
+For more advanced use cases you can also specify a `Trigger` that determines when exactly a window
+is considered *ready for processing*. Triggers are covered in more detail in
+[triggers](#triggers).
+
+## Window Assigners
+
+The window assigner specifies how elements of the stream are divided into finite slices. Flink comes
+with pre-implemented window assigners for the most typical use cases, namely *tumbling windows*,
+*sliding windows*, *session windows* and *global windows*, but you can implement your own by
+extending the `WindowAssigner` class. All the built-in window assigners, except for the global
+windows one, assign elements to windows based on time, which can either be processing time or event
+time. Please take a look at our section on [event time]({{ site.baseurl }}/dev/event_time.html) for more
+information about how Flink deals with time.
+
+Let's first look at how each of these window assigners works before looking at how they can be used
+in a Flink program. We will be using abstract figures to visualize the workings of each assigner:
+in the following, the purple circles are elements of the stream, partitioned
+by some key (in this case *user 1*, *user 2* and *user 3*), and the x-axis shows the progress
+of time.
+
+### Global Windows
+
+Global windows are a way of specifying that we don't want to subdivide our elements into windows.
+Each element is assigned to one single per-key *global window*.
+This windowing scheme is only useful if you also specify a custom [trigger](#triggers). Otherwise,
+no computation is ever going to be performed, as the global window does not have a natural end at
+which we could process the aggregated elements.
+
+<img src="{{ site.baseurl }}/fig/non-windowed.svg" class="center" style="width: 80%;" />
+
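+For illustration, here is a sketch of a global window that only emits results because of its
+trigger. The pairing with `CountTrigger` (described in [triggers](#triggers)) is just one possible
+choice:
+
+{% highlight java %}
+// without a custom trigger, a global window would never fire;
+// here, the window function is invoked for every 100 elements per key
+input
+    .keyBy(<key selector>)
+    .window(GlobalWindows.create())
+    .trigger(CountTrigger.of(100))
+    .<windowed transformation>(<window function>);
+{% endhighlight %}
+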
+### Tumbling Windows
+
+A *tumbling windows* assigner assigns elements to fixed-length, non-overlapping windows of a
+specified *window size*. For example, if you specify a window size of 5 minutes, the window
+function will get 5 minutes' worth of elements in each invocation.
+
+<img src="{{ site.baseurl }}/fig/tumbling-windows.svg" class="center" style="width: 80%;" />
+
+### Sliding Windows
+
+The *sliding windows* assigner assigns elements to windows of fixed length equal to the *window
+size*, as the tumbling windows assigner does, but in this case windows can overlap. How frequently
+a new window starts is defined by the user-specified parameter *window slide*: whenever the slide
+is smaller than the window size, windows overlap and an element can be assigned to multiple windows.
+
+For example, you could have windows of size 10 minutes that slide by 5 minutes. With this you get 10
+minutes' worth of elements in each invocation of the window function, and it will be invoked for every
+5 minutes of data.
+
+<img src="{{ site.baseurl }}/fig/sliding-windows.svg" class="center" style="width: 80%;" />
+
+### Session Windows
+
+The *session windows* assigner is ideal for cases where the window boundaries need to adjust to the
+incoming data. Both the *tumbling windows* and *sliding windows* assigner assign elements to windows
+that start at fixed time points and have a fixed *window size*. With session windows it is possible
+to have windows that start at individual points in time for each key and that end once there has
+been a certain period of inactivity. The configuration parameter is the *session gap* that specifies
+how long to wait for new data before considering a session as closed.
+
+<img src="{{ site.baseurl }}/fig/session-windows.svg" class="center" style="width: 80%;" />
+
+### Specifying a Window Assigner
+
+The built-in window assigners (except `GlobalWindows`) come in two versions: one for processing-time
+windowing and one for event-time windowing. The processing-time assigners assign elements to
+windows based on the current clock of the worker machines while the event-time assigners assign
+windows based on the timestamps of elements. Please have a look at
+[event time]({{ site.baseurl }}/dev/event_time.html) to learn about the difference between processing time
+and event time and about how timestamps can be assigned to elements.
+
+The following code snippets show how each of the window assigners can be used in a program:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<T> input = ...;
+
+// tumbling event-time windows
+input
+    .keyBy(<key selector>)
+    .window(TumblingEventTimeWindows.of(Time.seconds(5)))
+    .<windowed transformation>(<window function>);
+
+// sliding event-time windows
+input
+    .keyBy(<key selector>)
+    .window(SlidingEventTimeWindows.of(Time.seconds(10), Time.seconds(5)))
+    .<windowed transformation>(<window function>);
+
+// event-time session windows
+input
+    .keyBy(<key selector>)
+    .window(EventTimeSessionWindows.withGap(Time.minutes(10)))
+    .<windowed transformation>(<window function>);
+
+// tumbling processing-time windows
+input
+    .keyBy(<key selector>)
+    .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
+    .<windowed transformation>(<window function>);
+
+// sliding processing-time windows
+input
+    .keyBy(<key selector>)
+    .window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)))
+    .<windowed transformation>(<window function>);
+
+// processing-time session windows
+input
+    .keyBy(<key selector>)
+    .window(ProcessingTimeSessionWindows.withGap(Time.minutes(10)))
+    .<windowed transformation>(<window function>);
+
+// global windows
+input
+    .keyBy(<key selector>)
+    .window(GlobalWindows.create())
+    .<windowed transformation>(<window function>);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[T] = ...
+
+// tumbling event-time windows
+input
+    .keyBy(<key selector>)
+    .window(TumblingEventTimeWindows.of(Time.seconds(5)))
+    .<windowed transformation>(<window function>)
+
+// sliding event-time windows
+input
+    .keyBy(<key selector>)
+    .window(SlidingEventTimeWindows.of(Time.seconds(10), Time.seconds(5)))
+    .<windowed transformation>(<window function>)
+
+// event-time session windows
+input
+    .keyBy(<key selector>)
+    .window(EventTimeSessionWindows.withGap(Time.minutes(10)))
+    .<windowed transformation>(<window function>)
+
+// tumbling processing-time windows
+input
+    .keyBy(<key selector>)
+    .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
+    .<windowed transformation>(<window function>)
+
+// sliding processing-time windows
+input
+    .keyBy(<key selector>)
+    .window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)))
+    .<windowed transformation>(<window function>)
+
+// processing-time session windows
+input
+    .keyBy(<key selector>)
+    .window(ProcessingTimeSessionWindows.withGap(Time.minutes(10)))
+    .<windowed transformation>(<window function>)
+
+// global windows
+input
+    .keyBy(<key selector>)
+    .window(GlobalWindows.create())
+{% endhighlight %}
+</div>
+</div>
+
+Note how we can specify a time interval by using one of `Time.milliseconds(x)`, `Time.seconds(x)`,
+`Time.minutes(x)`, and so on.
+
+The time-based window assigners also take an optional `offset` parameter that can be used to
+change the alignment of windows. For example, without offsets, hourly windows are aligned
+with the epoch; that is, you will get windows such as `1:00 - 1:59`, `2:00 - 2:59` and so on. If you
+want to change that you can give an offset. With an offset of 15 minutes you would, for example,
+get `1:15 - 2:14`, `2:15 - 3:14` etc. Another important use case for offsets is when you
+want to have daily windows and live in a timezone other than UTC-0. For example, in China
+you would have to specify an offset of `Time.hours(-8)`.
+
+This example shows how an offset can be specified for tumbling event time windows (the other
+windows work accordingly):
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<T> input = ...;
+
+// tumbling event-time windows
+input
+    .keyBy(<key selector>)
+    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
+    .<windowed transformation>(<window function>);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[T] = ...
+
+// tumbling event-time windows
+input
+    .keyBy(<key selector>)
+    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
+    .<windowed transformation>(<window function>)
+{% endhighlight %}
+</div>
+</div>
+
+## Window Functions
+
+The *window function* is used to process the elements of each window (and key) once the system
+determines that a window is ready for processing (see [triggers](#triggers) for how the system
+determines when a window is ready).
+
+The window function can be one of `ReduceFunction`, `FoldFunction` or `WindowFunction`. The first
+two can be executed more efficiently because Flink can incrementally aggregate the elements for each
+window as they arrive. A `WindowFunction` gets an `Iterable` for all the elements contained in a
+window and additional meta information about the window to which the elements belong.
+
+A windowed transformation with a `WindowFunction` cannot be executed as efficiently as the other
+cases because Flink has to buffer *all* elements for a window internally before invoking the function.
+This can be mitigated by combining a `WindowFunction` with a `ReduceFunction` or `FoldFunction` to
+get both incremental aggregation of window elements and the additional information that the
+`WindowFunction` receives. We will look at examples for each of these variants.
+
+### ReduceFunction
+
+A reduce function specifies how two values can be combined to form one element. Flink can use this
+to incrementally aggregate the elements in a window.
+
+A `ReduceFunction` can be used in a program like this:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Tuple2<String, Long>> input = ...;
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .reduce(new ReduceFunction<Tuple2<String, Long>>() {
+      public Tuple2<String, Long> reduce(Tuple2<String, Long> v1, Tuple2<String, Long> v2) {
+        return new Tuple2<>(v1.f0, v1.f1 + v2.f1);
+      }
+    });
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[(String, Long)] = ...
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .reduce { (v1, v2) => (v1._1, v1._2 + v2._2) }
+{% endhighlight %}
+</div>
+</div>
+
+A `ReduceFunction` specifies how two elements from the input can be combined to produce
+an output element. This example will sum up the second field of the tuple for all elements
+in a window.
+
+### FoldFunction
+
+A fold function can be specified like this:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Tuple2<String, Long>> input = ...;
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .fold("", new FoldFunction<Tuple2<String, Long>, String>> {
+       public String fold(String acc, Tuple2<String, Long> value) {
+         return acc + value.f1;
+       }
+    });
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[(String, Long)] = ...
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .fold("") { (acc, v) => acc + v._2 }
+{% endhighlight %}
+</div>
+</div>
+
+A `FoldFunction` specifies how elements from the input will be added to an initial
+accumulator value (`""`, the empty string, in our example). This example will compute
+a concatenation of all the `Long` fields of the input.
+
+### WindowFunction - The Generic Case
+
+Using a `WindowFunction` provides the most flexibility, at the cost of performance. The reason for this
+is that elements cannot be incrementally aggregated for a window and instead need to be buffered
+internally until the window is considered ready for processing. A `WindowFunction` gets an
+`Iterable` containing all the elements of the window being processed. The signature of
+`WindowFunction` is this:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+public interface WindowFunction<IN, OUT, KEY, W extends Window> extends Function, Serializable {
+
+  /**
+   * Evaluates the window and outputs none or several elements.
+   *
+   * @param key The key for which this window is evaluated.
+   * @param window The window that is being evaluated.
+   * @param input The elements in the window being evaluated.
+   * @param out A collector for emitting elements.
+   *
+   * @throws Exception The function may throw exceptions to fail the program and trigger recovery.
+   */
+  void apply(KEY key, W window, Iterable<IN> input, Collector<OUT> out) throws Exception;
+}
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+trait WindowFunction[IN, OUT, KEY, W <: Window] extends Function with Serializable {
+
+  /**
+   * Evaluates the window and outputs none or several elements.
+   *
+   * @param key The key for which this window is evaluated.
+   * @param window The window that is being evaluated.
+   * @param input The elements in the window being evaluated.
+   * @param out A collector for emitting elements.
+   */
+  def apply(key: KEY, window: W, input: Iterable[IN], out: Collector[OUT])
+}
+{% endhighlight %}
+</div>
+</div>
+
+Here we show an example that uses a `WindowFunction` to count the elements in a window. We do this
+because we want to access information about the window itself to emit it along with the count.
+This is very inefficient, however, and should be implemented with a
+`ReduceFunction` in practice. Below, we will see an example of how a `ReduceFunction` can
+be combined with a `WindowFunction` to get both incremental aggregation and the added
+information of a `WindowFunction`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Tuple2<String, Long>> input = ...;
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .apply(new MyWindowFunction());
+
+/* ... */
+
+public class MyWindowFunction implements WindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {
+
+  public void apply(String key, TimeWindow window, Iterable<Tuple2<String, Long>> input, Collector<String> out) {
+    long count = 0;
+    for (Tuple2<String, Long> in: input) {
+      count++;
+    }
+    out.collect("Window: " + window + " count: " + count);
+  }
+}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[(String, Long)] = ...
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .apply(new MyWindowFunction())
+
+/* ... */
+
+class MyWindowFunction extends WindowFunction[(String, Long), String, String, TimeWindow] {
+
+  def apply(key: String, window: TimeWindow, input: Iterable[(String, Long)], out: Collector[String]): Unit = {
+    var count = 0L
+    for (in <- input) {
+      count = count + 1
+    }
+    out.collect(s"Window $window count: $count")
+  }
+}
+{% endhighlight %}
+</div>
+</div>
+
+### WindowFunction with Incremental Aggregation
+
+A `WindowFunction` can be combined with either a `ReduceFunction` or a `FoldFunction`. When doing
+this, the `ReduceFunction`/`FoldFunction` will be used to incrementally aggregate elements as they
+arrive while the `WindowFunction` will be provided with the aggregated result when the window is
+ready for processing. This gives you the benefit of incremental window computation while also
+retaining the additional meta information that a `WindowFunction` provides.
+
+This is an example that shows how incremental aggregation functions can be combined with
+a `WindowFunction`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Tuple2<String, Long>> input = ...;
+
+// for folding incremental computation
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .apply(<initial value>, new MyFoldFunction(), new MyWindowFunction());
+
+// for reducing incremental computation
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .apply(new MyReduceFunction(), new MyWindowFunction());
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[(String, Long)] = ...
+
+// for folding incremental computation
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .apply(<initial value>, new MyFoldFunction(), new MyWindowFunction())
+
+// for reducing incremental computation
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .apply(new MyReduceFunction(), new MyWindowFunction())
+{% endhighlight %}
+</div>
+</div>
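+
+The helper classes in the snippet above are placeholders. A minimal sketch of what they could look
+like (hypothetical names, summing the second tuple field and emitting it together with the window):
+
+{% highlight java %}
+private static class MyReduceFunction implements ReduceFunction<Tuple2<String, Long>> {
+
+  public Tuple2<String, Long> reduce(Tuple2<String, Long> v1, Tuple2<String, Long> v2) {
+    // incremental pre-aggregation: keep the key, sum the counts
+    return new Tuple2<>(v1.f0, v1.f1 + v2.f1);
+  }
+}
+
+private static class MyWindowFunction
+    implements WindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {
+
+  public void apply(String key, TimeWindow window,
+      Iterable<Tuple2<String, Long>> input, Collector<String> out) {
+    // input holds exactly one element: the pre-aggregated result
+    Tuple2<String, Long> result = input.iterator().next();
+    out.collect("Window " + window + " sum for key " + key + ": " + result.f1);
+  }
+}
+{% endhighlight %}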
+
+## Dealing with Late Data
+
+When working with event-time windowing it can happen that elements arrive late, i.e. the
+watermark that Flink uses to keep track of the progress of event time is already past the
+end timestamp of a window to which an element belongs. Please
+see [event time]({{ site.baseurl }}/dev/event_time.html) and especially
+[late elements]({{ site.baseurl }}/dev/event_time.html#late-elements) for a more thorough
+discussion of how Flink deals with event time.
+
+You can specify how a windowed transformation should deal with late elements and how much lateness
+is allowed. The parameter for this is called *allowed lateness*. This specifies by how much time
+elements can be late. Elements that arrive within the allowed lateness are still put into windows
+and are considered when computing window results. If elements arrive after the allowed lateness they
+will be dropped. Flink will also make sure that any state held by the windowing operation is garbage
+collected once the watermark passes the end of a window plus the allowed lateness.
+
+<span class="label label-info">Default</span> By default, the allowed lateness is set to
+`0`. That is, elements that arrive behind the watermark will be dropped.
+
+You can specify an allowed lateness like this:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<T> input = ...;
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .allowedLateness(<time>)
+    .<windowed transformation>(<window function>);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[T] = ...
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .allowedLateness(<time>)
+    .<windowed transformation>(<window function>)
+{% endhighlight %}
+</div>
+</div>
+
+<span class="label label-info">Note</span> When using the `GlobalWindows` window assigner no
+data is ever considered late because the end timestamp of the global window is `Long.MAX_VALUE`.
+
+## Triggers
+
+A `Trigger` determines when a window (as assigned by the `WindowAssigner`) is ready to be
+processed by the *window function*. The trigger observes how elements are added to windows
+and can also keep track of the progress of processing time and event time. Once a trigger
+determines that a window is ready for processing, it fires. This is the signal for the
+window operation to take the elements that are currently in the window and pass them along to
+the window function to produce output for the firing window.
+
+Each `WindowAssigner` (except `GlobalWindows`) comes with a default trigger that should be
+appropriate for most use cases. For example, `TumblingEventTimeWindows` has an `EventTimeTrigger` as
+default trigger. This trigger simply fires once the watermark passes the end of a window.
+
+You can specify the trigger to be used by calling `trigger()` with a given `Trigger`. The
+whole specification of the windowed transformation would then look like this:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<T> input = ...;
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .trigger(<trigger>)
+    .<windowed transformation>(<window function>);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[T] = ...
+
+input
+    .keyBy(<key selector>)
+    .window(<window assigner>)
+    .trigger(<trigger>)
+    .<windowed transformation>(<window function>)
+{% endhighlight %}
+</div>
+</div>
+
+Flink comes with a few triggers out of the box: the already mentioned `EventTimeTrigger` fires
+based on the progress of event time as measured by the watermark, the `ProcessingTimeTrigger`
+does the same based on processing time, and the `CountTrigger` fires once the number of elements
+in a window exceeds the given limit.
+
+<span class="label label-danger">Attention</span> By specifying a trigger using `trigger()` you
+are overwriting the default trigger of a `WindowAssigner`. For example, if you specify a
+`CountTrigger` for `TumblingEventTimeWindows` you will no longer get window firings based on the
+progress of time but only by count. Right now, you have to write your own custom trigger if
+you want to react based on both time and count.
+
+The internal `Trigger` API is still considered experimental but you can check out the code
+if you want to write your own custom trigger:
+{% gh_link /flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/Trigger.java "Trigger.java" %}.
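+
+As a rough illustration, a custom trigger that fires on every element could look like the sketch
+below. The method names follow `Trigger.java` as linked above, but since the API is experimental,
+treat the exact signatures as approximate:
+
+{% highlight java %}
+public class EveryElementTrigger extends Trigger<Object, TimeWindow> {
+
+  @Override
+  public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) {
+    // evaluate the window on each incoming element
+    return TriggerResult.FIRE;
+  }
+
+  @Override
+  public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
+    // we register no processing-time timers, so nothing to do here
+    return TriggerResult.CONTINUE;
+  }
+
+  @Override
+  public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
+    // we register no event-time timers either
+    return TriggerResult.CONTINUE;
+  }
+}
+{% endhighlight %}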
+
+## Non-keyed Windowing
+
+You can also leave out the `keyBy()` when specifying a windowed transformation. This means, however,
+that Flink cannot process windows for different keys in parallel, essentially turning the
+transformation into a non-parallel operation.
+
+<span class="label label-danger">Warning</span> As mentioned in the introduction, non-keyed
+windows have the disadvantage that work cannot be distributed in the cluster because
+windows cannot be computed independently per key. This can have severe performance implications.
+
+
+The basic structure of a non-keyed windowed transformation is as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<T> input = ...;
+
+input
+    .windowAll(<window assigner>)
+    .<windowed transformation>(<window function>);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[T] = ...
+
+input
+    .windowAll(<window assigner>)
+    .<windowed transformation>(<window function>)
+{% endhighlight %}
+</div>
+</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/ClientJmTm.svg
----------------------------------------------------------------------
diff --git a/docs/fig/ClientJmTm.svg b/docs/fig/ClientJmTm.svg
new file mode 100644
index 0000000..b158b7d
--- /dev/null
+++ b/docs/fig/ClientJmTm.svg
@@ -0,0 +1,348 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   width="817.49274"
+   height="463.47787"
+   id="svg2"
+   version="1.1"
+   inkscape:version="0.48.5 r10040">
+  <defs
+     id="defs4" />
+  <sodipodi:namedview
+     id="base"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageopacity="0.0"
+     inkscape:pageshadow="2"
+     inkscape:zoom="0.7"
+     inkscape:cx="118.68649"
+     inkscape:cy="265.49231"
+     inkscape:document-units="px"
+     inkscape:current-layer="layer1"
+     showgrid="false"
+     fit-margin-top="0"
+     fit-margin-left="0"
+     fit-margin-right="0"
+     fit-margin-bottom="0"
+     inkscape:window-width="1600"
+     inkscape:window-height="838"
+     inkscape:window-x="1912"
+     inkscape:window-y="-8"
+     inkscape:window-maximized="1" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     inkscape:label="Layer 1"
+     inkscape:groupmode="layer"
+     id="layer1"
+     transform="translate(0.17493185,-2.7660971)">
+    <g
+       id="g2989"
+       transform="translate(-24.016809,-116.88402)">
+      <path
+         id="path2991"
+         d="m 400.33723,121.08016 0,124.38099 248.19934,0 0,-124.38099 -248.19934,0 z"
+         style="fill:#f2dcdb;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path2993"
+         d="m 400.33723,121.08016 248.19934,0 0,124.38099 -248.19934,0 z"
+         style="fill:none;stroke:#000000;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text2995"
+         style="font-size:22.5056076px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="162.61018"
+         x="447.93558"
+         xml:space="preserve">JobManager</text>
+      <path
+         id="path2997"
+         d="m 40.510092,137.88435 164.103378,0 0,37.75316 163.46573,0 0,-12.58439 25.16877,25.16877 -25.16877,25.16877 0,-12.58438 -163.46573,0 0,37.75315 -164.103378,0 z"
+         style="fill:#b9cde5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path2999"
+         d="m 40.510092,137.88435 164.103378,0 0,37.75316 163.46573,0 0,-12.58439 25.16877,25.16877 -25.16877,25.16877 0,-12.58438 -163.46573,0 0,37.75315 -164.103378,0 z"
+         style="fill:none;stroke:#000000;stroke-width:2.55063534px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3001"
+         d="m 55.288774,144.86109 0,36.45908 132.483006,0 0,-36.45908 -132.483006,0 z"
+         style="fill:#b9cde5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3003"
+         style="font-size:22.5056076px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="171.84125"
+         x="85.103935"
+         xml:space="preserve">Client</text>
+      <path
+         id="path3005"
+         d="m 24.46547,120.27371 42.507465,0 0,42.46058 -7.079889,7.07989 -35.427576,0 z"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3007"
+         d="m 59.893046,169.81418 1.415978,-5.66391 5.663911,-1.41598 z"
+         style="fill:#cdcdcd;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3009"
+         d="m 59.893046,169.81418 1.415978,-5.66391 5.663911,-1.41598 -7.079889,7.07989 -35.427576,0 0,-49.54047 42.507465,0 0,42.46058"
+         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3011"
+         d="m 29.78242,124.80297 26.584747,0"
+         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3013"
+         d="m 29.78242,129.18218 31.901697,0"
+         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3015"
+         d="m 29.78242,133.71144 21.267798,0"
+         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3017"
+         d="m 29.78242,138.24069 31.901697,0"
+         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3019"
+         d="m 29.78242,142.77932 31.901697,0"
+         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3021"
+         d="m 29.78242,147.30857 13.287685,0"
+         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3023"
+         d="m 29.78242,151.83783 26.556615,0"
+         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3025"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="229.55898"
+         x="239.21106"
+         xml:space="preserve">Submit Job</text>
+      <text
+         id="text3027"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="206.73724"
+         x="78.214714"
+         xml:space="preserve">Compiler/</text>
+      <text
+         id="text3029"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="227.74248"
+         x="78.814865"
+         xml:space="preserve">Optimizer</text>
+      <text
+         id="text3031"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="206.94733"
+         x="473.13199"
+         xml:space="preserve">Scheduling,</text>
+      <text
+         id="text3033"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="227.95258"
+         x="424.21982"
+         xml:space="preserve">Resource Management</text>
+      <path
+         id="path3035"
+         d="m 591.89746,422.65529 0,124.38099 248.16182,0 0,-124.38099 -248.16182,0 z"
+         style="fill:#d7e4bd;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3037"
+         d="m 591.89746,422.65529 248.16182,0 0,124.38099 -248.16182,0 z"
+         style="fill:none;stroke:#000000;stroke-width:2.55063534px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3039"
+         d="m 621.82991,437.28394 0,36.30904 187.99684,0 0,-36.30904 -187.99684,0 z"
+         style="fill:#d7e4bd;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3041"
+         style="font-size:22.5056076px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="464.15421"
+         x="631.76624"
+         xml:space="preserve">TaskManager</text>
+      <path
+         id="path3043"
+         d="m 177.34418,422.65529 0,124.38099 248.16182,0 0,-124.38099 -248.16182,0 z"
+         style="fill:#d7e4bd;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3045"
+         d="m 177.34418,422.65529 248.16182,0 0,124.38099 -248.16182,0 z"
+         style="fill:none;stroke:#000000;stroke-width:2.55063534px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3047"
+         d="m 207.42668,437.28394 0,36.30904 187.84679,0 0,-36.30904 -187.84679,0 z"
+         style="fill:#d7e4bd;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3049"
+         style="font-size:22.5056076px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="464.15421"
+         x="217.23552"
+         xml:space="preserve">TaskManager</text>
+      <path
+         id="path3051"
+         d="m 416.89761,258.47689 -147.58052,146.38022 3.95724,3.99475 147.58052,-146.38022 z m -143.26694,128.76958 -6.30157,23.53712 23.59338,-6.11403 c 1.50037,-0.37509 2.40059,-1.91297 2.0255,-3.41335 -0.39385,-1.51913 -1.93173,-2.41935 -3.4321,-2.0255 l -18.92347,4.89497 3.43211,3.45086 5.045,-18.8672 c 0.39385,-1.51913 -0.48762,-3.05701 -1.98799,-3.45086 -1.50038,-0.41261 -3.05701,0.48762 -3.45086,1.98799 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3053"
+         d="m 612.90269,257.57667 140.0974,144.26094 -4.05101,3.93848 -140.0974,-144.26094 z m 136.12141,126.59404 5.85146,23.63088 -23.48085,-6.52662 c -1.50038,-0.45011 -2.36309,-1.988 -1.95049,-3.48837 0.41261,-1.50038 1.95049,-2.36309 3.45086,-1.95049 l 18.82969,5.25131 -3.48837,3.37584 -4.68866,-18.94222 c -0.3751,-1.53788 0.56264,-3.03825 2.06301,-3.41335 1.50037,-0.37509 3.03826,0.52513 3.41335,2.06302 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3055"
+         d="m 320.29229,409.69582 134.95862,-137.19043 -4.0135,-3.93848 -134.93986,137.19042 z m 130.85135,-119.52353 6.0015,-23.61213 -23.49961,6.39534 c -1.50037,0.39385 -2.38184,1.95049 -1.96924,3.45086 0.39385,1.50037 1.95049,2.38184 3.45086,1.96924 l 18.84845,-5.12003 -3.46962,-3.41335 -4.80119,18.94222 c -0.39385,1.50038 0.52513,3.03826 2.0255,3.41335 1.50037,0.3751 3.03826,-0.52513 3.41335,-2.0255 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3057"
+         d="m 713.84034,414.29071 -141.48525,-148.76206 4.08852,-3.90097 141.48524,148.79957 z m -137.69681,-131.05765 -5.58889,-23.70591 23.40583,6.7892 c 1.46286,0.45011 2.32558,1.98799 1.91298,3.48837 -0.45012,1.50037 -1.988,2.36308 -3.48837,1.91297 l -18.79218,-5.43885 3.52587,-3.33833 4.50113,19.01723 c 0.37509,1.50038 -0.56264,3.00075 -2.10053,3.37584 -1.50037,0.33759 -3.00074,-0.60015 -3.37584,-2.10052 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3059"
+         d="m 294.48586,342.57284 c 0.95649,1.01276 1.61291,2.21305 1.95049,3.6009 0.35634,1.38785 0.31883,2.75694 -0.11253,4.08852 -0.26256,0.8252 -0.63766,1.53788 -1.10652,2.1943 -0.48763,0.63765 -1.08777,1.29407 -1.80045,1.98799 l -2.49437,2.38184 -6.67667,-11.27155 2.15679,-2.04426 c 0.67517,-0.63766 1.33158,-1.18155 1.96924,-1.6129 0.63766,-0.43136 1.33158,-0.71268 2.08177,-0.84396 0.73143,-0.13129 1.42535,-0.0563 2.10052,0.18754 0.69393,0.26257 1.33158,0.69392 1.93173,1.33158 z m -1.2003,1.33159 c -0.45011,-0.46887 -0.91897,-0.7877 -1.4066,-0.97525 -0.50637,-0.18754 -1.01275,-0.22505 -1.57539,-0.13128 -0.54388,0.0938 -1.05026,0.28132 -1.50037,0.58139 -0.46887,0.28132 -0.97524,0.69393 -1.55664,1.25657 l -1.03151,0.97524 5.19505,8.8147 1.29407,-1.21906 c 0.58139,-0.56264 1.06902,-1.12528 1.44411,-1.66916 0.37509,-0.52513 0.65641,-1.12528 0.84396,-1.74419 0.30007,-1.01275 0.28132,-2.04426 -0.0563,-3.09452 -0.33759,-1.05026 -0.88147,-1.98799 -1.65042,-2.79444 z m 8.90847,-5.87022 
 c -0.0938,-0.13128 -0.16879,-0.24381 -0.24381,-0.33758 -0.0563,-0.0938 -0.15003,-0.18755 -0.24381,-0.28132 -0.4126,-0.45011 -0.86271,-0.67517 -1.35033,-0.69392 -0.46887,-0.0188 -0.97525,0.22505 -1.51913,0.73143 -0.5814,0.56264 -0.93774,1.23781 -1.05026,2.04426 -0.0938,0.78769 0.0563,1.55664 0.45011,2.30682 z m 1.85672,6.60165 c -0.97525,0.91898 -1.93173,1.4066 -2.90698,1.48162 -0.97524,0.075 -1.87546,-0.33759 -2.70067,-1.2003 -1.21905,-1.27532 -1.85671,-2.73818 -1.91298,-4.42611 -0.0563,-1.66916 0.50638,-3.05701 1.68792,-4.18229 0.7877,-0.75018 1.5754,-1.12528 2.36309,-1.14403 0.7877,0 1.51913,0.35634 2.1943,1.06901 0.11253,0.11253 0.28132,0.33759 0.50638,0.63766 0.22505,0.30008 0.45011,0.65642 0.71267,1.08777 l -5.10127,4.87622 c 0.0938,0.13128 0.18755,0.26256 0.28132,0.39385 0.0938,0.13128 0.18755,0.24381 0.28132,0.33758 0.56264,0.5814 1.18155,0.88147 1.85671,0.86272 0.67517,0 1.33159,-0.31883 1.988,-0.93774 0.45011,-0.43136 0.84396,-0.95649 1.14403,-1.59414 0.31883,-0.63766 0.525
 14,-1.23781 0.63766,-1.76294 l 0.075,-0.075 0.95649,1.51913 c -0.15004,0.26256 -0.26256,0.50637 -0.37509,0.73143 -0.11253,0.22505 -0.26257,0.48762 -0.45011,0.78769 -0.20631,0.30008 -0.3751,0.56264 -0.54389,0.7877 -0.16879,0.22506 -0.4126,0.46887 -0.69392,0.75019 z m 6.69541,-15.71642 c 0.65642,0.67517 1.16279,1.42536 1.51913,2.23181 0.3751,0.80645 0.5814,1.59415 0.61891,2.34433 0.0563,0.7877 -0.0188,1.51913 -0.26257,2.21305 -0.22505,0.71268 -0.6189,1.31283 -1.14403,1.81921 -0.3751,0.35634 -0.75019,0.63766 -1.14404,0.84396 -0.39385,0.2063 -0.80645,0.35634 -1.21905,0.45011 l 2.11928,3.56339 -1.18155,1.10652 -6.8267,-11.59039 1.16279,-1.10652 0.52513,0.88147 c 0.13128,-0.63766 0.30008,-1.21905 0.50638,-1.74419 0.2063,-0.52513 0.54388,-0.99399 0.994,-1.42535 0.67516,-0.63766 1.38784,-0.91898 2.13803,-0.84396 0.75018,0.075 1.48162,0.50638 2.19429,1.25656 z m -1.01275,1.38785 c -0.46886,-0.48762 -0.93773,-0.76894 -1.4066,-0.84396 -0.46886,-0.0563 -0.93773,0.13128 -1.4066,0.58139 -0.33758,
 0.33759 -0.6189,0.75019 -0.8252,1.27532 -0.20631,0.52513 -0.35634,1.03151 -0.48763,1.55664 l 2.83196,4.80119 c 0.4126,-0.11252 0.76894,-0.24381 1.08777,-0.4126 0.30008,-0.16879 0.63766,-0.4126 0.994,-0.73143 0.4126,-0.4126 0.71268,-0.88147 0.84396,-1.4066 0.15004,-0.54389 0.16879,-1.06902 0.075,-1.6129 -0.0938,-0.5814 -0.28132,-1.14404 -0.56264,-1.66917 -0.30008,-0.54388 -0.67517,-1.05026 -1.14404,-1.53788 z m 0.63766,-10.39009 6.95798,11.77793 -1.16279,1.10653 -6.95798,-11.77793 z m 11.81544,-1.38785 c 0.5814,0.63766 1.05027,1.31283 1.40661,2.06302 0.35633,0.75018 0.56264,1.51913 0.6189,2.26931 0.0563,0.76894 -0.0375,1.51913 -0.28132,2.21305 -0.22506,0.69393 -0.65641,1.33159 -1.29407,1.93174 -0.82521,0.78769 -1.68792,1.16279 -2.58815,1.14403 -0.91898,-0.0375 -1.76294,-0.46887 -2.55063,-1.29407 -0.60015,-0.61891 -1.06902,-1.31283 -1.4066,-2.04426 -0.35634,-0.75019 -0.56264,-1.50037 -0.61891,-2.28807 -0.0563,-0.75019 0.0375,-1.48162 0.30008,-2.21305 0.26256,-0.73143 0.69392,-1.36909 
 1.27531,-1.93173 0.80646,-0.76895 1.65042,-1.14404 2.56939,-1.12528 0.90023,0 1.76294,0.43135 2.56939,1.27531 z m 0.5814,4.44486 c -0.075,-0.54388 -0.28132,-1.10652 -0.5814,-1.65041 -0.28132,-0.54389 -0.65641,-1.05026 -1.12528,-1.53788 -0.54388,-0.5814 -1.10652,-0.88147 -1.66916,-0.91898 -0.56264,-0.0375 -1.08777,0.18754 -1.59415,0.65641 -0.4126,0.39385 -0.67517,0.82521 -0.80645,1.33158 -0.13128,0.48762 -0.16879,1.01276 -0.075,1.5754 0.0938,0.54388 0.28132,1.08777 0.5814,1.63165 0.30007,0.56264 0.65641,1.06902 1.10652,1.53789 0.54389,0.58139 1.10653,0.88146 1.66917,0.91897 0.58139,0.0563 1.10652,-0.16879 1.63165,-0.65641 0.39385,-0.37509 0.65642,-0.82521 0.80646,-1.31283 0.13128,-0.50637 0.15003,-1.0315 0.0563,-1.57539 z m 8.27081,0.56264 -1.29407,1.21905 -0.80645,-4.89496 -7.46436,-5.32633 1.2003,-1.14404 5.90772,4.23856 -1.59415,-8.32708 1.25656,-1.21905 z m 8.92722,-29.01348 -3.28206,3.13203 5.90772,10.015 -1.23781,1.18154 -5.92648,-10.01499 -3.28206,3.15078 -0.76895,-1.27531 7.8
 3946,-7.46436 z m 8.66466,0.43136 c -0.6189,0.63766 -1.14403,1.21905 -1.57539,1.76294 -0.4126,0.54389 -0.73143,1.03151 -0.91898,1.48162 -0.2063,0.45011 -0.28132,0.88147 -0.24381,1.27532 0.0563,0.4126 0.24381,0.78769 0.5814,1.12528 0.28132,0.30007 0.60014,0.43135 0.95648,0.39385 0.33759,-0.0375 0.73144,-0.26257 1.16279,-0.65642 0.3751,-0.35634 0.67517,-0.80645 0.91898,-1.35033 0.22506,-0.54389 0.39385,-1.08778 0.50638,-1.65042 z m 2.10053,3.56339 c -0.0563,0.16879 -0.11253,0.4126 -0.18755,0.71268 -0.075,0.31883 -0.18755,0.60015 -0.30008,0.88147 -0.11252,0.30007 -0.28132,0.60015 -0.46886,0.90022 -0.2063,0.31883 -0.46887,0.63766 -0.82521,0.95649 -0.54388,0.52513 -1.14403,0.78769 -1.78169,0.80645 -0.65642,0 -1.23781,-0.26257 -1.72543,-0.76894 -0.52513,-0.56264 -0.84396,-1.14404 -0.95649,-1.74419 -0.11253,-0.60015 -0.0563,-1.25656 0.18755,-1.95048 0.24381,-0.67517 0.65641,-1.38785 1.2003,-2.11928 0.56264,-0.73143 1.25656,-1.51913 2.08176,-2.34434 -0.0938,-0.15003 -0.16879,-0.26256 -0.225
 05,-0.37509 -0.0563,-0.0938 -0.13128,-0.18755 -0.22506,-0.28132 -0.18754,-0.2063 -0.37509,-0.31883 -0.58139,-0.37509 -0.18755,-0.0375 -0.39385,-0.0375 -0.61891,0.0187 -0.2063,0.075 -0.43135,0.18755 -0.65641,0.33759 -0.2063,0.15003 -0.43136,0.33758 -0.67517,0.56264 -0.33758,0.33758 -0.69392,0.76894 -1.03151,1.31282 -0.33758,0.54389 -0.60014,1.01276 -0.78769,1.38785 l -0.0563,0.0563 -0.88147,-1.38784 c 0.1688,-0.28132 0.45012,-0.67517 0.84396,-1.18155 0.3751,-0.52513 0.76895,-0.97524 1.2003,-1.36909 0.84396,-0.8252 1.63166,-1.29407 2.30683,-1.44411 0.69392,-0.13128 1.31283,0.075 1.87547,0.65642 0.0938,0.11252 0.2063,0.24381 0.30007,0.39384 0.11253,0.15004 0.2063,0.28132 0.28132,0.41261 l 3.3946,5.73893 -1.16279,1.10652 z m 6.22655,-4.20105 c -0.50638,0.48762 -1.03151,0.86272 -1.5754,1.12528 -0.52513,0.26257 -0.99399,0.45011 -1.4066,0.56264 l -0.93773,-1.50037 0.075,-0.0563 c 0.15004,-0.0188 0.35634,-0.0375 0.58139,-0.0563 0.24381,-0.0375 0.50638,-0.11253 0.82521,-0.2063 0.28132,-0.093
 8 0.58139,-0.22506 0.90022,-0.39385 0.31883,-0.16879 0.61891,-0.39385 0.90023,-0.65641 0.58139,-0.56264 0.93773,-1.10653 1.06901,-1.63166 0.11253,-0.52513 -0.0188,-1.01275 -0.4126,-1.42535 -0.2063,-0.22506 -0.45011,-0.30008 -0.73143,-0.26257 -0.28132,0.0375 -0.6189,0.16879 -1.01275,0.4126 -0.2063,0.11253 -0.43136,0.24381 -0.69393,0.41261 -0.28132,0.16879 -0.56264,0.33758 -0.84396,0.50637 -0.6189,0.31883 -1.16279,0.46887 -1.63165,0.43136 -0.48762,-0.0563 -0.90023,-0.24381 -1.25657,-0.6189 -0.30007,-0.33759 -0.52513,-0.69393 -0.65641,-1.10653 -0.13128,-0.43136 -0.16879,-0.88147 -0.11253,-1.38785 0.0563,-0.46886 0.22506,-0.97524 0.50638,-1.50037 0.26256,-0.52513 0.65641,-1.03151 1.16279,-1.50037 0.4126,-0.39385 0.88147,-0.73144 1.38784,-1.03151 0.50638,-0.28132 0.97525,-0.46887 1.3691,-0.5814 l 0.88147,1.44411 -0.0563,0.0563 c -0.11253,0 -0.28132,0.0375 -0.50637,0.0563 -0.22506,0.0375 -0.48763,0.11253 -0.76895,0.22506 -0.26256,0.075 -0.52513,0.2063 -0.8252,0.37509 -0.30008,0.16879 -0.5
 6264,0.3751 -0.82521,0.61891 -0.52513,0.50637 -0.84396,1.01275 -0.95649,1.53788 -0.11252,0.52513 0.0188,0.97524 0.3751,1.35034 0.18754,0.2063 0.43136,0.30007 0.73143,0.28132 0.28132,0 0.6189,-0.13129 1.01275,-0.35634 0.26257,-0.13129 0.50638,-0.30008 0.76894,-0.46887 0.28132,-0.15004 0.52513,-0.31883 0.7877,-0.46887 0.60015,-0.33758 1.12528,-0.48762 1.6129,-0.46886 0.48762,0.0187 0.90023,0.22505 1.27532,0.60015 0.30007,0.31883 0.50638,0.71267 0.65641,1.16279 0.13129,0.45011 0.16879,0.91897 0.0938,1.4066 -0.075,0.52513 -0.24381,1.0315 -0.54389,1.57539 -0.28132,0.52513 -0.69392,1.03151 -1.21905,1.53788 z m 10.67141,-10.48386 -1.4066,1.35034 -5.12003,-0.93774 -0.22506,1.76294 1.61291,2.71943 -1.18155,1.12528 -6.95798,-11.77793 1.18154,-1.12528 4.46361,7.55813 0.75019,-7.33308 1.53788,-1.46286 -0.84396,7.05175 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3061"
+         d="m 383.62682,392.9854 c -0.58139,0.60015 -1.2003,1.08777 -1.83796,1.50038 -0.65641,0.4126 -1.35033,0.75018 -2.11927,1.01275 l -1.10653,-1.65041 0.0938,-0.0938 c 0.75019,-0.0188 1.50038,-0.22505 2.26932,-0.60015 0.75019,-0.37509 1.38784,-0.8252 1.91297,-1.35033 0.84397,-0.88147 1.33159,-1.70668 1.44411,-2.51313 0.0938,-0.80645 -0.11252,-1.46286 -0.63765,-1.98799 -0.28132,-0.28132 -0.60015,-0.41261 -0.95649,-0.43136 -0.35634,-0.0188 -0.73143,0.075 -1.12528,0.30007 -0.33759,0.20631 -0.71268,0.45012 -1.14404,0.75019 -0.4126,0.30008 -0.8252,0.5814 -1.23781,0.82521 -0.76894,0.50637 -1.50037,0.75018 -2.19429,0.76894 -0.69393,0.0188 -1.29408,-0.22506 -1.80045,-0.73143 -0.86272,-0.86272 -1.23781,-1.93174 -1.08777,-3.24456 0.15003,-1.29407 0.78769,-2.53188 1.93173,-3.69467 0.6189,-0.63766 1.25656,-1.16279 1.89422,-1.53789 0.65641,-0.39384 1.23781,-0.69392 1.80045,-0.90022 l 1.03151,1.55664 -0.0938,0.0938 c -0.54388,0.0188 -1.18154,0.22506 -1.89422,0.56264 -0.71268,0.35634 -1.36909,
 0.82521 -1.95048,1.4066 -0.73144,0.75019 -1.14404,1.50038 -1.25657,2.26932 -0.11253,0.75018 0.0938,1.38784 0.60015,1.87546 0.26257,0.26257 0.5814,0.41261 0.93774,0.46887 0.35633,0.0375 0.78769,-0.0938 1.29407,-0.4126 0.31883,-0.2063 0.73143,-0.46887 1.23781,-0.82521 0.52513,-0.33758 0.95648,-0.6189 1.31282,-0.84396 0.73144,-0.4126 1.4066,-0.6189 2.00675,-0.6189 0.61891,0 1.16279,0.24381 1.63166,0.71268 0.39385,0.39384 0.71268,0.88147 0.90022,1.46286 0.20631,0.58139 0.28132,1.2003 0.22506,1.8192 -0.075,0.67517 -0.26257,1.35034 -0.5814,2.00675 -0.33758,0.67517 -0.8252,1.35034 -1.50037,2.04426 z m 6.32033,-15.11626 c -0.0938,-0.13129 -0.1688,-0.24381 -0.24382,-0.33759 -0.075,-0.075 -0.16879,-0.16879 -0.26256,-0.26256 -0.43136,-0.43136 -0.90022,-0.65642 -1.36909,-0.65642 -0.48762,0 -0.97524,0.26257 -1.50038,0.7877 -0.56264,0.58139 -0.90022,1.27532 -0.97524,2.08177 -0.075,0.78769 0.0938,1.55664 0.52513,2.30682 z m 2.08176,6.54538 c -0.93773,0.93773 -1.87546,1.46286 -2.85071,1.57539 -0.97
 524,0.0938 -1.89422,-0.28132 -2.75693,-1.10653 -1.23781,-1.2378 -1.93173,-2.68191 -2.04426,-4.35108 -0.13128,-1.66917 0.37509,-3.09452 1.51913,-4.25731 0.76894,-0.76894 1.53788,-1.18154 2.32558,-1.21905 0.78769,-0.0188 1.53788,0.30007 2.2318,0.97524 0.13128,0.13128 0.30008,0.33758 0.52513,0.63766 0.24381,0.28132 0.48762,0.63766 0.76894,1.06901 l -4.93247,5.04501 c 0.0938,0.13128 0.18754,0.26257 0.28132,0.39385 0.11252,0.11253 0.2063,0.22505 0.31882,0.31883 0.56265,0.56264 1.2003,0.8252 1.85672,0.80645 0.67517,-0.0375 1.33158,-0.37509 1.96924,-1.01275 0.43136,-0.45012 0.80645,-0.994 1.08777,-1.65041 0.30007,-0.63766 0.48762,-1.23781 0.58139,-1.7817 l 0.075,-0.0563 0.994,1.46286 c -0.13128,0.28132 -0.24381,0.52513 -0.33758,0.75019 -0.0938,0.22506 -0.24381,0.48762 -0.43136,0.80645 -0.18755,0.31883 -0.35634,0.58139 -0.52513,0.80645 -0.15004,0.22506 -0.3751,0.48762 -0.65642,0.7877 z m 5.45761,-16.44785 c 0.0938,0.0938 0.22506,0.24381 0.3751,0.4126 0.13128,0.18755 0.26256,0.35634 0.35634,
 0.50638 l 3.4321,5.38259 -1.12528,1.14403 -3.0195,-4.70742 c -0.16879,-0.26256 -0.31883,-0.48762 -0.45011,-0.67517 -0.13129,-0.16879 -0.28132,-0.33758 -0.45012,-0.50637 -0.33758,-0.33759 -0.67516,-0.50638 -1.01275,-0.50638 -0.33758,0 -0.73143,0.22506 -1.16279,0.67517 -0.30007,0.30007 -0.54388,0.73143 -0.75018,1.27532 -0.18755,0.54388 -0.33759,1.10652 -0.45012,1.68792 l 3.93848,6.18904 -1.12528,1.14403 -5.28881,-8.28956 1.12528,-1.14404 0.58139,0.91898 c 0.13128,-0.73143 0.30008,-1.36909 0.50638,-1.91297 0.2063,-0.54389 0.50637,-1.01276 0.91898,-1.42536 0.58139,-0.58139 1.18154,-0.90022 1.80045,-0.93773 0.63765,-0.0375 1.2378,0.2063 1.80044,0.76894 z m 4.40735,-12.99699 7.35183,11.55288 -1.12528,1.14403 -0.54388,-0.86271 c -0.15004,0.86271 -0.33759,1.51913 -0.50638,1.95049 -0.18755,0.43135 -0.46887,0.84396 -0.8252,1.21905 -0.63766,0.63766 -1.35034,0.95649 -2.11928,0.93773 -0.7877,-0.0375 -1.55664,-0.43135 -2.34434,-1.18154 -0.65641,-0.65641 -1.18154,-1.36909 -1.55663,-2.13803 -0.3938
 5,-0.7877 -0.63766,-1.55664 -0.73144,-2.34434 -0.0938,-0.76894 -0.0375,-1.51913 0.1688,-2.25056 0.2063,-0.71268 0.58139,-1.35033 1.10652,-1.87547 0.35634,-0.35633 0.71268,-0.65641 1.08777,-0.88147 0.3751,-0.24381 0.76894,-0.4126 1.2003,-0.52513 l -2.30682,-3.58214 z m 1.89422,5.90772 c -0.4126,0.13129 -0.78769,0.30008 -1.10652,0.48763 -0.31883,0.2063 -0.63766,0.45011 -0.91898,0.75018 -0.4126,0.4126 -0.69392,0.90023 -0.80645,1.42536 -0.13128,0.54388 -0.13128,1.10652 0,1.68792 0.11253,0.52513 0.33758,1.06901 0.67517,1.63165 0.33758,0.56264 0.71267,1.05027 1.14403,1.46287 0.48762,0.48762 0.97524,0.75019 1.46287,0.80645 0.46886,0.0375 0.93773,-0.18755 1.4066,-0.65641 0.33758,-0.35634 0.60015,-0.7877 0.76894,-1.31283 0.18754,-0.52513 0.31883,-1.03151 0.4126,-1.53788 z m 15.2288,-3.99474 c -0.5814,0.58139 -1.18155,1.08777 -1.83796,1.50037 -0.63766,0.4126 -1.35034,0.75019 -2.11928,1.01275 l -1.08777,-1.65041 0.075,-0.0938 c 0.75019,-0.0187 1.51913,-0.22506 2.26931,-0.60015 0.75019,-0.37509
  1.38785,-0.82521 1.91298,-1.35034 0.86272,-0.88147 1.33158,-1.70667 1.44411,-2.51312 0.11253,-0.80645 -0.11253,-1.46287 -0.63766,-1.988 -0.28132,-0.28132 -0.60015,-0.43135 -0.95649,-0.45011 -0.33758,-0.0188 -0.71267,0.0938 -1.12528,0.31883 -0.31883,0.2063 -0.71267,0.45011 -1.12528,0.75019 -0.43135,0.30007 -0.84396,0.56264 -1.23781,0.8252 -0.78769,0.50638 -1.51912,0.75019 -2.19429,0.76894 -0.69392,0 -1.29407,-0.24381 -1.8192,-0.75018 -0.86272,-0.84396 -1.21906,-1.91298 -1.08778,-3.22581 0.15004,-1.29407 0.7877,-2.53188 1.93174,-3.69467 0.63765,-0.63766 1.25656,-1.16279 1.91297,-1.55663 0.63766,-0.3751 1.23781,-0.67517 1.7817,-0.88147 l 1.0315,1.55663 -0.075,0.0938 c -0.56264,0.0187 -1.20029,0.2063 -1.91297,0.56264 -0.71268,0.35634 -1.36909,0.8252 -1.95049,1.4066 -0.71268,0.75018 -1.14403,1.50037 -1.23781,2.26931 -0.11252,0.75019 0.075,1.38785 0.5814,1.87547 0.26256,0.26257 0.58139,0.4126 0.93773,0.45011 0.3751,0.0375 0.80645,-0.0938 1.29407,-0.39385 0.31883,-0.2063 0.73144,-0.46886 
 1.25657,-0.8252 0.50637,-0.33759 0.93773,-0.61891 1.31282,-0.84396 0.73144,-0.43136 1.38785,-0.63766 2.00675,-0.61891 0.60015,0 1.14404,0.24381 1.61291,0.71268 0.4126,0.39385 0.71267,0.88147 0.91897,1.46287 0.20631,0.58139 0.26257,1.18154 0.20631,1.8192 -0.0563,0.67517 -0.26257,1.33158 -0.5814,2.00675 -0.31883,0.67517 -0.8252,1.35034 -1.50037,2.04426 z m 3.26331,-17.32932 0.73143,1.12528 -2.34433,2.36309 2.43811,3.82595 c 0.13128,0.2063 0.28132,0.41261 0.45011,0.65642 0.16879,0.22505 0.31883,0.4126 0.45011,0.52513 0.30008,0.30007 0.5814,0.43136 0.88147,0.43136 0.30007,-0.0188 0.63766,-0.22506 1.03151,-0.61891 0.16879,-0.16879 0.33758,-0.39385 0.50637,-0.69392 0.1688,-0.28132 0.28132,-0.48762 0.31883,-0.60015 l 0.0563,-0.0563 0.78769,1.18155 c -0.16879,0.28132 -0.35634,0.58139 -0.58139,0.86271 -0.2063,0.30008 -0.43136,0.54389 -0.61891,0.75019 -0.54388,0.56264 -1.10652,0.88147 -1.65041,0.93773 -0.54388,0.075 -1.08777,-0.15003 -1.63165,-0.67516 -0.13129,-0.13129 -0.24381,-0.26257 -0.35
 634,-0.41261 -0.11253,-0.15003 -0.22506,-0.31883 -0.35634,-0.50637 l -2.85071,-4.44486 -0.75019,0.76894 -0.73143,-1.12528 0.76894,-0.76894 -1.51913,-2.38184 1.12528,-1.16279 1.51913,2.38184 z m 7.95198,-1.63166 c -0.60015,0.63766 -1.10652,1.25657 -1.50037,1.80045 -0.4126,0.56264 -0.71268,1.06902 -0.88147,1.51913 -0.18755,0.46887 -0.26257,0.90023 -0.18755,1.29407 0.0563,0.39385 0.26257,0.75019 0.60015,1.08778 0.30008,0.30007 0.61891,0.4126 0.97524,0.37509 0.33759,-0.0563 0.73144,-0.28132 1.14404,-0.71268 0.35634,-0.37509 0.65641,-0.8252 0.86271,-1.36909 0.20631,-0.54389 0.35634,-1.10653 0.45012,-1.66917 z m 2.21305,3.48837 c -0.0375,0.1688 -0.0938,0.41261 -0.16879,0.71268 -0.0563,0.30008 -0.15004,0.60015 -0.24381,0.88147 -0.11253,0.30008 -0.26257,0.61891 -0.45011,0.93773 -0.18755,0.31883 -0.45011,0.63766 -0.7877,0.97525 -0.52513,0.54388 -1.12528,0.8252 -1.76294,0.86271 -0.65641,0.0188 -1.23781,-0.2063 -1.74418,-0.71268 -0.54389,-0.54388 -0.88147,-1.10652 -1.01275,-1.70667 -0.15004,-0
 .60015 -0.0938,-1.25656 0.11252,-1.96924 0.22506,-0.67517 0.60015,-1.4066 1.14404,-2.15679 0.52513,-0.75019 1.18154,-1.55664 1.98799,-2.41935 -0.0938,-0.13128 -0.16879,-0.24381 -0.24381,-0.33758 -0.0563,-0.11253 -0.13128,-0.20631 -0.22505,-0.28133 -0.2063,-0.2063 -0.39385,-0.31882 -0.60015,-0.35633 -0.18755,-0.0375 -0.39385,-0.0375 -0.61891,0.0375 -0.2063,0.075 -0.43135,0.18754 -0.63766,0.35633 -0.2063,0.1688 -0.43135,0.35634 -0.65641,0.5814 -0.33758,0.35634 -0.67517,0.80645 -0.994,1.35034 -0.31883,0.56264 -0.56264,1.0315 -0.73143,1.42535 l -0.0563,0.0563 -0.93774,-1.35033 c 0.1688,-0.30008 0.43136,-0.71268 0.80645,-1.23781 0.35634,-0.52513 0.73144,-0.994 1.14404,-1.4066 0.82521,-0.84396 1.59415,-1.35034 2.26931,-1.51913 0.67517,-0.15004 1.31283,0.0375 1.89423,0.60015 0.0938,0.11253 0.2063,0.24381 0.31883,0.37509 0.11252,0.15004 0.2063,0.28132 0.28132,0.41261 l 3.60089,5.6264 -1.12528,1.12528 z m 2.02551,-14.29106 0.71268,1.12528 -2.32558,2.38185 2.4381,3.82595 c 0.13129,0.18755 0.2
 6257,0.4126 0.45012,0.63766 0.16879,0.24381 0.31882,0.4126 0.43135,0.54388 0.30008,0.28132 0.60015,0.43136 0.90023,0.41261 0.28132,0 0.63766,-0.2063 1.0315,-0.61891 0.1688,-0.16879 0.33759,-0.39384 0.50638,-0.67516 0.16879,-0.30008 0.26257,-0.48763 0.31883,-0.60015 l 0.0563,-0.075 0.7877,1.18154 c -0.16879,0.30008 -0.37509,0.5814 -0.5814,0.88147 -0.22505,0.28132 -0.43135,0.54389 -0.6189,0.73143 -0.56264,0.56264 -1.10652,0.88147 -1.65041,0.95649 -0.56264,0.0563 -1.10653,-0.16879 -1.63166,-0.69392 -0.13128,-0.13128 -0.24381,-0.26257 -0.35634,-0.4126 -0.11252,-0.13129 -0.24381,-0.30008 -0.37509,-0.50638 l -2.83195,-4.44486 -0.76895,0.76894 -0.71267,-1.10652 0.76894,-0.7877 -1.51913,-2.38184 1.12528,-1.16279 1.51913,2.38184 z m 6.88296,-7.033 5.30757,8.27081 -1.12528,1.14404 -0.60015,-0.90023 c -0.13128,0.71268 -0.28132,1.35034 -0.48762,1.89422 -0.2063,0.54389 -0.50637,1.03151 -0.91898,1.44411 -0.58139,0.5814 -1.18154,0.90023 -1.80044,0.93774 -0.61891,0.0375 -1.21906,-0.22506 -1.7817,-0
 .76894 -0.13128,-0.13129 -0.26256,-0.28132 -0.37509,-0.43136 -0.11253,-0.15004 -0.24381,-0.31883 -0.3751,-0.50638 l -3.4321,-5.38259 1.12528,-1.14403 3.0195,4.70742 c 0.11253,0.2063 0.26257,0.4126 0.43136,0.63766 0.18755,0.24381 0.31883,0.4126 0.45011,0.52513 0.33759,0.35634 0.69392,0.52513 1.05026,0.50638 0.33759,0 0.73144,-0.20631 1.14404,-0.63766 0.30007,-0.30008 0.54388,-0.73144 0.75019,-1.29408 0.2063,-0.56264 0.35633,-1.12528 0.45011,-1.66916 l -3.95724,-6.18904 z m 9.78994,4.03226 c -0.50637,0.50637 -1.01275,0.90022 -1.53788,1.18154 -0.52513,0.28132 -0.994,0.48762 -1.4066,0.6189 l -0.97525,-1.46286 0.0563,-0.075 c 0.16879,0 0.35634,-0.0375 0.60015,-0.075 0.22505,-0.0375 0.50637,-0.11253 0.80645,-0.22505 0.28132,-0.11253 0.58139,-0.24381 0.88147,-0.43136 0.31883,-0.16879 0.60015,-0.4126 0.88147,-0.69392 0.56264,-0.56264 0.90022,-1.12528 1.01275,-1.65041 0.11253,-0.54389 -0.0563,-1.01276 -0.46887,-1.42536 -0.2063,-0.2063 -0.45011,-0.28132 -0.73143,-0.22506 -0.28132,0.0563 -0.61
 89,0.18755 -0.994,0.43136 -0.18754,0.13128 -0.43135,0.28132 -0.69392,0.46887 -0.26256,0.16879 -0.52513,0.33758 -0.80645,0.50637 -0.6189,0.35634 -1.14403,0.50638 -1.63166,0.48763 -0.48762,-0.0188 -0.90022,-0.20631 -1.27531,-0.56264 -0.31883,-0.31883 -0.54389,-0.67517 -0.67517,-1.08778 -0.15004,-0.4126 -0.2063,-0.86271 -0.16879,-1.36909 0.0375,-0.48762 0.18754,-0.99399 0.45011,-1.51912 0.24381,-0.54389 0.6189,-1.06902 1.10652,-1.55664 0.39385,-0.41261 0.84396,-0.76894 1.35034,-1.06902 0.50638,-0.30007 0.95649,-0.50638 1.35034,-0.6189 l 0.93773,1.38784 -0.0563,0.075 c -0.11253,0 -0.28132,0.0375 -0.50638,0.075 -0.22506,0.0563 -0.46887,0.13128 -0.76894,0.24381 -0.24381,0.0938 -0.52513,0.22506 -0.80645,0.4126 -0.28132,0.18755 -0.56264,0.39385 -0.80645,0.65642 -0.48763,0.50637 -0.7877,1.01275 -0.90023,1.55663 -0.0938,0.52514 0.0375,0.97525 0.4126,1.33159 0.20631,0.2063 0.45012,0.28132 0.75019,0.26256 0.28132,-0.0188 0.61891,-0.15004 1.01275,-0.39385 0.24381,-0.15003 0.48763,-0.31883 0.7501
 9,-0.48762 0.26257,-0.16879 0.50638,-0.33758 0.76894,-0.50637 0.5814,-0.35634 1.10653,-0.52513 1.59415,-0.52513 0.48762,0 0.90022,0.18754 1.27532,0.56264 0.31883,0.30007 0.56264,0.69392 0.71267,1.12528 0.15004,0.45011 0.20631,0.91897 0.15004,1.4066 -0.0563,0.52513 -0.22505,1.05026 -0.48762,1.59414 -0.28132,0.54389 -0.67517,1.06902 -1.16279,1.5754 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3063"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="507.19437"
+         x="233.59662"
+         xml:space="preserve">Task Execution,</text>
+      <text
+         id="text3065"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="528.19958"
+         x="236.89745"
+         xml:space="preserve">Data Exchange</text>
+      <text
+         id="text3067"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="507.19437"
+         x="646.51672"
+         xml:space="preserve">Task Execution,</text>
+      <text
+         id="text3069"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="528.19958"
+         x="649.8175"
+         xml:space="preserve">Data Exchange</text>
+      <path
+         id="path3071"
+         d="m 564.90948,478.2629 -122.78684,1.87546 0.0938,5.62641 122.76808,-1.87547 z m -15.24755,15.32256 20.8552,-12.60314 -21.23029,-11.94672 c -1.35034,-0.76894 -3.05701,-0.28132 -3.82595,1.06901 -0.75019,1.35034 -0.28132,3.07577 1.06901,3.84471 l 17.02925,9.58364 -0.075,-4.85746 -16.72917,10.09001 c -1.33158,0.80645 -1.74418,2.53188 -0.93773,3.86347 0.78769,1.33158 2.53188,1.76294 3.8447,0.95648 z m -92.21672,-23.14326 -20.85519,12.60314 21.23028,11.94672 c 1.35034,0.76895 3.07577,0.28133 3.82596,-1.06901 0.76894,-1.35034 0.28132,-3.07577 -1.06902,-3.82595 l 0,0 -17.02924,-9.6024 0.075,4.87622 16.72917,-10.10877 c 1.33158,-0.80645 1.76293,-2.53188 0.95648,-3.86346 -0.80645,-1.33158 -2.53188,-1.76294 -3.86346,-0.95649 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3073"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="516.58087"
+         x="466.24557"
+         xml:space="preserve">Exchange </text>
+      <text
+         id="text3075"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="537.58606"
+         x="451.39188"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3077"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="558.59131"
+         x="476.7482"
+         xml:space="preserve">Results</text>
+      <text
+         id="text3079"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="579.59656"
+         x="418.08359"
+         xml:space="preserve">(</text>
+      <text
+         id="text3081"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="579.59656"
+         x="426.03558"
+         xml:space="preserve">shuffle</text>
+      <text
+         id="text3083"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="579.59656"
+         x="490.85172"
+         xml:space="preserve">/ </text>
+      <text
+         id="text3085"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="579.59656"
+         x="504.95523"
+         xml:space="preserve">broadcast</text>
+      <text
+         id="text3087"
+         style="font-size:17.55437279px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana"
+         y="579.59656"
+         x="591.22675"
+         xml:space="preserve">) </text>
+    </g>
+  </g>
+</svg>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/FlinkOnYarn.svg
----------------------------------------------------------------------
diff --git a/docs/fig/FlinkOnYarn.svg b/docs/fig/FlinkOnYarn.svg
new file mode 100644
index 0000000..3eddf50
--- /dev/null
+++ b/docs/fig/FlinkOnYarn.svg
@@ -0,0 +1,151 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" fill-opacity="1" color-rendering="auto" color-interpolation="auto" stroke="black" text-rendering="auto" stroke-linecap="square" width="877" stroke-miterlimit="10" stroke-opacity="1" shape-rendering="auto" fill="black" stroke-dasharray="none" font-weight="normal" stroke-width="1" height="397" font-family="'Dialog'" font-style="normal" stroke-linejoin="miter" font-size="12" stroke-dashoffset="0" image-rendering="auto">
+  <!--Generated by ySVG 2.5-->
+  <defs id="genericDefs"/>
+  <g>
+    <defs id="defs1">
+      <linearGradient x1="336.5" gradientUnits="userSpaceOnUse" x2="506.5" y1="386.5" y2="426.5" id="linearGradient1" spreadMethod="reflect">
+        <stop stop-opacity="1" stop-color="rgb(232,238,247)" offset="0%"/>
+        <stop stop-opacity="1" stop-color="rgb(183,201,227)" offset="100%"/>
+      </linearGradient>
+      <clipPath clipPathUnits="userSpaceOnUse" id="clipPath1">
+        <path d="M0 0 L877 0 L877 397 L0 397 L0 0 Z"/>
+      </clipPath>
+      <clipPath clipPathUnits="userSpaceOnUse" id="clipPath2">
+        <path d="M77 45 L954 45 L954 442 L77 442 L77 45 Z"/>
+      </clipPath>
+    </defs>
+    <g fill="white" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="translate(-77,-45)" stroke="white">
+      <rect x="77" width="877" height="397" y="45" clip-path="url(#clipPath2)" stroke="none"/>
+    </g>
+    <g fill="rgb(0,153,153)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(0,153,153)">
+      <rect x="92" y="209" clip-path="url(#clipPath2)" width="119" rx="4" ry="4" height="117" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="107.9238" xml:space="preserve" y="226.1387" clip-path="url(#clipPath2)" stroke="none">"Master" Node</text>
+      <rect x="92" y="209" clip-path="url(#clipPath2)" fill="none" width="119" rx="4" ry="4" height="117"/>
+    </g>
+    <g fill="rgb(0,153,153)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(0,153,153)">
+      <rect x="369" y="60" clip-path="url(#clipPath2)" width="105" rx="4" ry="4" height="117" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="375.498" xml:space="preserve" y="115.6699" clip-path="url(#clipPath2)" stroke="none">YARN Resource</text>
+      <text x="395.4492" xml:space="preserve" y="129.6387" clip-path="url(#clipPath2)" stroke="none">Manager</text>
+      <rect x="369" y="60" clip-path="url(#clipPath2)" fill="none" width="105" rx="4" ry="4" height="117"/>
+    </g>
+    <g fill="rgb(0,153,153)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(0,153,153)">
+      <rect x="369" y="212" clip-path="url(#clipPath2)" width="105" rx="4" ry="4" height="117" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="373.8457" xml:space="preserve" y="229.1387" clip-path="url(#clipPath2)" stroke="none">YARN Container</text>
+      <rect x="369" y="212" clip-path="url(#clipPath2)" fill="none" width="105" rx="4" ry="4" height="117"/>
+    </g>
+    <g fill="rgb(0,153,153)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(0,153,153)">
+      <rect x="614" y="212" clip-path="url(#clipPath2)" width="105" rx="4" ry="4" height="117" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="618.8457" xml:space="preserve" y="229.1387" clip-path="url(#clipPath2)" stroke="none">YARN Container</text>
+      <rect x="614" y="212" clip-path="url(#clipPath2)" fill="none" width="105" rx="4" ry="4" height="117"/>
+    </g>
+    <g fill="url(#linearGradient1)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="url(#linearGradient1)">
+      <path d="M336.5 394.5 C341.6 386.5 501.4 386.5 506.5 394.5 L506.5 418.5 C501.4 426.5 341.6 426.5 336.5 418.5 Z" clip-path="url(#clipPath2)" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <path fill="none" d="M336.5 394.5 C341.6 386.5 501.4 386.5 506.5 394.5 L506.5 418.5 C501.4 426.5 341.6 426.5 336.5 418.5 Z" clip-path="url(#clipPath2)"/>
+      <path fill="none" d="M506.5 394.5 C501.4 402.5 341.6 402.5 336.5 394.5" clip-path="url(#clipPath2)"/>
+      <text x="405.1084" xml:space="preserve" y="410.6543" font-family="sans-serif" clip-path="url(#clipPath2)" stroke="none">HDFS</text>
+    </g>
+    <g fill="rgb(0,153,153)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(0,153,153)">
+      <rect x="791" y="212" clip-path="url(#clipPath2)" width="105" rx="4" ry="4" height="117" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="795.8457" xml:space="preserve" y="229.1387" clip-path="url(#clipPath2)" stroke="none">YARN Container</text>
+      <rect x="791" y="212" clip-path="url(#clipPath2)" fill="none" width="105" rx="4" ry="4" height="117"/>
+    </g>
+    <g fill="rgb(51,153,102)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(51,153,102)">
+      <rect x="99" width="105" height="30" y="242.5" clip-path="url(#clipPath2)" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="137.4375" xml:space="preserve" y="254.6699" clip-path="url(#clipPath2)" stroke="none">Flink</text>
+      <text x="115.7959" xml:space="preserve" y="268.6387" clip-path="url(#clipPath2)" stroke="none">YARN Client</text>
+      <rect fill="none" x="99" width="105" height="30" y="242.5" clip-path="url(#clipPath2)"/>
+    </g>
+    <g fill="rgb(51,153,102)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(51,153,102)">
+      <rect x="375" width="93" height="30" y="252.5" clip-path="url(#clipPath2)" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="407.4375" xml:space="preserve" y="264.6699" clip-path="url(#clipPath2)" stroke="none">Flink</text>
+      <text x="385.9512" xml:space="preserve" y="278.6387" clip-path="url(#clipPath2)" stroke="none">JobManager</text>
+      <rect fill="none" x="375" width="93" height="30" y="252.5" clip-path="url(#clipPath2)"/>
+    </g>
+    <g fill="rgb(51,153,102)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(51,153,102)">
+      <rect x="375" width="93" height="30" y="282.5" clip-path="url(#clipPath2)" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="389.5371" xml:space="preserve" y="294.6699" clip-path="url(#clipPath2)" stroke="none">YARN App.</text>
+      <text x="401.0098" xml:space="preserve" y="308.6387" clip-path="url(#clipPath2)" stroke="none">Master</text>
+      <rect fill="none" x="375" width="93" height="30" y="282.5" clip-path="url(#clipPath2)"/>
+    </g>
+    <g fill="rgb(51,153,102)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(51,153,102)">
+      <rect x="620" width="93" height="30" y="255.5" clip-path="url(#clipPath2)" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="652.4375" xml:space="preserve" y="267.6699" clip-path="url(#clipPath2)" stroke="none">Flink</text>
+      <text x="626.2578" xml:space="preserve" y="281.6387" clip-path="url(#clipPath2)" stroke="none">TaskManager</text>
+      <rect fill="none" x="620" width="93" height="30" y="255.5" clip-path="url(#clipPath2)"/>
+    </g>
+    <g fill="rgb(51,153,102)" text-rendering="geometricPrecision" shape-rendering="geometricPrecision" transform="matrix(1,0,0,1,-77,-45)" stroke="rgb(51,153,102)">
+      <rect x="797" width="93" height="30" y="255.5" clip-path="url(#clipPath2)" stroke="none"/>
+    </g>
+    <g text-rendering="geometricPrecision" stroke-miterlimit="1.45" shape-rendering="geometricPrecision" font-family="sans-serif" transform="matrix(1,0,0,1,-77,-45)" stroke-linecap="butt">
+      <text x="829.4375" xml:space="preserve" y="267.6699" clip-path="url(#clipPath2)" stroke="none">Flink</text>
+      <text x="803.2578" xml:space="preserve" y="281.6387" clip-path="url(#clipPath2)" stroke="none">TaskManager</text>
+      <rect fill="none" x="797" width="93" height="30" y="255.5" clip-path="url(#clipPath2)"/>
+      <text x="917.1621" xml:space="preserve" y="274.6543" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">...</text>
+      <path fill="none" d="M210.991 234.6698 L361.997 151.3369" clip-path="url(#clipPath2)"/>
+      <path d="M369.0012 147.4715 L356.079 148.8919 L361.1215 151.82 L360.9107 157.6472 Z" clip-path="url(#clipPath2)" stroke="none"/>
+      <text x="157.7557" xml:space="preserve" y="141.7122" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">2. Register resources </text>
+      <text x="150.4286" xml:space="preserve" y="155.681" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">and request AppMaster </text>
+      <text x="199.911" xml:space="preserve" y="169.6497" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">container</text>
+      <path fill="none" d="M421.5 177.0215 L421.5 204.0303" clip-path="url(#clipPath2)"/>
+      <path d="M421.5 212.0303 L426.5 200.0303 L421.5 203.0303 L416.5 200.0303 Z" clip-path="url(#clipPath2)" stroke="none"/>
+      <text x="452.5664" xml:space="preserve" y="198.6543" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">3. Allocate AppMaster Container</text>
+      <path fill="none" d="M473.9872 270.5 L606.0008 270.5" clip-path="url(#clipPath2)"/>
+      <path d="M614.0008 270.5 L602.0008 265.5 L605.0008 270.5 L602.0008 275.5 Z" clip-path="url(#clipPath2)" stroke="none"/>
+      <text x="481.2285" xml:space="preserve" y="304.6543" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">4. Allocate Worker</text>
+      <path fill="none" d="M210.991 298.1268 L380.3388 385.3096" clip-path="url(#clipPath2)"/>
+      <path d="M387.4516 388.9714 L379.071 379.0333 L379.4497 384.8519 L374.4938 387.9242 Z" clip-path="url(#clipPath2)" stroke="none"/>
+      <text x="207.4763" xml:space="preserve" y="355.1486" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">1. Store Uberjar</text>
+      <text x="201.4939" xml:space="preserve" y="369.1174" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">and configuration</text>
+      <path fill="none" d="M421.5 388.4707 L421.5 336.9957" clip-path="url(#clipPath2)" stroke="gray"/>
+      <path fill="gray" d="M421.5 328.9957 L416.5 340.9957 L421.5 337.9957 L426.5 340.9957 Z" clip-path="url(#clipPath2)" stroke="none"/>
+      <path fill="none" d="M453.1718 388.9189 L606.9806 303.5393" clip-path="url(#clipPath2)" stroke="gray"/>
+      <path fill="gray" d="M613.9752 299.6566 L601.0566 301.109 L606.1063 304.0247 L605.91 309.8523 Z" clip-path="url(#clipPath2)" stroke="none"/>
+      <path fill="none" d="M719.0037 270.5 L783.0085 270.5" clip-path="url(#clipPath2)"/>
+      <path d="M791.0085 270.5 L779.0085 265.5 L782.0085 270.5 L779.0085 275.5 Z" clip-path="url(#clipPath2)" stroke="none"/>
+      <path fill="none" d="M473.6061 389.7075 L783.3726 289.8775" clip-path="url(#clipPath2)" stroke="gray"/>
+      <path fill="gray" d="M790.987 287.4236 L778.0318 286.3455 L782.4208 290.1843 L781.0992 295.8634 Z" clip-path="url(#clipPath2)" stroke="none"/>
+      <text x="542.0724" xml:space="preserve" y="392.1359" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">Always Bootstrap containers with</text>
+      <text x="593.1105" xml:space="preserve" y="406.1046" font-weight="bold" clip-path="url(#clipPath2)" stroke="none">Uberjar and config</text>
+    </g>
+  </g>
+</svg>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/back_pressure_sampling.png
----------------------------------------------------------------------
diff --git a/docs/fig/back_pressure_sampling.png b/docs/fig/back_pressure_sampling.png
new file mode 100644
index 0000000..ad6ce2f
Binary files /dev/null and b/docs/fig/back_pressure_sampling.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/back_pressure_sampling_high.png
----------------------------------------------------------------------
diff --git a/docs/fig/back_pressure_sampling_high.png b/docs/fig/back_pressure_sampling_high.png
new file mode 100644
index 0000000..15372fd
Binary files /dev/null and b/docs/fig/back_pressure_sampling_high.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/back_pressure_sampling_in_progress.png
----------------------------------------------------------------------
diff --git a/docs/fig/back_pressure_sampling_in_progress.png b/docs/fig/back_pressure_sampling_in_progress.png
new file mode 100644
index 0000000..96ec3cd
Binary files /dev/null and b/docs/fig/back_pressure_sampling_in_progress.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/back_pressure_sampling_ok.png
----------------------------------------------------------------------
diff --git a/docs/fig/back_pressure_sampling_ok.png b/docs/fig/back_pressure_sampling_ok.png
new file mode 100644
index 0000000..2ca2d51
Binary files /dev/null and b/docs/fig/back_pressure_sampling_ok.png differ


[74/89] [abbrv] flink git commit: [FLINK-4362] [rpc] Auto generate rpc gateways via Java proxies

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
index 642a380..a4e1d7f 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
@@ -61,10 +61,10 @@ public class AkkaRpcServiceTest extends TestLogger {
 		AkkaGateway akkaClient = (AkkaGateway) rm;
 
 		
-		jobMaster.registerAtResourceManager(AkkaUtils.getAkkaURL(actorSystem, akkaClient.getActorRef()));
+		jobMaster.registerAtResourceManager(AkkaUtils.getAkkaURL(actorSystem, akkaClient.getRpcServer()));
 
 		// wait for successful registration
-		FiniteDuration timeout = new FiniteDuration(20, TimeUnit.SECONDS);
+		FiniteDuration timeout = new FiniteDuration(200, TimeUnit.SECONDS);
 		Deadline deadline = timeout.fromNow();
 
 		while (deadline.hasTimeLeft() && !jobMaster.isConnected()) {

http://git-wip-us.apache.org/repos/asf/flink/blob/dc808e76/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
index c143527..33c9cb6 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
@@ -48,7 +48,7 @@ public class TaskExecutorTest extends TestLogger {
 	@Test
 	public void testTaskExecution() throws Exception {
 		RpcService testingRpcService = mock(RpcService.class);
-		DirectExecutorService directExecutorService = null;
+		DirectExecutorService directExecutorService = new DirectExecutorService();
 		TaskExecutor taskExecutor = new TaskExecutor(testingRpcService, directExecutorService);
 
 		TaskDeploymentDescriptor tdd = new TaskDeploymentDescriptor(


[45/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/distance_metrics.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/distance_metrics.md b/docs/apis/batch/libs/ml/distance_metrics.md
deleted file mode 100644
index 303de4a..0000000
--- a/docs/apis/batch/libs/ml/distance_metrics.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-mathjax: include
-title: Distance Metrics
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: Distance Metrics
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-
-Different distance metrics are convenient for different types of analysis. FlinkML provides
-built-in implementations for many standard distance metrics. You can create custom
-distance metrics by implementing the `DistanceMetric` trait.
-
-## Built-in Implementations
-
-Currently, FlinkML supports the following metrics:
-
-<table class="table table-bordered">
-    <thead>
-      <tr>
-        <th class="text-left" style="width: 20%">Metric</th>
-        <th class="text-center">Description</th>
-      </tr>
-    </thead>
-
-    <tbody>
-      <tr>
-        <td><strong>Euclidean Distance</strong></td>
-        <td>
-          $$d(\x, \y) = \sqrt{\sum_{i=1}^n \left(x_i - y_i \right)^2}$$
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Squared Euclidean Distance</strong></td>
-        <td>
-          $$d(\x, \y) = \sum_{i=1}^n \left(x_i - y_i \right)^2$$
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Cosine Similarity</strong></td>
-        <td>
-          $$d(\x, \y) = 1 - \frac{\x^T \y}{\Vert \x \Vert \Vert \y \Vert}$$
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Chebyshev Distance</strong></td>
-        <td>
-          $$d(\x, \y) = \max_{i}\left(\left \vert x_i - y_i \right\vert \right)$$
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Manhattan Distance</strong></td>
-        <td>
-          $$d(\x, \y) = \sum_{i=1}^n \left\vert x_i - y_i \right\vert$$
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Minkowski Distance</strong></td>
-        <td>
-          $$d(\x, \y) = \left( \sum_{i=1}^{n} \left\vert x_i - y_i \right\vert^p \right)^{\rfrac{1}{p}}$$
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Tanimoto Distance</strong></td>
-        <td>
-          $$d(\x, \y) = 1 - \frac{\x^T\y}{\Vert \x \Vert^2 + \Vert \y \Vert^2 - \x^T\y}$$ 
-          with $\x$ and $\y$ being bit-vectors
-        </td>
-      </tr>
-    </tbody>
-  </table>
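
For illustration, a built-in metric can also be applied directly to two vectors. The following is a sketch, assuming the companion `apply()` constructors in `org.apache.flink.ml.metrics.distances` and `DenseVector` from `org.apache.flink.ml.math`:

{% highlight scala %}
import org.apache.flink.ml.math.DenseVector
import org.apache.flink.ml.metrics.distances.EuclideanDistanceMetric

val metric = EuclideanDistanceMetric()

// sqrt((1-4)^2 + (2-6)^2) = sqrt(25) = 5.0
val d = metric.distance(DenseVector(1.0, 2.0), DenseVector(4.0, 6.0))
{% endhighlight %}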
-
-## Custom Implementation
-
-You can create your own distance metric by implementing the `DistanceMetric` trait.
-
-{% highlight scala %}
-class MyDistance extends DistanceMetric {
-  override def distance(a: Vector, b: Vector) = ... // your implementation for distance metric
-}
-
-object MyDistance {
-  def apply() = new MyDistance()
-}
-
-val myMetric = MyDistance()
-{% endhighlight %}
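
Filling in the template above, a complete Manhattan-style metric could look as follows. This is a sketch; it assumes `Vector` exposes `size` and element access via `apply(i)`, as FlinkML's `org.apache.flink.ml.math.Vector` does:

{% highlight scala %}
import org.apache.flink.ml.math.Vector
import org.apache.flink.ml.metrics.distances.DistanceMetric

class MyDistance extends DistanceMetric {
  // sum of absolute element-wise differences (Manhattan distance)
  override def distance(a: Vector, b: Vector): Double = {
    require(a.size == b.size, "vectors must have the same dimension")
    var sum = 0.0
    var i = 0
    while (i < a.size) {
      sum += math.abs(a(i) - b(i))
      i += 1
    }
    sum
  }
}

object MyDistance {
  def apply() = new MyDistance()
}
{% endhighlight %}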

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/index.md b/docs/apis/batch/libs/ml/index.md
deleted file mode 100644
index 39b3a02..0000000
--- a/docs/apis/batch/libs/ml/index.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-title: "FlinkML - Machine Learning for Flink"
-# Top navigation
-top-nav-group: libs
-top-nav-pos: 2
-top-nav-title: Machine Learning
-# Sub navigation
-sub-nav-group: batch
-sub-nav-id: flinkml
-sub-nav-pos: 2
-sub-nav-parent: libs
-sub-nav-title: Machine Learning
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-FlinkML is the Machine Learning (ML) library for Flink. It is a new effort in the Flink community,
-with a growing list of algorithms and contributors. With FlinkML we aim to provide
-scalable ML algorithms, an intuitive API, and tools that help minimize glue code in end-to-end ML
-systems. You can see more details about our goals and where the library is headed in our [vision
-and roadmap](https://cwiki.apache.org/confluence/display/FLINK/FlinkML%3A+Vision+and+Roadmap).
-
-* This will be replaced by the TOC
-{:toc}
-
-## Supported Algorithms
-
-FlinkML currently supports the following algorithms:
-
-### Supervised Learning
-
-* [SVM using Communication efficient distributed dual coordinate ascent (CoCoA)](svm.html)
-* [Multiple linear regression](multiple_linear_regression.html)
-* [Optimization Framework](optimization.html)
-
-### Unsupervised Learning
-
-* [k-Nearest neighbors join](knn.html)
-
-### Data Preprocessing
-
-* [Polynomial Features](polynomial_features.html)
-* [Standard Scaler](standard_scaler.html)
-* [MinMax Scaler](min_max_scaler.html)
-
-### Recommendation
-
-* [Alternating Least Squares (ALS)](als.html)
-
-### Utilities
-
-* [Distance Metrics](distance_metrics.html)
-* [Cross Validation](cross_validation.html)
-
-## Getting Started
-
-You can check out our [quickstart guide](quickstart.html) for a comprehensive getting started
-example.
-
-If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/apis/batch/index.html#linking-with-flink).
-Next, you have to add the FlinkML dependency to the `pom.xml` of your project.
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-ml{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-Note that FlinkML is currently not part of the binary distribution.
-See how to link with it for cluster execution [here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-Now you can start solving your analysis task.
-The following code snippet shows how easy it is to train a multiple linear regression model.
-
-{% highlight scala %}
-
-
-// LabeledVector is a feature vector with a label (class or real value)
-val trainingData: DataSet[LabeledVector] = ...
-val testingData: DataSet[Vector] = ...
-
-// Alternatively, a Splitter can be used to break up a DataSet into training and testing data.
-val dataSet: DataSet[LabeledVector] = ...
-val trainTestData: DataSet[TrainTestDataSet] = Splitter.trainTestSplit(dataSet)
-val trainingData: DataSet[LabeledVector] = trainTestData.training
-val testingData: DataSet[Vector] = trainTestData.testing.map(lv => lv.vector)
-
-val mlr = MultipleLinearRegression()
-  .setStepsize(1.0)
-  .setIterations(100)
-  .setConvergenceThreshold(0.001)
-
-mlr.fit(trainingData)
-
-// The fitted model can now be used to make predictions
-val predictions: DataSet[LabeledVector] = mlr.predict(testingData)
-{% endhighlight %}
-
-## Pipelines
-
-A key concept of FlinkML is its [scikit-learn](http://scikit-learn.org) inspired pipelining mechanism.
-It allows you to quickly build complex data analysis pipelines as they appear in every data scientist's daily work.
-An in-depth description of FlinkML's pipelines and their internal workings can be found [here](pipelines.html).
-
-The following example code shows how easy it is to set up an analysis pipeline with FlinkML.
-
-{% highlight scala %}
-val trainingData: DataSet[LabeledVector] = ...
-val testingData: DataSet[Vector] = ...
-
-val scaler = StandardScaler()
-val polyFeatures = PolynomialFeatures().setDegree(3)
-val mlr = MultipleLinearRegression()
-
-// Construct pipeline of standard scaler, polynomial features and multiple linear regression
-val pipeline = scaler.chainTransformer(polyFeatures).chainPredictor(mlr)
-
-// Train pipeline
-pipeline.fit(trainingData)
-
-// Calculate predictions
-val predictions: DataSet[LabeledVector] = pipeline.predict(testingData)
-{% endhighlight %}
-
-One can chain a `Transformer` to another `Transformer` or a set of chained `Transformers` by calling the method `chainTransformer`.
-If one wants to chain a `Predictor` to a `Transformer` or a set of chained `Transformers`, one has to call the method `chainPredictor`.
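
As a minimal illustration (a sketch reusing the estimators from the snippet above), the single pipeline expression can also be built step by step:

{% highlight scala %}
// chainTransformer yields a chained Transformer; chainPredictor closes the chain
val chainedTransformers = scaler.chainTransformer(polyFeatures)
val pipeline = chainedTransformers.chainPredictor(mlr)
{% endhighlight %}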
-
-
-## How to contribute
-
-The Flink community welcomes all contributors who want to get involved in the development of Flink and its libraries.
-In order to get quickly started with contributing to FlinkML, please read our official
-[contribution guide]({{site.baseurl}}/libs/ml/contribution_guide.html).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/knn.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/knn.md b/docs/apis/batch/libs/ml/knn.md
deleted file mode 100644
index 294d333..0000000
--- a/docs/apis/batch/libs/ml/knn.md
+++ /dev/null
@@ -1,149 +0,0 @@
----
-mathjax: include
-htmlTitle: FlinkML - k-Nearest neighbors join
-title: <a href="../ml">FlinkML</a> - k-Nearest neighbors join
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: k-Nearest neighbors join
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-Implements an exact k-nearest neighbors join algorithm.  Given a training set $A$ and a testing set $B$, the algorithm returns
-
-$$
-KNNJ(A, B, k) = \{ \left( b, KNN(b, A, k) \right) \text{ where } b \in B \text{ and } KNN(b, A, k) \text{ are the k-nearest points to }b\text{ in }A \}
-$$
-
-The brute-force approach is to compute the distance between every training and testing point. To speed up the brute-force computation of these distances, a quadtree can be used. The quadtree scales well in the number of training points, though poorly in the spatial dimension. The algorithm will automatically choose whether or not to use the quadtree, though the user can override that decision by setting a parameter that forces the use or non-use of a quadtree.
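-
-To make the definition concrete, the following is a brute-force, in-memory sketch of $KNNJ(A, B, k)$ in plain Scala. It merely illustrates the definition above; it is not the distributed Flink implementation:
-
-{% highlight scala %}
-// Brute-force KNN join over in-memory points (illustration only).
-def knnJoin(
-    training: Seq[Array[Double]],
-    testing: Seq[Array[Double]],
-    k: Int): Seq[(Array[Double], Seq[Array[Double]])] = {
-
-  // Squared Euclidean distance is sufficient for ranking neighbors
-  def sqDist(a: Array[Double], b: Array[Double]): Double =
-    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum
-
-  testing.map { b =>
-    (b, training.sortBy(a => sqDist(a, b)).take(k))
-  }
-}
-{% endhighlight %}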
-
-## Operations
-
-`KNN` is a `Predictor`. 
-As such, it supports the `fit` and `predict` operations.
-
-### Fit
-
-KNN is trained on a given set of `Vector`:
-
-* `fit[T <: Vector]: DataSet[T] => Unit`
-
-### Predict
-
-KNN predicts, for all subtypes of FlinkML's `Vector`, the corresponding K-nearest neighbors:
-
-* `predict[T <: Vector]: DataSet[T] => DataSet[(T, Array[Vector])]`, where the `(T, Array[Vector])` tuple
-  corresponds to (test point, K-nearest training points)
-
-## Parameters
-
-The KNN implementation can be controlled by the following parameters:
-
-   <table class="table table-bordered">
-    <thead>
-      <tr>
-        <th class="text-left" style="width: 20%">Parameters</th>
-        <th class="text-center">Description</th>
-      </tr>
-    </thead>
-
-    <tbody>
-      <tr>
-        <td><strong>K</strong></td>
-        <td>
-          <p>
-            Defines the number of nearest-neighbors to search for. That is, for each test point, the algorithm finds the K-nearest neighbors in the training set
-            (Default value: <strong>5</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>DistanceMetric</strong></td>
-        <td>
-          <p>
-            Sets the distance metric we use to calculate the distance between two points. If no metric is specified, then <code>org.apache.flink.ml.metrics.distances.EuclideanDistanceMetric</code> is used.
-            (Default value: <strong>EuclideanDistanceMetric</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Blocks</strong></td>
-        <td>
-          <p>
-            Sets the number of blocks into which the input data will be split. This number should be set
-            at least to the degree of parallelism. If no value is specified, then the parallelism of the
-            input <code>DataSet</code> is used as the number of blocks.
-            (Default value: <strong>None</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>UseQuadTree</strong></td>
-        <td>
-          <p>
-            A boolean variable that determines whether or not to use a quadtree to partition the training set to potentially simplify the KNN search. If no value is specified, the code will automatically decide whether or not to use a quadtree. Use of a quadtree scales well with the number of training and testing points, though poorly with the dimension.
-            (Default value: <strong>None</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>SizeHint</strong></td>
-        <td>
-          <p>Specifies whether the training set or the test set is small, in order to optimize the cross product operation needed for the KNN search. If the training set is small this should be <code>CrossHint.FIRST_IS_SMALL</code>; if the test set is small it should be <code>CrossHint.SECOND_IS_SMALL</code>.
-             (Default value: <strong>None</strong>)
-          </p>
-        </td>
-      </tr>
-    </tbody>
-  </table>
-
-## Examples
-
-{% highlight scala %}
-import org.apache.flink.api.common.operators.base.CrossOperatorBase.CrossHint
-import org.apache.flink.api.scala._
-import org.apache.flink.ml.nn.KNN
-import org.apache.flink.ml.math.Vector
-import org.apache.flink.ml.metrics.distances.SquaredEuclideanDistanceMetric
-
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// prepare data
-val trainingSet: DataSet[Vector] = ...
-val testingSet: DataSet[Vector] = ...
-
-val knn = KNN()
-  .setK(3)
-  .setBlocks(10)
-  .setDistanceMetric(SquaredEuclideanDistanceMetric())
-  .setUseQuadTree(false)
-  .setSizeHint(CrossHint.SECOND_IS_SMALL)
-
-// run knn join
-knn.fit(trainingSet)
-val result = knn.predict(testingSet).collect()
-{% endhighlight %}
-
-For more details on computing KNN with and without a quadtree, see this presentation: [http://danielblazevski.github.io/](http://danielblazevski.github.io/)

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/min_max_scaler.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/min_max_scaler.md b/docs/apis/batch/libs/ml/min_max_scaler.md
deleted file mode 100644
index 2948a96..0000000
--- a/docs/apis/batch/libs/ml/min_max_scaler.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-mathjax: include
-title: MinMax Scaler
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: MinMax Scaler
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-
- The MinMax scaler scales the given data set so that all values lie within a user-specified range [min,max].
- In case the user does not provide a specific minimum and maximum value for the scaling range, the MinMax scaler transforms the features of the input data set to lie in the [0,1] interval.
- Given a set of input data $x_1, x_2,... x_n$, with minimum value:
-
- $$x_{min} = min({x_1, x_2,..., x_n})$$
-
- and maximum value:
-
- $$x_{max} = max({x_1, x_2,..., x_n})$$
-
-The scaled data set $z_1, z_2,...,z_n$ will be:
-
- $$z_{i}= \frac{x_{i} - x_{min}}{x_{max} - x_{min}} \left ( max - min \right ) + min$$
-
-where $\textit{min}$ and $\textit{max}$ are the user specified minimum and maximum values of the range to scale.
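-
-As an illustration, the scaling formula translates directly into a few lines of plain Scala. This is only a sketch of the math, not the `MinMaxScaler` implementation:
-
-{% highlight scala %}
-// Scale an in-memory data set into the range [min, max] (illustration only).
-def minMaxScale(xs: Array[Double], min: Double = 0.0, max: Double = 1.0): Array[Double] = {
-  val xMin = xs.min
-  val xMax = xs.max
-  // z_i = (x_i - x_min) / (x_max - x_min) * (max - min) + min
-  xs.map(x => (x - xMin) / (xMax - xMin) * (max - min) + min)
-}
-
-minMaxScale(Array(1.0, 2.0, 3.0), -1.0, 1.0) // Array(-1.0, 0.0, 1.0)
-{% endhighlight %}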
-
-## Operations
-
-`MinMaxScaler` is a `Transformer`.
-As such, it supports the `fit` and `transform` operations.
-
-### Fit
-
-MinMaxScaler is trained on all subtypes of `Vector` or `LabeledVector`:
-
-* `fit[T <: Vector]: DataSet[T] => Unit`
-* `fit: DataSet[LabeledVector] => Unit`
-
-### Transform
-
-MinMaxScaler transforms all subtypes of `Vector` or `LabeledVector` into the respective type:
-
-* `transform[T <: Vector]: DataSet[T] => DataSet[T]`
-* `transform: DataSet[LabeledVector] => DataSet[LabeledVector]`
-
-## Parameters
-
-The MinMax scaler implementation can be controlled by the following two parameters:
-
- <table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Parameters</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>Min</strong></td>
-      <td>
-        <p>
-          The minimum value of the range for the scaled data set. (Default value: <strong>0.0</strong>)
-        </p>
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Max</strong></td>
-      <td>
-        <p>
-          The maximum value of the range for the scaled data set. (Default value: <strong>1.0</strong>)
-        </p>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-## Examples
-
-{% highlight scala %}
-// Create MinMax scaler transformer
-val minMaxscaler = MinMaxScaler()
-  .setMin(-1.0)
-
-// Obtain data set to be scaled
-val dataSet: DataSet[Vector] = ...
-
-// Learn the minimum and maximum values of the training data
-minMaxscaler.fit(dataSet)
-
-// Scale the provided data set to have min=-1.0 and max=1.0
-val scaledDS = minMaxscaler.transform(dataSet)
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/multiple_linear_regression.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/multiple_linear_regression.md b/docs/apis/batch/libs/ml/multiple_linear_regression.md
deleted file mode 100644
index b427eac..0000000
--- a/docs/apis/batch/libs/ml/multiple_linear_regression.md
+++ /dev/null
@@ -1,164 +0,0 @@
----
-mathjax: include
-title: Multiple linear regression
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: Multiple Linear Regression
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-
- Multiple linear regression tries to find a linear function which best fits the provided input data.
- Given a set of input data points with their values $(\mathbf{x_i}, y_i)$, multiple linear regression finds
- a vector $\mathbf{w}$ such that the sum of the squared residuals is minimized:
-
- $$ S(\mathbf{w}) = \sum_{i=1}^n \left(y_i - \mathbf{w}^T\mathbf{x_i} \right)^2$$
-
- Written in matrix notation, we obtain the following formulation:
-
- $$\mathbf{w}^* = \arg \min_{\mathbf{w}} \|\mathbf{y} - X\mathbf{w}\|^2$$
-
- This problem has a closed form solution which is given by:
-
-  $$\mathbf{w}^* = \left(X^TX\right)^{-1}X^T\mathbf{y}$$
-
-  However, in cases where the input data set is so huge that a complete parse over the whole data
-  set is prohibitive, one can apply stochastic gradient descent (SGD) to approximate the solution.
-  SGD first calculates the gradients for a random subset of the input data set. The gradient
-  for a given point $\mathbf{x}_i$ is given by:
-
-  $$\nabla_{\mathbf{w}} S(\mathbf{w}, \mathbf{x_i}) = 2\left(\mathbf{w}^T\mathbf{x_i} -
-    y_i\right)\mathbf{x_i}$$
-
-  The gradients are averaged and scaled. The scaling is defined by $\gamma = \frac{s}{\sqrt{j}}$
-  with $s$ being the initial step size and $j$ being the current iteration number. The resulting gradient is subtracted from the
-  current weight vector giving the new weight vector for the next iteration:
-
-  $$\mathbf{w}_{t+1} = \mathbf{w}_t - \gamma \frac{1}{n}\sum_{i=1}^n \nabla_{\mathbf{w}} S(\mathbf{w}, \mathbf{x_i})$$
-
-  The multiple linear regression algorithm either runs a fixed number of SGD iterations or terminates based on a dynamic convergence criterion.
-  The convergence criterion is the relative change in the sum of squared residuals:
-
-  $$\frac{S_{k-1} - S_k}{S_{k-1}} < \rho$$
-  
-## Operations
-
-`MultipleLinearRegression` is a `Predictor`.
-As such, it supports the `fit` and `predict` operations.
-
-### Fit
-
-MultipleLinearRegression is trained on a set of `LabeledVector`: 
-
-* `fit: DataSet[LabeledVector] => Unit`
-
-### Predict
-
-MultipleLinearRegression predicts for all subtypes of `Vector` the corresponding regression value: 
-
-* `predict[T <: Vector]: DataSet[T] => DataSet[LabeledVector]`
-
-If we call predict with a `DataSet[LabeledVector]`, we make a prediction on the regression value
-for each example, and return a `DataSet[(Double, Double)]`. In each tuple the first element
-is the true value, as was provided from the input `DataSet[LabeledVector]` and the second element
-is the predicted value. You can then use these `(truth, prediction)` tuples to evaluate
-the algorithm's performance, as sketched below.
-
-* `predict: DataSet[LabeledVector] => DataSet[(Double, Double)]`
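-
-For example, assuming a fitted `mlr` model (as in the Examples section below) and a labeled evaluation set, the mean squared error could be computed from these tuples along the following lines:
-
-{% highlight scala %}
-// Evaluation sketch: mean squared error over (truth, prediction) tuples.
-val evaluationDS: DataSet[LabeledVector] = ...
-
-val truthAndPredictions: DataSet[(Double, Double)] = mlr.predict(evaluationDS)
-
-val mse: DataSet[Double] = truthAndPredictions
-  .map { case (truth, prediction) => ((truth - prediction) * (truth - prediction), 1) }
-  .reduce { (left, right) => (left._1 + right._1, left._2 + right._2) }
-  .map { case (sumOfSquares, count) => sumOfSquares / count }
-{% endhighlight %}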
-
-## Parameters
-
-  The multiple linear regression implementation can be controlled by the following parameters:
-  
-   <table class="table table-bordered">
-    <thead>
-      <tr>
-        <th class="text-left" style="width: 20%">Parameters</th>
-        <th class="text-center">Description</th>
-      </tr>
-    </thead>
-
-    <tbody>
-      <tr>
-        <td><strong>Iterations</strong></td>
-        <td>
-          <p>
-            The maximum number of iterations. (Default value: <strong>10</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Stepsize</strong></td>
-        <td>
-          <p>
-            Initial step size for the gradient descent method.
-            This value controls how far the gradient descent method moves in the opposite direction of the gradient.
-            Tuning this parameter might be crucial to make it stable and to obtain a better performance. 
-            (Default value: <strong>0.1</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>ConvergenceThreshold</strong></td>
-        <td>
-          <p>
-            Threshold for relative change of the sum of squared residuals until the iteration is stopped.
-            (Default value: <strong>None</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>LearningRateMethod</strong></td>
-        <td>
-            <p>
-                Learning rate method used to calculate the effective learning rate for each iteration.
-                See the list of supported <a href="optimization.html">learning rate methods</a>.
-                (Default value: <strong>LearningRateMethod.Default</strong>)
-            </p>
-        </td>
-      </tr>
-    </tbody>
-  </table>
-
-## Examples
-
-{% highlight scala %}
-// Create multiple linear regression learner
-val mlr = MultipleLinearRegression()
-  .setIterations(10)
-  .setStepsize(0.5)
-  .setConvergenceThreshold(0.001)
-
-// Obtain training and testing data set
-val trainingDS: DataSet[LabeledVector] = ...
-val testingDS: DataSet[Vector] = ...
-
-// Fit the linear model to the provided data
-mlr.fit(trainingDS)
-
-// Calculate the predictions for the test data
-val predictions = mlr.predict(testingDS)
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/optimization.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/optimization.md b/docs/apis/batch/libs/ml/optimization.md
deleted file mode 100644
index ccb7e45..0000000
--- a/docs/apis/batch/libs/ml/optimization.md
+++ /dev/null
@@ -1,385 +0,0 @@
----
-mathjax: include
-title: Optimization
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: Optimization
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* Table of contents
-{:toc}
-
-## Mathematical Formulation
-
-The optimization framework in FlinkML is a developer-oriented package that can be used to solve
-[optimization](https://en.wikipedia.org/wiki/Mathematical_optimization)
-problems common in Machine Learning (ML) tasks. In the supervised learning context, this usually
-involves finding a model, as defined by a set of parameters $\wv$, that minimizes a function $f(\wv)$
-given a set of $(\x, y)$ examples,
-where $\x$ is a feature vector and $y$ is a real number, which can represent either a real value in
-the regression case, or a class label in the classification case. In supervised learning, the
-function to be minimized is usually of the form:
-
-
-\begin{equation} \label{eq:objectiveFunc}
-    f(\wv) :=
-    \frac1n \sum_{i=1}^n L(\wv;\x_i,y_i) +
-    \lambda\, R(\wv)
-    \ .
-\end{equation}
-
-
-where $L$ is the loss function and $R(\wv)$ the regularization penalty. We use $L$ to measure how
-well the model fits the observed data, and we use $R$ in order to impose a complexity cost on the
-model, with $\lambda > 0$ being the regularization parameter.
-
-### Loss Functions
-
-In supervised learning, we use loss functions in order to measure the model fit, by
-penalizing errors in the predictions $p$ made by the model compared to the true $y$ for each
-example. Different loss functions can be used for regression (e.g. Squared Loss) and classification
-(e.g. Hinge Loss) tasks.
-
-Some common loss functions are:
-
-* Squared Loss: $ \frac{1}{2} \left(\wv^T \cdot \x - y\right)^2, \quad y \in \R $
-* Hinge Loss: $ \max \left(0, 1 - y ~ \wv^T \cdot \x\right), \quad y \in \{-1, +1\} $
-* Logistic Loss: $ \log\left(1+\exp\left( -y ~ \wv^T \cdot \x\right)\right), \quad y \in \{-1, +1\}$
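-
-For reference, these three losses translate directly into code. In the following plain-Scala sketch, `p` stands for the prediction $\wv^T \cdot \x$ and `y` for the true label:
-
-{% highlight scala %}
-// The common loss functions listed above (sketch, not the FlinkML classes).
-def squaredLoss(p: Double, y: Double): Double = 0.5 * (p - y) * (p - y)         // y in R
-def hingeLoss(p: Double, y: Double): Double = math.max(0.0, 1 - y * p)          // y in {-1, +1}
-def logisticLoss(p: Double, y: Double): Double = math.log(1 + math.exp(-y * p)) // y in {-1, +1}
-{% endhighlight %}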
-
-### Regularization Types
-
-[Regularization](https://en.wikipedia.org/wiki/Regularization_(mathematics)) in machine learning
-imposes penalties to the estimated models, in order to reduce overfitting. The most common penalties
-are the $L_1$ and $L_2$ penalties, defined as:
-
-* $L_1$: $R(\wv) = \norm{\wv}_1$
-* $L_2$: $R(\wv) = \frac{1}{2}\norm{\wv}_2^2$
-
-The $L_2$ penalty penalizes large weights, favoring solutions with more small weights rather than
-few large ones.
-The $L_1$ penalty can be used to drive a number of the solution coefficients to 0, thereby
-producing sparse solutions.
-The regularization constant $\lambda$ in $\eqref{eq:objectiveFunc}$ determines the amount of regularization applied to the model,
-and is usually determined through model cross-validation.
-A good comparison of regularization types can be found in [this](http://www.robotics.stanford.edu/~ang/papers/icml04-l1l2.pdf) paper by Andrew Ng.
-Which regularization type is supported depends on the actually used optimization algorithm.
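-
-The two penalties themselves are one-liners; the following sketch shows them for a plain weight array (an illustration of the formulas, not FlinkML code):
-
-{% highlight scala %}
-// L1 and L2 regularization penalties of a weight vector w.
-def l1Penalty(w: Array[Double]): Double = w.map(math.abs).sum
-def l2Penalty(w: Array[Double]): Double = 0.5 * w.map(wi => wi * wi).sum
-{% endhighlight %}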
-
-## Stochastic Gradient Descent
-
-In order to find a (local) minimum of a function, Gradient Descent methods take steps in the
-direction opposite to the gradient of the function $\eqref{eq:objectiveFunc}$ taken with
-respect to the current parameters (weights).
-In order to compute the exact gradient we need to perform one pass through all the points in
-a dataset, making the process computationally expensive.
-An alternative is Stochastic Gradient Descent (SGD) where at each iteration we sample one point
-from the complete dataset and update the parameters for each point, in an online manner.
-
-In mini-batch SGD we instead sample random subsets of the dataset, and compute the gradient
-over each batch. At each iteration of the algorithm we update the weights once, based on
-the average of the gradients computed from each mini-batch.
-
-An important parameter is the learning rate $\eta$, or step size, which can be determined by one of five methods, listed in the Effective Learning Rate section below. The setting of the initial step size can significantly affect the performance of the
-algorithm. For some practical tips on tuning SGD see Léon Bottou's
-"[Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf)".
-
-The current implementation of SGD uses the whole partition, making it
-effectively a batch gradient descent. Once a sampling operator has been introduced in Flink, true
-mini-batch SGD will be performed.
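-
-The following plain-Scala sketch shows a single full-batch gradient descent step for the squared loss, mirroring the update rule described above. It is an illustration, not the FlinkML solver:
-
-{% highlight scala %}
-// One gradient descent step: w <- w - eta * (1/n) * sum_i (w^T x_i - y_i) * x_i
-def step(
-    w: Array[Double],
-    batch: Seq[(Array[Double], Double)],
-    eta: Double): Array[Double] = {
-  val n = batch.size
-  val gradient = batch
-    .map { case (x, y) =>
-      // residual (w^T x - y) times x, i.e. the gradient of the squared loss
-      val error = x.zip(w).map { case (xi, wi) => xi * wi }.sum - y
-      x.map(_ * error)
-    }
-    .reduce((a, b) => a.zip(b).map { case (ai, bi) => ai + bi })
-    .map(_ / n)
-  w.zip(gradient).map { case (wi, gi) => wi - eta * gi }
-}
-{% endhighlight %}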
-
-### Regularization
-
-FlinkML supports Stochastic Gradient Descent with L1, L2 and no regularization.
-The following list contains a mapping between the implementing classes and the regularization function.
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Class Name</th>
-      <th class="text-center">Regularization function $R(\wv)$</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-      <td><code>SimpleGradient</code></td>
-      <td>$R(\wv) = 0$</td>
-    </tr>
-    <tr>
-      <td><code>GradientDescentL1</code></td>
-      <td>$R(\wv) = \norm{\wv}_1$</td>
-    </tr>
-    <tr>
-      <td><code>GradientDescentL2</code></td>
-      <td>$R(\wv) = \frac{1}{2}\norm{\wv}_2^2$</td>
-    </tr>
-  </tbody>
-</table>
-
-### Parameters
-
-  The stochastic gradient descent implementation can be controlled by the following parameters:
-
-   <table class="table table-bordered">
-    <thead>
-      <tr>
-        <th class="text-left" style="width: 20%">Parameter</th>
-        <th class="text-center">Description</th>
-      </tr>
-    </thead>
-    <tbody>
-      <tr>
-        <td><strong>LossFunction</strong></td>
-        <td>
-          <p>
-            The loss function to be optimized. (Default value: <strong>None</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>RegularizationConstant</strong></td>
-        <td>
-          <p>
-            The amount of regularization to apply. (Default value: <strong>0.1</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Iterations</strong></td>
-        <td>
-          <p>
-            The maximum number of iterations. (Default value: <strong>10</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>LearningRate</strong></td>
-        <td>
-          <p>
-            Initial learning rate for the gradient descent method.
-            This value controls how far the gradient descent method moves in the opposite direction
-            of the gradient.
-            (Default value: <strong>0.1</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>ConvergenceThreshold</strong></td>
-        <td>
-          <p>
-            When set, iterations stop if the relative change in the value of the objective function $\eqref{eq:objectiveFunc}$ is less than the provided threshold, $\tau$.
-            The convergence criterion is defined as follows: $\left| \frac{f(\wv)_{i-1} - f(\wv)_i}{f(\wv)_{i-1}}\right| < \tau$.
-            (Default value: <strong>None</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>LearningRateMethod</strong></td>
-        <td>
-          <p>
-            The learning rate method used to calculate the effective learning rate for each iteration.
-            See the Effective Learning Rate section below for the list of supported methods.
-            (Default value: <strong>LearningRateMethod.Default</strong>)
-          </p>
-        </td>
-      </tr>
-      <tr>
-        <td><strong>Decay</strong></td>
-        <td>
-          <p>
-            The decay constant $\tau$ used by decay-based learning rate methods (see the Effective Learning Rate section below).
-            (Default value: <strong>0.0</strong>)
-          </p>
-          </p>
-        </td>
-      </tr>
-    </tbody>
-  </table>
-
-### Loss Function
-
-The loss function to be minimized has to implement the `LossFunction` interface, which defines methods to compute the loss and its gradient.
-One can either define one's own `LossFunction` or use the `GenericLossFunction` class, which constructs the loss function from an outer loss function and a prediction function.
-An example can be seen here:
-
-{% highlight scala %}
-val lossFunction = GenericLossFunction(SquaredLoss, LinearPrediction)
-{% endhighlight %}
-
-The full list of supported outer loss functions can be found [here](#partial-loss-function-values).
-The full list of supported prediction functions can be found [here](#prediction-function-values).
-
-#### Partial Loss Function Values ##
-
-  <table class="table table-bordered">
-    <thead>
-      <tr>
-        <th class="text-left" style="width: 20%">Function Name</th>
-        <th class="text-center">Description</th>
-        <th class="text-center">Loss</th>
-        <th class="text-center">Loss Derivative</th>
-      </tr>
-    </thead>
-    <tbody>
-      <tr>
-        <td><strong>SquaredLoss</strong></td>
-        <td>
-          <p>
-            Loss function most commonly used for regression tasks.
-          </p>
-        </td>
-        <td class="text-center">$\frac{1}{2} (\wv^T \cdot \x - y)^2$</td>
-        <td class="text-center">$\wv^T \cdot \x - y$</td>
-      </tr>
-    </tbody>
-  </table>
-
-#### Prediction Function Values ##
-
-  <table class="table table-bordered">
-      <thead>
-        <tr>
-          <th class="text-left" style="width: 20%">Function Name</th>
-          <th class="text-center">Description</th>
-          <th class="text-center">Prediction</th>
-          <th class="text-center">Prediction Gradient</th>
-        </tr>
-      </thead>
-      <tbody>
-        <tr>
-          <td><strong>LinearPrediction</strong></td>
-          <td>
-            <p>
-              The function most commonly used for linear models, such as linear regression and
-              linear classifiers.
-            </p>
-          </td>
-          <td class="text-center">$\x^T \cdot \wv$</td>
-          <td class="text-center">$\x$</td>
-        </tr>
-      </tbody>
-    </table>
-
-#### Effective Learning Rate ##
-
-The formulas in the table below use the following notation:
-
-- $j$ is the iteration number
-
-- $\eta_j$ is the step size on step $j$
-
-- $\eta_0$ is the initial step size
-
-- $\lambda$ is the regularization constant
-
-- $\tau$ is the decay constant, which causes the learning rate to be a decreasing function of $j$; that is, as iterations increase, the learning rate decreases. The exact rate of decay is function specific; see **Inverse Scaling** and **Wei Xu's Method** (which is an extension of the **Inverse Scaling** method).
-
-<table class="table table-bordered">
-    <thead>
-      <tr>
-        <th class="text-left" style="width: 20%">Function Name</th>
-        <th class="text-center">Description</th>
-        <th class="text-center">Function</th>
-        <th class="text-center">Called As</th>
-      </tr>
-    </thead>
-    <tbody>
-      <tr>
-        <td><strong>Default</strong></td>
-        <td>
-          <p>
-            The default method used for determining the step size. This is equivalent to the inverse scaling method for $\tau$ = 0.5. This special case is kept as the default to maintain backwards compatibility.
-          </p>
-        </td>
-        <td class="text-center">$\eta_j = \eta_0/\sqrt{j}$</td>
-        <td class="text-center"><code>LearningRateMethod.Default</code></td>
-      </tr>
-      <tr>
-        <td><strong>Constant</strong></td>
-        <td>
-          <p> 
-            The step size is constant throughout the learning task.
-          </p>
-        </td>
-        <td class="text-center">$\eta_j = \eta_0$</td>
-        <td class="text-center"><code>LearningRateMethod.Constant</code></td>
-      </tr>
-      <tr>
-        <td><strong>Leon Bottou's Method</strong></td>
-        <td>
-          <p>
-            This is the <code>'optimal'</code> method of sklearn. 
-            The optimal initial value $t_0$ has to be provided.
-            Sklearn uses the following heuristic: $t_0 = \max(1.0, L^\prime(-\beta, 1.0) / (\alpha \cdot \beta))$
-            with $\beta = \sqrt{\frac{1}{\sqrt{\alpha}}}$ and $L^\prime(prediction, truth)$ being the derivative of the loss function. 
-          </p>
-        </td>
-        <td class="text-center">$\eta_j = 1 / (\lambda \cdot (t_0 + j -1)) $</td>
-        <td class="text-center"><code>LearningRateMethod.Bottou</code></td>
-      </tr>
-      <tr>
-        <td><strong>Inverse Scaling</strong></td>
-        <td>
-          <p>
-            A very common method for determining the step size.
-          </p>
-        </td>
-        <td class="text-center">$\eta_j = \eta_0 / j^{\tau}$</td>
-        <td class="text-center"><code>LearningRateMethod.InvScaling</code></td>
-      </tr>
-      <tr>
-        <td><strong>Wei Xu's Method</strong></td>
-        <td>
-          <p>
-            Method proposed by Wei Xu in <a href="http://arxiv.org/pdf/1107.2490.pdf">Towards Optimal One Pass Large Scale Learning with
-            Averaged Stochastic Gradient Descent</a>
-          </p>
-        </td>
-        <td class="text-center">$\eta_j = \eta_0 \cdot (1+ \lambda \cdot \eta_0 \cdot j)^{-\tau} $</td>
-        <td class="text-center"><code>LearningRateMethod.Xu</code></td>
-      </tr>
-    </tbody>
-  </table>
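-
-As a quick illustration of the schedules in the table, here is a sketch of the Default rule and Wei Xu's rule in plain Scala:
-
-{% highlight scala %}
-// Effective learning rate eta_j for two of the schedules above (illustration only).
-def defaultStepSize(eta0: Double, j: Int): Double =
-  eta0 / math.sqrt(j.toDouble)
-
-def xuStepSize(eta0: Double, lambda: Double, tau: Double, j: Int): Double =
-  eta0 * math.pow(1 + lambda * eta0 * j, -tau)
-{% endhighlight %}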
-
-### Examples
-
-In the Flink implementation of SGD, given a set of examples in a `DataSet[LabeledVector]` and
-optionally some initial weights, we can use `GradientDescentL1.optimize()` in order to optimize
-the weights for the given data.
-
-The user can provide an initial `DataSet[WeightVector]`,
-which contains one `WeightVector` element, or use the default weights which are all set to 0.
-A `WeightVector` is a container class for the weights, which separates the intercept from the
-weight vector. This allows us to avoid applying regularization to the intercept.
-
-
-
-{% highlight scala %}
-// Create stochastic gradient descent solver
-val sgd = GradientDescentL1()
-  .setLossFunction(SquaredLoss())
-  .setRegularizationConstant(0.2)
-  .setIterations(100)
-  .setLearningRate(0.01)
-  .setLearningRateMethod(LearningRateMethod.Xu(-0.75))
-
-
-// Obtain data
-val trainingDS: DataSet[LabeledVector] = ...
-
-// Optimize the weights, according to the provided data
-val weightDS = sgd.optimize(trainingDS)
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/pipelines.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/pipelines.md b/docs/apis/batch/libs/ml/pipelines.md
deleted file mode 100644
index f86476c..0000000
--- a/docs/apis/batch/libs/ml/pipelines.md
+++ /dev/null
@@ -1,445 +0,0 @@
----
-mathjax: include
-title: Looking under the hood of pipelines
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: Pipelines
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Introduction
-
-The ability to chain together different transformers and predictors is an important feature for
-any Machine Learning (ML) library. In FlinkML we wanted to provide an intuitive API,
-and at the same
-time utilize the capabilities of the Scala language to provide
-type-safe implementations of our pipelines. What we hope to achieve is an easy-to-use API
-that protects users from type errors at pre-flight time (before the job is launched), thereby
-eliminating cases where long-running jobs are submitted to the cluster only to fail due to some
-error in the series of data transformations that commonly happen in an ML pipeline.
-
-In this guide we will describe the choices we made during the implementation of chainable
-transformers and predictors in FlinkML, and provide guidelines on how developers can create their
-own algorithms that make use of these capabilities.
-
-## The what and the why
-
-So what do we mean by "ML pipelines"? Pipelines in the ML context can be thought of as chains of
-operations that take some data as input, perform a number of transformations on that data,
-and then output the transformed data, either to be used as the input (features) of a predictor
-function, such as a learning model, or to be used directly in some other task.
-The end learner can of course be a part of the pipeline as well.
-ML pipelines can often be complicated sets of operations ([in-depth explanation](http://research.google.com/pubs/pub43146.html)) and
-can become sources of errors for end-to-end learning systems.
-
-The purpose of ML pipelines is then to create a
-framework that can be used to manage the complexity introduced by these chains of operations.
-Pipelines should make it easy for developers to define chained transformations that can be
-applied to the
-training data, in order to create the end features that will be used to train a
-learning model, and then perform the same set of transformations just as easily to unlabeled
-(test) data. Pipelines should also simplify cross-validation and model selection on
-these chains of operations.
-
-Finally, by ensuring that the consecutive links in the pipeline chain "fit together" we also
-avoid costly type errors. Since each step in a pipeline can be a computationally-heavy operation,
-we want to avoid running a pipelined job, unless we are sure that all the input/output pairs in a
-pipeline "fit".
-
-## Pipelines in FlinkML
-
-The building blocks for pipelines in FlinkML can be found in the `ml.pipeline` package.
-FlinkML follows an API inspired by [sklearn](http://scikit-learn.org) which means that we have
-`Estimator`, `Transformer` and `Predictor` interfaces. For an in-depth look at the design of the
-sklearn API the interested reader is referred to [this](http://arxiv.org/abs/1309.0238) paper.
-In short, the `Estimator` is the base class from which `Transformer` and `Predictor` inherit.
-`Estimator` defines a `fit` method; in addition, `Transformer` defines a `transform` method and
-`Predictor` defines a `predict` method.
-
-The `fit` method of the `Estimator` performs the actual training of the model, for example
-finding the correct weights in a linear regression task, or the mean and standard deviation of
-the data in a feature scaler.
-As is evident from the naming, classes that implement
-`Transformer` are transform operations like [scaling the input](standard_scaler.html) and
-`Predictor` implementations are learning algorithms such as [Multiple Linear Regression]({{site.baseurl}}/libs/ml/multiple_linear_regression.html).
-Pipelines can be created by chaining together a number of Transformers, and the final link in a pipeline can be a Predictor or another Transformer.
-Pipelines that end with a Predictor cannot be chained any further.
-Below is an example of how a pipeline can be formed:
-
-{% highlight scala %}
-// Training data
-val input: DataSet[LabeledVector] = ...
-// Test data
-val unlabeled: DataSet[Vector] = ...
-
-val scaler = StandardScaler()
-val polyFeatures = PolynomialFeatures()
-val mlr = MultipleLinearRegression()
-
-// Construct the pipeline
-val pipeline = scaler
-  .chainTransformer(polyFeatures)
-  .chainPredictor(mlr)
-
-// Train the pipeline (scaler and multiple linear regression)
-pipeline.fit(input)
-
-// Calculate predictions for the testing data
-val predictions: DataSet[LabeledVector] = pipeline.predict(unlabeled)
-
-{% endhighlight %}
-
-As we mentioned, FlinkML pipelines are type-safe.
-If we tried to chain a transformer with output of type `A` to another with input of type `B` we
-would get an error at pre-flight time if `A` != `B`. FlinkML achieves this kind of type-safety
-through the use of Scala's implicits.
-
-### Scala implicits
-
-If you are not familiar with Scala's implicits we can recommend [this excerpt](https://www.artima.com/pins1ed/implicit-conversions-and-parameters.html)
-from Martin Odersky's "Programming in Scala". In short, implicit conversions allow for ad-hoc
-polymorphism in Scala by providing conversions from one type to another, and implicit values
-provide the compiler with default values that can be supplied to function calls through implicit parameters.
-The combination of implicit conversions and implicit parameters is what allows us to chain transform
-and predict operations together in a type-safe manner.
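-
-For readers new to the mechanism, here is a tiny, generic illustration of an implicit parameter in plain Scala (unrelated to the FlinkML code itself):
-
-{% highlight scala %}
-// The compiler fills in `ord` from implicit scope when it is not passed explicitly.
-def maxOf[T](a: T, b: T)(implicit ord: Ordering[T]): T =
-  if (ord.gt(a, b)) a else b
-
-maxOf(3, 5) // Ordering[Int] is found implicitly; returns 5
-{% endhighlight %}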
-
-### Operations
-
-As we mentioned, the trait (abstract class) `Estimator` defines a `fit` method. The method has two
-parameter lists
-(i.e. is a [curried function](http://docs.scala-lang.org/tutorials/tour/currying.html)). The
-first parameter list
-takes the input (training) `DataSet` and the parameters for the estimator. The second parameter
-list takes one `implicit` parameter, of type `FitOperation`. `FitOperation` is a class that also
-defines a `fit` method, and this is where the actual logic of training the concrete Estimators
-should be implemented. The `fit` method of `Estimator` is essentially a wrapper around the  fit
-method of `FitOperation`. The `predict` method of `Predictor` and the `transform` method of
-`Transformer` are designed in a similar manner, with a respective operation class.
-
-In these methods the operation object is provided as an implicit parameter.
-Scala will [look for implicits](http://docs.scala-lang.org/tutorials/FAQ/finding-implicits.html)
-in the companion object of a type, so classes that implement these interfaces should provide these 
-objects as implicit objects inside the companion object.
-
-As an example we can look at the `StandardScaler` class. `StandardScaler` extends `Transformer`, so it has access to its `fit` and `transform` functions.
-These two functions expect objects of `FitOperation` and `TransformOperation` as implicit parameters, 
-for the `fit` and `transform` methods respectively, which `StandardScaler` provides in its companion 
-object, through `transformVectors` and `fitVectorStandardScaler`:
-
-{% highlight scala %}
-class StandardScaler extends Transformer[StandardScaler] {
-  ...
-}
-
-object StandardScaler {
-
-  ...
-
-  implicit def fitVectorStandardScaler[T <: Vector] = new FitOperation[StandardScaler, T] {
-    override def fit(instance: StandardScaler, fitParameters: ParameterMap, input: DataSet[T])
-      : Unit = {
-        ...
-      }
-  }
-
-  implicit def transformVectors[T <: Vector: VectorConverter: TypeInformation: ClassTag] = {
-    new TransformOperation[StandardScaler, T, T] {
-      override def transform(
-          instance: StandardScaler,
-          transformParameters: ParameterMap,
-          input: DataSet[T])
-        : DataSet[T] = {
-          ...
-        }
-    }
-  }
-}
-
-{% endhighlight %}
-
-Note that `StandardScaler` does **not** override the `fit` method of `Estimator` or the `transform`
-method of `Transformer`. Rather, its implementations of `FitOperation` and `TransformOperation`
-override their respective `fit` and `transform` methods, which are then called by the `fit` and
-`transform` methods of `Estimator` and `Transformer`.  Similarly, a class that implements
-`Predictor` should define an implicit `PredictOperation` object inside its companion object.
-
-#### Types and type safety
-
-Apart from the `fit` and `transform` operations that we listed above, the `StandardScaler` also
-provides `fit` and `transform` operations for input of type `LabeledVector`.
-This allows us to use the algorithm for input that is labeled or unlabeled, and this happens
-automatically, depending on the type of the input that we give to the fit and transform
-operations. The correct implicit operation is chosen by the compiler, depending on the input type.
-
-If we try to call the `fit` or `transform` methods with types that are not supported we will get a 
-runtime error before the job is launched. 
-While it would be possible to catch these kinds of errors at compile time as well, the error 
-messages that we are able to provide the user would be much less informative, which is why we chose 
-to throw runtime exceptions instead.
-
-### Chaining
-
-Chaining is achieved by calling `chainTransformer` or `chainPredictor` on an object
-of a class that implements `Transformer`. These methods return a `ChainedTransformer` or
-`ChainedPredictor` object respectively. As we mentioned, `ChainedTransformer` objects can be
-chained further, while `ChainedPredictor` objects cannot. These classes take care of applying
-fit, transform, and predict operations for a pair of successive transformers or
-a transformer and a predictor. They also act recursively if the length of the
-chain is larger than two, since every `ChainedTransformer` defines a `transform` and `fit`
-operation that can be further chained with more transformers or a predictor.
-
-It is important to note that developers and users do not need to worry about chaining when
-implementing their algorithms, all this is handled automatically by FlinkML.
-
-### How to Implement a Pipeline Operator
-
-In order to support FlinkML's pipelining, algorithms have to adhere to a certain design pattern, which we will describe in this section.
-Let's assume that we want to implement a pipeline operator which changes the mean of your data.
-Since centering data is a common pre-processing step in many analysis pipelines, we will implement it as a `Transformer`.
-Therefore, we first create a `MeanTransformer` class which inherits from `Transformer`
-
-{% highlight scala %}
-class MeanTransformer extends Transformer[MeanTransformer] {}
-{% endhighlight %}
-
-Since we want to be able to configure the mean of the resulting data, we have to add a configuration parameter.
-
-{% highlight scala %}
-class MeanTransformer extends Transformer[MeanTransformer] {
-  def setMean(mean: Double): this.type = {
-    parameters.add(MeanTransformer.Mean, mean)
-    this
-  }
-}
-
-object MeanTransformer {
-  case object Mean extends Parameter[Double] {
-    override val defaultValue: Option[Double] = Some(0.0)
-  }
-  
-  def apply(): MeanTransformer = new MeanTransformer
-}
-{% endhighlight %}
-
-Parameters are defined in the companion object of the transformer class and extend the `Parameter` class.
-Since the parameter instances are supposed to act as immutable keys for a parameter map, they should be implemented as `case objects`.
-The default value will be used if no other value has been set by the user of this component.
-If no default value has been specified, meaning that `defaultValue = None`, then the algorithm has to handle this situation accordingly.
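-
-Inside an operation, the configured value is later retrieved by merging the parameters of the instance with those passed to the call; the default from the companion object applies if neither is set. A sketch of this lookup pattern, as used by the `TransformOperation` further below:
-
-{% highlight scala %}
-// Resolve the effective parameter value (sketch of the lookup pattern).
-val resultingParameters = instance.parameters ++ transformParameters
-val mean = resultingParameters(MeanTransformer.Mean)
-{% endhighlight %}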
-
-We can now instantiate a `MeanTransformer` object and set the mean value of the transformed data.
-But we still have to implement how the transformation works.
-The workflow can be separated into two phases.
-Within the first phase, the transformer learns the mean of the given training data.
-This knowledge can then be used in the second phase to transform the provided data with respect to the configured resulting mean value.
-
-The learning of the mean can be implemented within the `fit` operation of our `Transformer`, which it inherited from `Estimator`.
-Within the `fit` operation, a pipeline component is trained with respect to the given training data.
-The algorithm is, however, **not** implemented by overriding the `fit` method but by providing an implementation of a corresponding `FitOperation` for the correct type.
-Taking a look at the definition of the `fit` method in `Estimator`, which is the parent class of `Transformer`, reveals why this is the case.
-
-{% highlight scala %}
-trait Estimator[Self] extends WithParameters with Serializable {
-  that: Self =>
-
-  def fit[Training](
-      training: DataSet[Training],
-      fitParameters: ParameterMap = ParameterMap.Empty)
-      (implicit fitOperation: FitOperation[Self, Training]): Unit = {
-    FlinkMLTools.registerFlinkMLTypes(training.getExecutionEnvironment)
-    fitOperation.fit(this, fitParameters, training)
-  }
-}
-{% endhighlight %}
-
-We see that the `fit` method is called with an input data set of type `Training`, an optional parameter list and in the second parameter list with an implicit parameter of type `FitOperation`.
-Within the body of the function, first some machine learning types are registered and then the `fit` method of the `FitOperation` parameter is called.
-The instance passes itself, the parameter map and the training data set as parameters to the method.
-Thus, all the program logic takes place within the `FitOperation`.
-
-The `FitOperation` has two type parameters.
-The first defines the pipeline operator type for which this `FitOperation` shall work and the second type parameter defines the type of the data set elements.
-If we first wanted to implement the `MeanTransformer` to work on `DenseVector`, we would, thus, have to provide an implementation for `FitOperation[MeanTransformer, DenseVector]`.
- 
-{% highlight scala %}
-val denseVectorMeanFitOperation = new FitOperation[MeanTransformer, DenseVector] {
-  override def fit(instance: MeanTransformer, fitParameters: ParameterMap, input: DataSet[DenseVector]) : Unit = {
-    import org.apache.flink.ml.math.Breeze._
-    val meanTrainingData: DataSet[DenseVector] = input
-      .map{ x => (x.asBreeze, 1) }
-      .reduce{
-        (left, right) => 
-          (left._1 + right._1, left._2 + right._2) 
-      }
-      .map{ p => (p._1/p._2).fromBreeze }
-  }
-}
-{% endhighlight %}
-
-A `FitOperation[T, I]` has a `fit` method which is called with an instance of type `T`, a parameter map and an input `DataSet[I]`.
-In our case `T=MeanTransformer` and `I=DenseVector`.
-The parameter map is necessary if our fit step depends on some parameter values which were not given directly at creation time of the `Transformer`.
-The `FitOperation` of the `MeanTransformer` sums up the `DenseVector` instances of the given input data set and divides the result by the total number of vectors.
-That way, we obtain a `DataSet[DenseVector]` with a single element which is the mean value.
-
-But if we look closely at the implementation, we see that the result of the mean computation is never stored anywhere.
-If we want to use this knowledge in a later step to adjust the mean of some other input, we have to keep it around.
-And here is where the parameter of type `MeanTransformer` which is given to the `fit` method comes into play.
-We can use this instance to store state, which is used by a subsequent `transform` operation which works on the same object.
-But first we have to extend `MeanTransformer` by a member field and then adjust the `FitOperation` implementation.
-
-{% highlight scala %}
-class MeanTransformer extends Transformer[MeanTransformer] {
-  var meanOption: Option[DataSet[DenseVector]] = None
-
-  def setMean(mean: Double): this.type = {
-    parameters.add(MeanTransformer.Mean, mean)
-    this
-  }
-}
-
-val denseVectorMeanFitOperation = new FitOperation[MeanTransformer, DenseVector] {
-  override def fit(instance: MeanTransformer, fitParameters: ParameterMap, input: DataSet[DenseVector]) : Unit = {
-    import org.apache.flink.ml.math.Breeze._
-    
-    instance.meanOption = Some(input
-      .map{ x => (x.asBreeze, 1) }
-      .reduce{
-        (left, right) => 
-          (left._1 + right._1, left._2 + right._2) 
-      }
-      .map{ p => (p._1/p._2).fromBreeze })
-  }
-}
-{% endhighlight %}
-
-If we look at the `transform` method in `Transformer`, we will see that we also need an implementation of `TransformOperation`.
-A possible mean transforming implementation could look like the following.
-
-{% highlight scala %}
-
-val denseVectorMeanTransformOperation = new TransformOperation[MeanTransformer, DenseVector, DenseVector] {
-  override def transform(
-      instance: MeanTransformer, 
-      transformParameters: ParameterMap, 
-      input: DataSet[DenseVector]) 
-    : DataSet[DenseVector] = {
-    val resultingParameters = instance.parameters ++ transformParameters
-    
-    val resultingMean = resultingParameters(MeanTransformer.Mean)
-    
-    instance.meanOption match {
-      case Some(trainingMean) => {
-        input.map{ new MeanTransformMapper(resultingMean) }.withBroadcastSet(trainingMean, "trainingMean")
-      }
-      case None => throw new RuntimeException("MeanTransformer has not been fitted to data.")
-    }
-  }
-}
-
-class MeanTransformMapper(resultingMean: Double) extends RichMapFunction[DenseVector, DenseVector] {
-  var trainingMean: DenseVector = null
-
-  override def open(parameters: Configuration): Unit = {
-    trainingMean = getRuntimeContext().getBroadcastVariable[DenseVector]("trainingMean").get(0)
-  }
-  
-  override def map(vector: DenseVector): DenseVector = {
-    import org.apache.flink.ml.math.Breeze._
-    
-    val result = vector.asBreeze - trainingMean.asBreeze + resultingMean
-    
-    result.fromBreeze
-  }
-}
-{% endhighlight %}
-
-Now we have everything implemented to fit our `MeanTransformer` to a training data set of `DenseVector` instances and to transform them.
-However, when we execute the `fit` operation
-
-{% highlight scala %}
-val trainingData: DataSet[DenseVector] = ...
-val meanTransformer = MeanTransformer()
-
-meanTransformer.fit(trainingData)
-{% endhighlight %}
-
-we receive the following error at runtime: `"There is no FitOperation defined for class MeanTransformer which trains on a DataSet[org.apache.flink.ml.math.DenseVector]"`.
-The reason is that the Scala compiler could not find a fitting `FitOperation` value with the right type parameters for the implicit parameter of the `fit` method.
-Therefore, it chose a fallback implicit value which gives you this error message at runtime.
-In order to make the compiler aware of our implementation, we have to define it as an implicit value and put it in the scope of the `MeanTransformer`'s companion object.
-
-{% highlight scala %}
-object MeanTransformer {
-  implicit val denseVectorMeanFitOperation = new FitOperation[MeanTransformer, DenseVector] ...
-  
-  implicit val denseVectorMeanTransformOperation = new TransformOperation[MeanTransformer, DenseVector, DenseVector] ...
-}
-{% endhighlight %}
-
-Now we can call `fit` and `transform` of our `MeanTransformer` with `DataSet[DenseVector]` as input.
-Furthermore, we can now use this transformer as part of an analysis pipeline where we have a `DenseVector` as input and expected output.
-
-{% highlight scala %}
-val trainingData: DataSet[DenseVector] = ...
-
-val mean = MeanTransformer().setMean(1.0)
-val polyFeatures = PolynomialFeatures().setDegree(3)
-
-val pipeline = mean.chainTransformer(polyFeatures)
-
-pipeline.fit(trainingData)
-{% endhighlight %}
-
-It is noteworthy that there is no additional code needed to enable chaining.
-The system automatically constructs the pipeline logic using the operations of the individual components.
-
-So far everything works fine with `DenseVector`.
-But what happens, if we call our transformer with `LabeledVector` instead?
-{% highlight scala %}
-val trainingData: DataSet[LabeledVector] = ...
-
-val mean = MeanTransformer()
-
-mean.fit(trainingData)
-{% endhighlight %}
-
-As before we see the following exception upon execution of the program: `"There is no FitOperation defined for class MeanTransformer which trains on a DataSet[org.apache.flink.ml.common.LabeledVector]"`.
-It is noteworthy that this exception is thrown in the pre-flight phase, which means that the job has not been submitted to the runtime system.
-This has the advantage that you won't see a job which runs for a couple of days and then fails because of an incompatible pipeline component.
-Type compatibility is, thus, checked at the very beginning for the complete job.
-
-In order to make the `MeanTransformer` work on `LabeledVector` as well, we have to provide the corresponding operations.
-Consequently, we have to define a `FitOperation[MeanTransformer, LabeledVector]` and `TransformOperation[MeanTransformer, LabeledVector, LabeledVector]` as implicit values in the scope of `MeanTransformer`'s companion object.
-
-{% highlight scala %}
-object MeanTransformer {
-  implicit val labeledVectorFitOperation = new FitOperation[MeanTransformer, LabeledVector] ...
-  
-  implicit val labeledVectorTransformOperation = new TransformOperation[MeanTransformer, LabeledVector, LabeledVector] ...
-}
-{% endhighlight %}
-
-If we wanted to implement a `Predictor` instead of a `Transformer`, then we would have to provide a `FitOperation`, too.
-Moreover, a `Predictor` requires a `PredictOperation` which implements how predictions are calculated from testing data.  
-
-

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/polynomial_features.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/polynomial_features.md b/docs/apis/batch/libs/ml/polynomial_features.md
deleted file mode 100644
index 9ef7654..0000000
--- a/docs/apis/batch/libs/ml/polynomial_features.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-mathjax: include
-title: Polynomial Features
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: Polynomial Features
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-
-The polynomial features transformer maps a vector into the polynomial feature space of degree $d$.
-The dimension of the input vector determines the number of polynomial factors whose values are the respective vector entries.
-Given a vector $(x, y, z, \ldots)^T$ the resulting feature vector looks like:
-
-$$\left(x, y, z, x^2, xy, y^2, yz, z^2, x^3, x^2y, x^2z, xy^2, xyz, xz^2, y^3, \ldots\right)^T$$
-
-Flink's implementation orders the polynomials in decreasing order of their degree.
-
-Given the vector $\left(3,2\right)^T$, the polynomial features vector of degree 3 would look like
- 
- $$\left(3^3, 3^2\cdot2, 3\cdot2^2, 2^3, 3^2, 3\cdot2, 2^2, 3, 2\right)^T$$
-
-This transformer can be prepended to all `Transformer` and `Predictor` implementations which expect an input of type `LabeledVector` or any sub-type of `Vector`.
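-
-For the two-dimensional case, the mapping can be sketched in a few lines of plain Scala. This reproduces the ordering of the worked example above and is not the Flink implementation:
-
-{% highlight scala %}
-// Polynomial features of (x, y) up to degree d, in decreasing order of degree.
-def polyFeatures2D(x: Double, y: Double, d: Int): Seq[Double] =
-  for {
-    degree <- d to 1 by -1
-    i <- degree to 0 by -1
-  } yield math.pow(x, i) * math.pow(y, degree - i)
-
-polyFeatures2D(3.0, 2.0, 3)
-// Seq(27.0, 18.0, 12.0, 8.0, 9.0, 6.0, 4.0, 3.0, 2.0)
-{% endhighlight %}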
-
-## Operations
-
-`PolynomialFeatures` is a `Transformer`.
-As such, it supports the `fit` and `transform` operations.
-
-### Fit
-
-PolynomialFeatures is not trained on data and, thus, supports all types of input data.
-
-### Transform
-
-PolynomialFeatures transforms all subtypes of `Vector` and `LabeledVector` into their respective types: 
-
-* `transform[T <: Vector]: DataSet[T] => DataSet[T]`
-* `transform: DataSet[LabeledVector] => DataSet[LabeledVector]`
-
-## Parameters
-
-The polynomial features transformer can be controlled by the following parameters:
-
-<table class="table table-bordered">
-    <thead>
-      <tr>
-        <th class="text-left" style="width: 20%">Parameters</th>
-        <th class="text-center">Description</th>
-      </tr>
-    </thead>
-
-    <tbody>
-      <tr>
-        <td><strong>Degree</strong></td>
-        <td>
-          <p>
-            The maximum polynomial degree. 
-            (Default value: <strong>10</strong>)
-          </p>
-        </td>
-      </tr>
-    </tbody>
-  </table>
-
-## Examples
-
-{% highlight scala %}
-// Obtain the training data set
-val trainingDS: DataSet[LabeledVector] = ...
-
-// Set up the polynomial feature transformer of degree 3
-val polyFeatures = PolynomialFeatures()
-  .setDegree(3)
-
-// Set up the multiple linear regression learner
-val mlr = MultipleLinearRegression()
-
-// Control the learner via the parameter map
-val parameters = ParameterMap()
-  .add(MultipleLinearRegression.Iterations, 20)
-  .add(MultipleLinearRegression.Stepsize, 0.5)
-
-// Create pipeline PolynomialFeatures -> MultipleLinearRegression
-val pipeline = polyFeatures.chainPredictor(mlr)
-
-// Train the model, passing the parameter map to the learner
-pipeline.fit(trainingDS, parameters)
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/quickstart.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/quickstart.md b/docs/apis/batch/libs/ml/quickstart.md
deleted file mode 100644
index 60f505e..0000000
--- a/docs/apis/batch/libs/ml/quickstart.md
+++ /dev/null
@@ -1,244 +0,0 @@
----
-mathjax: include
-title: Quickstart Guide
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: Quickstart Guide
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Introduction
-
-FlinkML is designed to make learning from your data a straightforward process, abstracting away
-the complexities that usually come with big data learning tasks. In this
-quick-start guide we will show just how easy it is to solve a simple supervised learning problem
-using FlinkML. But first, some basics; feel free to skip the next few lines if you're already
-familiar with Machine Learning (ML).
-
-As defined by Murphy [[1]](#murphy), ML deals with detecting patterns in data and using those
-learned patterns to make predictions about the future. We can categorize most ML algorithms into
-two major categories: Supervised and Unsupervised Learning.
-
-* **Supervised Learning** deals with learning a function (mapping) from a set of inputs
-(features) to a set of outputs. The learning is done using a *training set* of (input,
-output) pairs that we use to approximate the mapping function. Supervised learning problems are
-further divided into classification and regression problems. In classification problems we try to
-predict the *class* that an example belongs to, for example whether a user is going to click on
-an ad or not. Regression problems, on the other hand, are about predicting (real) numerical
-values, often called the dependent variable, for example what the temperature will be tomorrow.
-
-* **Unsupervised Learning** deals with discovering patterns and regularities in the data. An example
-of this would be *clustering*, where we try to discover groupings of the data from the
-descriptive features. Unsupervised learning can also be used for feature selection, for example
-through [principal components analysis](https://en.wikipedia.org/wiki/Principal_component_analysis).
-
-## Linking with FlinkML
-
-In order to use FlinkML in your project, first you have to
-[set up a Flink program](http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#linking-with-flink).
-Next, you have to add the FlinkML dependency to the `pom.xml` of your project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-ml{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-## Loading data
-
-To load data to be used with FlinkML we can use the ETL capabilities of Flink, or specialized
-functions for formatted data, such as the LibSVM format. For supervised learning problems it is
-common to use the `LabeledVector` class to represent the `(label, features)` examples. A `LabeledVector`
-object will have a FlinkML `Vector` member representing the features of the example and a `Double`
-member which represents the label, which could be the class in a classification problem, or the dependent
-variable for a regression problem.
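-
-For instance, a single labeled example could be constructed like this (a minimal sketch; the feature values are made up):
-
-{% highlight scala %}
-import org.apache.flink.ml.common.LabeledVector
-import org.apache.flink.ml.math.DenseVector
-
-// label 1.0 with three numerical features
-val example = LabeledVector(1.0, DenseVector(30.0, 64.0, 1.0))
-{% endhighlight %}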
-
-As an example, we can use Haberman's Survival Data Set, which you can
-[download from the UCI ML repository](http://archive.ics.uci.edu/ml/machine-learning-databases/haberman/haberman.data).
-This dataset *"contains cases from a study conducted on the survival of patients who had undergone
-surgery for breast cancer"*. The data comes in a comma-separated file, where the first 3 columns
-are the features and the 4th column is the class label: it indicates whether the patient
-survived 5 years or longer (label 1) or died within 5 years (label 2). You can check the [UCI
-page](https://archive.ics.uci.edu/ml/datasets/Haberman%27s+Survival) for more information on the data.
-
-We can load the data as a `DataSet[String]` first:
-
-{% highlight scala %}
-
-import org.apache.flink.api.scala.ExecutionEnvironment
-
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val survival = env.readCsvFile[(String, String, String, String)]("/path/to/haberman.data")
-
-{% endhighlight %}
-
-We can now transform the data into a `DataSet[LabeledVector]`. This will allow us to use the
-dataset with the FlinkML classification algorithms. We know that the 4th element of the dataset
-is the class label, and the rest are features, so we can build `LabeledVector` elements like this:
-
-{% highlight scala %}
-
-import org.apache.flink.ml.common.LabeledVector
-import org.apache.flink.ml.math.DenseVector
-
-val survivalLV = survival
-  .map{tuple =>
-    val list = tuple.productIterator.toList
-    val numList = list.map(_.asInstanceOf[String].toDouble)
-    LabeledVector(numList(3), DenseVector(numList.take(3).toArray))
-  }
-
-{% endhighlight %}
-
-We could then use this data to train a learner. However, we will use another dataset to
-exemplify building a learner; that will allow us to show how to import other dataset formats.
-
-**LibSVM files**
-
-A common format for ML datasets is the LibSVM format, and a number of datasets using that format can be
-found [at the LibSVM datasets website](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/). FlinkML provides utilities for loading
-datasets in the LibSVM format through the `readLibSVM` function available in the `MLUtils`
-object.
-You can also save datasets in the LibSVM format using the `writeLibSVM` function.
-Let's import the svmguide1 dataset. You can download the
-[training set here](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/svmguide1)
-and the [test set here](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/svmguide1.t).
-This is an astroparticle binary classification dataset, used by Hsu et al. [[3]](#hsu) in their 
-practical Support Vector Machine (SVM) guide. It contains 4 numerical features and the class label.
-
-We can then simply import the dataset using:
-
-{% highlight scala %}
-
-import org.apache.flink.ml.MLUtils
-
-val astroTrain: DataSet[LabeledVector] = MLUtils.readLibSVM("/path/to/svmguide1")
-val astroTest: DataSet[LabeledVector] = MLUtils.readLibSVM("/path/to/svmguide1.t")
-
-{% endhighlight %}
-
-This gives us two `DataSet[LabeledVector]` objects that we will use in the following section to
-create a classifier.
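-
-Conversely, a `DataSet[LabeledVector]` can be persisted with `writeLibSVM`; a minimal sketch (assuming the function takes the output path followed by the data set, and the path is made up):
-
-{% highlight scala %}
-// Write the training data back out in LibSVM format
-MLUtils.writeLibSVM("/path/to/output/svmguide1_copy", astroTrain)
-{% endhighlight %}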
-
-## Classification
-
-Once we have imported the dataset we can train a `Predictor` such as a linear SVM classifier.
-We can set a number of parameters for the classifier. Here we set the `Blocks` parameter,
-which is used to split the input for the underlying CoCoA algorithm [[2]](#jaggi). The
-regularization parameter determines the amount of $l_2$ regularization applied, which helps
-to avoid overfitting. The step size determines the contribution of the weight vector updates to
-the next weight vector value; the parameter sets the initial step size.
-
-{% highlight scala %}
-
-import org.apache.flink.ml.classification.SVM
-
-val svm = SVM()
-  .setBlocks(env.getParallelism)
-  .setIterations(100)
-  .setRegularization(0.001)
-  .setStepsize(0.1)
-  .setSeed(42)
-
-svm.fit(astroTrain)
-
-{% endhighlight %}
-
-We can now make predictions on the test set.
-
-{% highlight scala %}
-
-val predictionPairs = svm.predict(astroTest)
-
-{% endhighlight %}
-
-Next we will see how we can pre-process our data, and use the ML pipelines capabilities of FlinkML.
-
-## Data pre-processing and pipelines
-
-A pre-processing step that is often encouraged [[3]](#hsu) when using SVM classification is scaling
-the input features to the [0, 1] range, in order to prevent features with extreme values
-from dominating the rest.
-FlinkML has a number of `Transformers` such as `MinMaxScaler` that are used to pre-process data,
-and a key feature is the ability to chain `Transformers` and `Predictors` together. This allows
-us to run the same pipeline of transformations and make predictions on the train and test data in
-a straight-forward and type-safe manner. You can read more on the pipeline system of FlinkML
-[in the pipelines documentation](pipelines.html).
-
-Let us first create a normalizing transformer for the features in our dataset, and chain it to a
-new SVM classifier.
-
-{% highlight scala %}
-
-import org.apache.flink.ml.preprocessing.MinMaxScaler
-
-val scaler = MinMaxScaler()
-
-val scaledSVM = scaler.chainPredictor(svm)
-
-{% endhighlight %}
-
-We can now use our newly created pipeline to make predictions on the test set.
-First we call `fit` again to train the scaler and the SVM classifier.
-The data of the test set will then be automatically scaled before being passed on to the SVM to
-make predictions.
-
-{% highlight scala %}
-
-scaledSVM.fit(astroTrain)
-
-val predictionPairsScaled: DataSet[(Double, Double)] = scaledSVM.predict(astroTest)
-
-{% endhighlight %}
-
-The scaled inputs should give us better prediction performance.
-The result of the prediction on `LabeledVector`s is a data set of tuples where the first entry denotes the true label value and the second entry is the predicted label value.
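-
-These pairs make a quick evaluation straightforward. For example, a minimal sketch (not part of this guide's original code) that computes the accuracy on the test set:
-
-{% highlight scala %}
-val accuracy = predictionPairsScaled
-  .map { case (truth, prediction) => (if (truth == prediction) 1 else 0, 1) }
-  .reduce((a, b) => (a._1 + b._1, a._2 + b._2))   // (correct, total)
-  .map { case (correct, total) => correct.toDouble / total }
-
-accuracy.print()
-{% endhighlight %}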
-
-## Where to go from here
-
-This quickstart guide can act as an introduction to the basic concepts of FlinkML, but there's a lot
-more you can do.
-We recommend going through the [FlinkML documentation](index.html), and trying out the different
-algorithms.
-A very good way to get started is to play around with interesting datasets from the UCI ML
-repository and the LibSVM datasets.
-Tackling an interesting problem from a website like [Kaggle](https://www.kaggle.com) or
-[DrivenData](http://www.drivendata.org/) is also a great way to learn by competing with other
-data scientists.
-If you would like to contribute some new algorithms take a look at our
-[contribution guide](contribution_guide.html).
-
-**References**
-
-<a name="murphy"></a>[1] Murphy, Kevin P. *Machine learning: a probabilistic perspective.* MIT 
-press, 2012.
-
-<a name="jaggi"></a>[2] Jaggi, Martin, et al. *Communication-efficient distributed dual 
-coordinate ascent.* Advances in Neural Information Processing Systems. 2014.
-
-<a name="hsu"></a>[3] Hsu, Chih-Wei, Chih-Chung Chang, and Chih-Jen Lin.
- *A practical guide to support vector classification.* 2003.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/standard_scaler.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/standard_scaler.md b/docs/apis/batch/libs/ml/standard_scaler.md
deleted file mode 100644
index 3a9cd4b..0000000
--- a/docs/apis/batch/libs/ml/standard_scaler.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-mathjax: include
-title: Standard Scaler
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: Standard Scaler
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-
-The standard scaler scales the given data set so that all features have a user-specified mean and standard deviation.
-In case the user does not provide a specific mean and standard deviation, the standard scaler transforms the features of the input data set to have mean equal to 0 and standard deviation equal to 1.
-Given a set of input data $x_1, x_2, \ldots, x_n$, with mean:
-
-$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_{i}$$
-
-and standard deviation:
-
-$$\sigma_{x}=\sqrt{ \frac{1}{n} \sum_{i=1}^{n}(x_{i}-\bar{x})^{2}}$$
-
-The scaled data set $z_1, z_2, \ldots, z_n$ will be:
-
-$$z_{i} = \textit{std} \cdot \frac{x_{i} - \bar{x}}{\sigma_{x}} + \textit{mean}$$
-
-where $\textit{std}$ and $\textit{mean}$ are the user-specified values for the standard deviation and mean.
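-
-For example, with the default parameters ($\textit{mean} = 0$, $\textit{std} = 1$) and the inputs $1, 2, 3$ we get $\bar{x} = 2$ and $\sigma_{x} = \sqrt{2/3} \approx 0.816$, so the scaled values are approximately
-
-$$\left(\frac{1 - 2}{0.816}, \frac{2 - 2}{0.816}, \frac{3 - 2}{0.816}\right) = \left(-1.225, 0, 1.225\right)$$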
-
-## Operations
-
-`StandardScaler` is a `Transformer`.
-As such, it supports the `fit` and `transform` operations.
-
-### Fit
-
-StandardScaler is trained on all subtypes of `Vector` or `LabeledVector`: 
-
-* `fit[T <: Vector]: DataSet[T] => Unit` 
-* `fit: DataSet[LabeledVector] => Unit`
-
-### Transform
-
-StandardScaler transforms all subtypes of `Vector` or `LabeledVector` into the respective type: 
-
-* `transform[T <: Vector]: DataSet[T] => DataSet[T]` 
-* `transform: DataSet[LabeledVector] => DataSet[LabeledVector]`
-
-## Parameters
-
-The standard scaler implementation can be controlled by the following two parameters:
-
- <table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Parameters</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>Mean</strong></td>
-      <td>
-        <p>
-          The mean of the scaled data set. (Default value: <strong>0.0</strong>)
-        </p>
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Std</strong></td>
-      <td>
-        <p>
-          The standard deviation of the scaled data set. (Default value: <strong>1.0</strong>)
-        </p>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-## Examples
-
-{% highlight scala %}
-// Create standard scaler transformer
-val scaler = StandardScaler()
-  .setMean(10.0)
-  .setStd(2.0)
-
-// Obtain data set to be scaled
-val dataSet: DataSet[Vector] = ...
-
-// Learn the mean and standard deviation of the training data
-scaler.fit(dataSet)
-
-// Scale the provided data set to have mean=10.0 and std=2.0
-val scaledDS = scaler.transform(dataSet)
-{% endhighlight %}


[89/89] [abbrv] flink git commit: [hotfix] Remove RecoveryMode from JobMaster

Posted by se...@apache.org.
[hotfix] Remove RecoveryMode from JobMaster

The recovery mode is not used any more by the latest CheckpointCoordinator.

All differences in recovery logic between high-availability and non-high-availability
are encapsulated in the HighAvailabilityServices.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/73429761
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/73429761
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/73429761

Branch: refs/heads/flip-6
Commit: 734297615a772568ccbe0f5857f5df5d46d3acd3
Parents: 4515c85
Author: Stephan Ewen <se...@apache.org>
Authored: Thu Aug 25 20:37:15 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:37:15 2016 +0200

----------------------------------------------------------------------
 .../java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java    | 3 ---
 1 file changed, 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/73429761/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
index 49b200b..a046cb8 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
@@ -22,7 +22,6 @@ import org.apache.flink.api.common.JobID;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
 import org.apache.flink.runtime.jobgraph.JobGraph;
-import org.apache.flink.runtime.jobmanager.RecoveryMode;
 import org.apache.flink.runtime.leaderelection.LeaderContender;
 import org.apache.flink.runtime.leaderelection.LeaderElectionService;
 import org.apache.flink.runtime.messages.Acknowledge;
@@ -57,7 +56,6 @@ public class JobMaster extends RpcEndpoint<JobMasterGateway> {
 
 	/** Configuration of the job */
 	private final Configuration configuration;
-	private final RecoveryMode recoveryMode;
 
 	/** Service to contend for and retrieve the leadership of JM and RM */
 	private final HighAvailabilityServices highAvailabilityServices;
@@ -86,7 +84,6 @@ public class JobMaster extends RpcEndpoint<JobMasterGateway> {
 		this.jobID = Preconditions.checkNotNull(jobGraph.getJobID());
 
 		this.configuration = Preconditions.checkNotNull(configuration);
-		this.recoveryMode = RecoveryMode.fromConfig(configuration);
 
 		this.highAvailabilityServices = Preconditions.checkNotNull(highAvailabilityService);
 	}


[18/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/kinesis.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kinesis.md b/docs/dev/connectors/kinesis.md
new file mode 100644
index 0000000..ce011b3
--- /dev/null
+++ b/docs/dev/connectors/kinesis.md
@@ -0,0 +1,319 @@
+---
+title: "Amazon AWS Kinesis Streams Connector"
+nav-title: Kinesis
+nav-parent_id: connectors
+nav-pos: 3
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The Kinesis connector provides access to [Amazon AWS Kinesis Streams](http://aws.amazon.com/kinesis/streams/).
+
+To use the connector, add the following Maven dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-kinesis{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+**The `flink-connector-kinesis{{ site.scala_version_suffix }}` has a dependency on code licensed under the [Amazon Software License](https://aws.amazon.com/asl/) (ASL).
+Linking to the flink-connector-kinesis will include ASL licensed code into your application.**
+
+The `flink-connector-kinesis{{ site.scala_version_suffix }}` artifact is not deployed to Maven central as part of
+Flink releases because of the licensing issue. Therefore, you need to build the connector yourself from the source.
+
+Download the Flink source or check it out from the git repository. Then, use the following Maven command to build the module:
+{% highlight bash %}
+mvn clean install -Pinclude-kinesis -DskipTests
+# In Maven 3.3 the shading of flink-dist doesn't work properly in one run, so we need to run mvn for flink-dist again.
+cd flink-dist
+mvn clean install -Pinclude-kinesis -DskipTests
+{% endhighlight %}
+
+
+The streaming connectors are not part of the binary distribution. See how to link with them for cluster
+execution [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+
+### Using the Amazon Kinesis Streams Service
+Follow the instructions from the [Amazon Kinesis Streams Developer Guide](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html)
+to setup Kinesis streams. Make sure to create the appropriate IAM policy and user to read / write to the Kinesis streams.
+
+### Kinesis Consumer
+
+The `FlinkKinesisConsumer` is an exactly-once parallel streaming data source that subscribes to multiple AWS Kinesis
+streams within the same AWS service region, and can handle resharding of streams. Each subtask of the consumer is
+responsible for fetching data records from multiple Kinesis shards. The number of shards fetched by each subtask will
+change as shards are closed and created by Kinesis.
+
+Before consuming data from Kinesis streams, make sure that all streams are created with the status "ACTIVE" in the AWS dashboard.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Properties consumerConfig = new Properties();
+consumerConfig.put(ConsumerConfigConstants.AWS_REGION, "us-east-1");
+consumerConfig.put(ConsumerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
+consumerConfig.put(ConsumerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
+consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
+
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+DataStream<String> kinesis = env.addSource(new FlinkKinesisConsumer<>(
+    "kinesis_stream_name", new SimpleStringSchema(), consumerConfig));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val consumerConfig = new Properties();
+consumerConfig.put(ConsumerConfigConstants.AWS_REGION, "us-east-1");
+consumerConfig.put(ConsumerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
+consumerConfig.put(ConsumerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
+consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
+
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+
+val kinesis = env.addSource(new FlinkKinesisConsumer[String](
+    "kinesis_stream_name", new SimpleStringSchema, consumerConfig))
+{% endhighlight %}
+</div>
+</div>
+
+The above is a simple example of using the consumer. Configuration for the consumer is supplied with a `java.util.Properties`
+instance, the configuration keys for which can be found in `ConsumerConfigConstants`. The example
+demonstrates consuming a single Kinesis stream in the AWS region "us-east-1". The AWS credentials are supplied using the basic method in which
+the AWS access key ID and secret access key are directly supplied in the configuration (other options are setting
+`ConsumerConfigConstants.AWS_CREDENTIALS_PROVIDER` to `ENV_VAR`, `SYS_PROP`, `PROFILE`, or `AUTO`). Also, data is consumed
+from the newest position in the Kinesis stream (the other option is setting `ConsumerConfigConstants.STREAM_INITIAL_POSITION`
+to `TRIM_HORIZON`, which lets the consumer start reading the Kinesis stream from the earliest record possible).
+
+Other optional configuration keys for the consumer can be found in `ConsumerConfigConstants`.
+
+#### Fault Tolerance for Exactly-Once User-Defined State Update Semantics
+
+With Flink's checkpointing enabled, the Flink Kinesis Consumer will consume records from shards in Kinesis streams and
+periodically checkpoint each shard's progress. In case of a job failure, Flink will restore the streaming program to the
+state of the latest complete checkpoint and re-consume the records from Kinesis shards, starting from the progress that
+was stored in the checkpoint.
+
+The checkpoint interval therefore defines how far back the program may have to go, at most, in case of a failure.
+
+To use fault tolerant Kinesis Consumers, checkpointing of the topology needs to be enabled at the execution environment:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.enableCheckpointing(5000); // checkpoint every 5000 msecs
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+env.enableCheckpointing(5000) // checkpoint every 5000 msecs
+{% endhighlight %}
+</div>
+</div>
+
+Also note that Flink can only restart the topology if enough processing slots are available.
+Therefore, if the topology fails due to loss of a TaskManager, there must still be enough slots available afterwards.
+Flink on YARN supports automatic restart of lost YARN containers.
+
+#### Event Time for Consumed Records
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
+{% endhighlight %}
+</div>
+</div>
+
+If streaming topologies choose to use the [event time notion]({{site.baseurl}}/apis/streaming/event_time.html) for record
+timestamps, an *approximate arrival timestamp* will be used by default. This timestamp is attached to records by Kinesis once they
+have been successfully received and stored by the stream. Note that this timestamp is typically referred to as a Kinesis server-side
+timestamp, and there are no guarantees about the accuracy or order correctness (i.e., the timestamps may not always be
+ascending).
+
+Users can choose to override this default with a custom timestamp, as described [here]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html),
+or use one from the [predefined ones]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html). After doing so,
+it can be passed to the consumer in the following way:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<String> kinesis = env.addSource(new FlinkKinesisConsumer<>(
+    "kinesis_stream_name", new SimpleStringSchema(), kinesisConsumerConfig));
+kinesis = kinesis.assignTimestampsAndWatermarks(new CustomTimestampAssigner());
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val kinesis = env.addSource(new FlinkKinesisConsumer[String](
+    "kinesis_stream_name", new SimpleStringSchema, kinesisConsumerConfig))
+val kinesisWithTimestamps = kinesis.assignTimestampsAndWatermarks(new CustomTimestampAssigner)
+{% endhighlight %}
+</div>
+</div>
+
+#### Threading Model
+
+The Flink Kinesis Consumer uses multiple threads for shard discovery and data consumption.
+
+For shard discovery, each parallel consumer subtask will have a single thread that constantly queries Kinesis for shard
+information even if the subtask initially did not have shards to read from when the consumer was started. In other words, if
+the consumer is run with a parallelism of 10, there will be a total of 10 threads constantly querying Kinesis regardless
+of the total amount of shards in the subscribed streams.
+
+For data consumption, a single thread will be created to consume each discovered shard. A thread terminates when the
+shard it is responsible for consuming is closed as a result of stream resharding. In other words, there will always be
+one thread per open shard.
+
+#### Internally Used Kinesis APIs
+
+The Flink Kinesis Consumer uses the [AWS Java SDK](http://aws.amazon.com/sdk-for-java/) internally to call Kinesis APIs
+for shard discovery and data consumption. Due to Amazon's [service limits for Kinesis Streams](http://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html)
+on the APIs, the consumer will be competing with other non-Flink consuming applications that the user may be running.
+Below is a list of APIs called by the consumer, with a description of how the consumer uses each API, as well as information
+on how to deal with any errors or warnings that the Flink Kinesis Consumer may have due to these service limits.
+
+- *[DescribeStream](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_DescribeStream.html)*: this is constantly called
+by a single thread in each parallel consumer subtask to discover any new shards as a result of stream resharding. By default,
+the consumer performs the shard discovery at an interval of 10 seconds, and will retry indefinitely until it gets a result
+from Kinesis. If this interferes with other non-Flink consuming applications, users can slow down the rate at which the
+consumer calls this API by setting a value for `ConsumerConfigConstants.SHARD_DISCOVERY_INTERVAL_MILLIS` in the supplied
+configuration properties (see the sketch after this list). Note that this setting directly impacts
+the maximum delay of discovering a new shard and starting to consume it, as shards will not be discovered during the interval.
+
+- *[GetShardIterator](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetShardIterator.html)*: this is called
+only once when per shard consuming threads are started, and will retry if Kinesis complains that the transaction limit for the
+API has been exceeded, up to a default of 3 attempts. Note that since the rate limit for this API is per shard (not per stream),
+the consumer itself should not exceed the limit. Usually, if this happens, users can either try to slow down the rate at which any
+other non-Flink consuming applications call this API, or modify the retry behaviour of this API call in the consumer by
+setting keys prefixed by `ConsumerConfigConstants.SHARD_GETITERATOR_*` in the supplied configuration properties.
+
+- *[GetRecords](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetRecords.html)*: this is constantly called
+by per shard consuming threads to fetch records from Kinesis. When a shard has multiple concurrent consumers (when there
+are any other non-Flink consuming applications running), the per shard rate limit may be exceeded. By default, on each call
+of this API, the consumer will retry if Kinesis complains that the data size / transaction limit for the API has been exceeded,
+up to a default of 3 attempts. Users can either try to slow down other non-Flink consuming applications, or adjust the throughput
+of the consumer by setting the `ConsumerConfigConstants.SHARD_GETRECORDS_MAX` and
+`ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS` keys in the supplied configuration properties. Setting the former
+adjusts the maximum number of records each consuming thread tries to fetch from shards on each call (default is 100), while
+the latter modifies the sleep interval between each fetch (there will be no sleep by default). The retry behaviour of the
+consumer when calling this API can also be modified by using the other keys prefixed by `ConsumerConfigConstants.SHARD_GETRECORDS_*` (see the sketch below).
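+
+As a concrete illustration of the tuning keys mentioned above, the following sketch lengthens the shard discovery interval and throttles `GetRecords` calls (the values are example choices, not recommendations):
+
+{% highlight scala %}
+// Discover new shards every 30 seconds instead of the default 10 seconds
+consumerConfig.put(ConsumerConfigConstants.SHARD_DISCOVERY_INTERVAL_MILLIS, "30000")
+
+// Fetch at most 500 records per GetRecords call, sleeping 200 ms between calls
+consumerConfig.put(ConsumerConfigConstants.SHARD_GETRECORDS_MAX, "500")
+consumerConfig.put(ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS, "200")
+{% endhighlight %}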
+
+### Kinesis Producer
+
+The `FlinkKinesisProducer` is used for putting data from a Flink stream into a Kinesis stream. Note that the producer does not participate in
+Flink's checkpointing and does not provide exactly-once processing guarantees.
+Also, the Kinesis producer does not guarantee that records are written in order to the shards (See [here](https://github.com/awslabs/amazon-kinesis-producer/issues/23) and [here](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html#API_PutRecord_RequestSyntax) for more details).
+
+In case of a failure or a resharding, data will be written again to Kinesis, leading to duplicates. This behavior is usually called "at-least-once" semantics.
+
+To put data into a Kinesis stream, make sure the stream is marked as "ACTIVE" in the AWS dashboard.
+
+For the monitoring to work, the user accessing the stream needs access to the Amazon CloudWatch service.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Properties producerConfig = new Properties();
+producerConfig.put(ProducerConfigConstants.AWS_REGION, "us-east-1");
+producerConfig.put(ProducerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
+producerConfig.put(ProducerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
+
+FlinkKinesisProducer<String> kinesis = new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
+kinesis.setFailOnError(true);
+kinesis.setDefaultStream("kinesis_stream_name");
+kinesis.setDefaultPartition("0");
+
+DataStream<String> simpleStringStream = ...;
+simpleStringStream.addSink(kinesis);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val producerConfig = new Properties();
+producerConfig.put(ProducerConfigConstants.AWS_REGION, "us-east-1");
+producerConfig.put(ProducerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
+producerConfig.put(ProducerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
+
+val kinesis = new FlinkKinesisProducer[String](new SimpleStringSchema, producerConfig);
+kinesis.setFailOnError(true);
+kinesis.setDefaultStream("kinesis_stream_name");
+kinesis.setDefaultPartition("0");
+
+val simpleStringStream = ...;
+simpleStringStream.addSink(kinesis);
+{% endhighlight %}
+</div>
+</div>
+
+The above is a simple example of using the producer. Configuration for the producer, including the mandatory configuration values, is supplied with a `java.util.Properties`
+instance as described above for the consumer. The example demonstrates producing to a single Kinesis stream in the AWS region "us-east-1".
+
+Instead of a `SerializationSchema`, it also supports a `KinesisSerializationSchema`. The `KinesisSerializationSchema` allows sending the data to multiple streams. This is
+done using the `KinesisSerializationSchema.getTargetStream(T element)` method. Returning `null` there will instruct the producer to write the element to the default stream.
+Otherwise, the returned stream name is used.
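+
+A minimal sketch of such a schema (assuming `KinesisSerializationSchema` exposes a `serialize` method returning a `ByteBuffer` in addition to the `getTargetStream` method described above; the stream names are made up):
+
+{% highlight scala %}
+import java.nio.ByteBuffer
+import java.nio.charset.StandardCharsets
+
+val routingSchema = new KinesisSerializationSchema[String] {
+  override def serialize(element: String): ByteBuffer =
+    ByteBuffer.wrap(element.getBytes(StandardCharsets.UTF_8))
+
+  // Route error events to a dedicated stream; null selects the default stream
+  override def getTargetStream(element: String): String =
+    if (element.startsWith("ERROR")) "error_stream" else null
+}
+{% endhighlight %}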
+
+Other optional configuration keys for the producer can be found in `ProducerConfigConstants`.
+
+
+### Using Non-AWS Kinesis Endpoints for Testing
+
+It is sometimes desirable to have Flink operate as a consumer or producer against a non-AWS Kinesis endpoint such as
+[Kinesalite](https://github.com/mhart/kinesalite); this is especially useful when performing functional testing of a Flink
+application. The AWS endpoint that would normally be inferred by the AWS region set in the Flink configuration must be overridden via a configuration property.
+
+To override the AWS endpoint, taking the producer for example, set the `ProducerConfigConstants.AWS_ENDPOINT` property in the
+Flink configuration, in addition to the `ProducerConfigConstants.AWS_REGION` required by Flink. Although the region is
+required, it will not be used to determine the AWS endpoint URL.
+
+The following example shows how one might supply the `ProducerConfigConstants.AWS_ENDPOINT` configuration property:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Properties producerConfig = new Properties();
+producerConfig.put(ProducerConfigConstants.AWS_REGION, "us-east-1");
+producerConfig.put(ProducerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
+producerConfig.put(ProducerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
+producerConfig.put(ProducerConfigConstants.AWS_ENDPOINT, "http://localhost:4567");
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val producerConfig = new Properties();
+producerConfig.put(ProducerConfigConstants.AWS_REGION, "us-east-1");
+producerConfig.put(ProducerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
+producerConfig.put(ProducerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
+producerConfig.put(ProducerConfigConstants.AWS_ENDPOINT, "http://localhost:4567");
+{% endhighlight %}
+</div>
+</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/nifi.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/nifi.md b/docs/dev/connectors/nifi.md
new file mode 100644
index 0000000..924a80b
--- /dev/null
+++ b/docs/dev/connectors/nifi.md
@@ -0,0 +1,138 @@
+---
+title: "Apache NiFi Connector"
+nav-title: NiFi
+nav-parent_id: connectors
+nav-pos: 8
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This connector provides a Source and Sink that can read from and write to
+[Apache NiFi](https://nifi.apache.org/). To use this connector, add the
+following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-nifi{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+Note that the streaming connectors are currently not part of the binary
+distribution. See
+[here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
+for information about how to package the program with the libraries for
+cluster execution.
+
+#### Installing Apache NiFi
+
+Instructions for setting up an Apache NiFi cluster can be found
+[here](https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#how-to-install-and-start-nifi).
+
+#### Apache NiFi Source
+
+The connector provides a Source for reading data from Apache NiFi to Apache Flink.
+
+The class `NiFiSource` provides two constructors for reading data from NiFi.
+
+- `NiFiSource(SiteToSiteConfig config)` - Constructs a `NiFiSource` given the client's `SiteToSiteConfig` and a
+     default wait time of 1000 ms.
+
+- `NiFiSource(SiteToSiteConfig config, long waitTimeMs)` - Constructs a `NiFiSource` given the client's
+     SiteToSiteConfig and the specified wait time (in milliseconds).
+
+Example:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
+
+SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
+        .url("http://localhost:8080/nifi")
+        .portName("Data for Flink")
+        .requestBatchCount(5)
+        .buildConfig();
+
+SourceFunction<NiFiDataPacket> nifiSource = new NiFiSource(clientConfig);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment()
+
+val clientConfig: SiteToSiteClientConfig = new SiteToSiteClient.Builder()
+       .url("http://localhost:8080/nifi")
+       .portName("Data for Flink")
+       .requestBatchCount(5)
+       .buildConfig()
+
+val nifiSource = new NiFiSource(clientConfig)       
+{% endhighlight %}       
+</div>
+</div>
+
+Here data is read from the Apache NiFi Output Port called "Data for Flink" which is part of Apache NiFi
+Site-to-site protocol configuration.
+
+#### Apache NiFi Sink
+
+The connector provides a Sink for writing data from Apache Flink to Apache NiFi.
+
+The class `NiFiSink` provides a constructor for instantiating a `NiFiSink`.
+
+- `NiFiSink(SiteToSiteClientConfig, NiFiDataPacketBuilder<T>)` constructs a `NiFiSink` given the client's `SiteToSiteConfig` and a `NiFiDataPacketBuilder` that converts data from Flink to `NiFiDataPacket` to be ingested by NiFi.
+
+Example:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
+
+SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
+        .url("http://localhost:8080/nifi")
+        .portName("Data from Flink")
+        .requestBatchCount(5)
+        .buildConfig();
+
+SinkFunction<NiFiDataPacket> nifiSink = new NiFiSink<>(clientConfig, new NiFiDataPacketBuilder<T>() {...});
+
+streamExecEnv.addSink(nifiSink);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment()
+
+val clientConfig: SiteToSiteClientConfig = new SiteToSiteClient.Builder()
+       .url("http://localhost:8080/nifi")
+       .portName("Data from Flink")
+       .requestBatchCount(5)
+       .buildConfig()
+
+val nifiSink: NiFiSink[NiFiDataPacket] = new NiFiSink[NiFiDataPacket](clientConfig, new NiFiDataPacketBuilder[NiFiDataPacket]() {...})
+
+streamExecEnv.addSink(nifiSink)
+{% endhighlight %}       
+</div>
+</div>      
+
+More information about the [Apache NiFi](https://nifi.apache.org) Site-to-Site Protocol can be found [here](https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#site-to-site).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/rabbitmq.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/rabbitmq.md b/docs/dev/connectors/rabbitmq.md
new file mode 100644
index 0000000..02def40
--- /dev/null
+++ b/docs/dev/connectors/rabbitmq.md
@@ -0,0 +1,129 @@
+---
+title: "RabbitMQ Connector"
+nav-title: RabbitMQ
+nav-parent_id: connectors
+nav-pos: 7
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This connector provides access to data streams from [RabbitMQ](http://www.rabbitmq.com/). To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-rabbitmq{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+Note that the streaming connectors are currently not part of the binary distribution. See linking with them for cluster execution [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+
+#### Installing RabbitMQ
+Follow the instructions from the [RabbitMQ download page](http://www.rabbitmq.com/download.html). After the installation the server automatically starts, and the application connecting to RabbitMQ can be launched.
+
+#### RabbitMQ Source
+
+The `RMQSource` class provides an interface for receiving data from RabbitMQ.
+
+The following have to be provided to the `RMQSource` constructor, in order:
+
+- `RMQConnectionConfig`: The RabbitMQ connection configuration.
+- queueName: The RabbitMQ queue name.
+- usesCorrelationId: `true` when correlation ids should be used, `false` otherwise (default is `false`).
+- deserializationSchema: Deserialization schema to turn messages into Java objects.
+
+This source can be operated in three different modes:
+
+1. Exactly-once (when checkpointed) with RabbitMQ transactions and messages with
+    unique correlation IDs.
+2. At-least-once (when checkpointed) with RabbitMQ transactions but no deduplication mechanism
+    (correlation id is not set).
+3. No strong delivery guarantees (without checkpointing) with RabbitMQ auto-commit mode.
+
+Correlation ids are a RabbitMQ application feature. You have to set them in the message properties
+when injecting messages into RabbitMQ. If you set `usesCorrelationId` to true and do not supply
+unique correlation ids, the source will throw an exception (if the correlation id is null) or ignore
+messages with non-unique correlation ids. If you set `usesCorrelationId` to false, then you don't
+have to supply correlation ids.
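+
+For the exactly-once mode (1. above), checkpointing also has to be enabled on the execution environment before creating the source, e.g. (a minimal sketch):
+
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+env.enableCheckpointing(5000) // checkpoint every 5000 msecs
+{% endhighlight %}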
+
+Example:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
+    .setHost("localhost")
+    .setPort(5000)
+    .setUserName(..)
+    .setPassword(..)
+    .setVirtualHost("/")
+    .build();
+
+DataStream<String> streamWithoutCorrelationIds = env
+    .addSource(new RMQSource<String>(connectionConfig, "hello", new SimpleStringSchema()));
+streamWithoutCorrelationIds.print();
+
+DataStream<String> streamWithCorrelationIds = env
+    .addSource(new RMQSource<String>(connectionConfig, "hello", true, new SimpleStringSchema()));
+streamWithCorrelationIds.print();
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val connectionConfig = new RMQConnectionConfig.Builder()
+    .setHost("localhost")
+    .setPort(5000)
+    .setUserName(..)
+    .setPassword(..)
+    .setVirtualHost("/")
+    .build()
+
+val streamWithoutCorrelationIds = env
+    .addSource(new RMQSource[String](connectionConfig, "hello", new SimpleStringSchema))
+streamWithoutCorrelationIds.print()
+
+val streamWithCorrelationIds = env
+    .addSource(new RMQSource[String](connectionConfig, "hello", true, new SimpleStringSchema))
+streamWithCorrelationIds.print()
+{% endhighlight %}
+</div>
+</div>
+
+#### RabbitMQ Sink
+The `RMQSink` class provides an interface for sending data to RabbitMQ.
+
+The following have to be provided to the `RMQSink` constructor, in order:
+
+1. RMQConnectionConfig
+2. The queue name
+3. Serialization schema
+
+Example:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
+    .setHost("localhost")
+    .setPort(5000)
+    .setUserName(..)
+    .setPassword(..)
+    .setVirtualHost("/")
+    .build();
+stream.addSink(new RMQSink<String>(connectionConfig, "hello", new SimpleStringSchema()));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val connectionConfig = new RMQConnectionConfig.Builder()
+    .setHost("localhost")
+    .setPort(5000)
+    .setUserName(..)
+    .setPassword(..)
+    .setVirtualHost("/")
+    .build()
+stream.addSink(new RMQSink[String](connectionConfig, "hello", new SimpleStringSchema))
+{% endhighlight %}
+</div>
+</div>
+
+More about RabbitMQ can be found [here](http://www.rabbitmq.com/).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/redis.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/redis.md b/docs/dev/connectors/redis.md
new file mode 100644
index 0000000..a987b90
--- /dev/null
+++ b/docs/dev/connectors/redis.md
@@ -0,0 +1,174 @@
+---
+title: "Redis Connector"
+nav-title: Redis
+nav-parent_id: connectors
+nav-pos: 8
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This connector provides a Sink that can write to
+[Redis](http://redis.io/) and can also publish data to [Redis PubSub](http://redis.io/topics/pubsub). To use this connector, add the
+following dependency to your project:
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-redis{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+Version Compatibility: This module is compatible with Redis 2.8.5.
+
+Note that the streaming connectors are currently not part of the binary distribution. You need to link them for cluster execution [explicitly]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+
+#### Installing Redis
+Follow the instructions from the [Redis download page](http://redis.io/download).
+
+#### Redis Sink
+The `RedisSink` class provides an interface for sending data to Redis.
+The sink can use three different methods for communicating with different types of Redis environments:
+
+1. Single Redis Server
+2. Redis Cluster
+3. Redis Sentinel
+
+This code shows how to create a sink that communicates with a single Redis server:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+public static class RedisExampleMapper implements RedisMapper<Tuple2<String, String>> {
+
+    @Override
+    public RedisCommandDescription getCommandDescription() {
+        return new RedisCommandDescription(RedisCommand.HSET, "HASH_NAME");
+    }
+
+    @Override
+    public String getKeyFromData(Tuple2<String, String> data) {
+        return data.f0;
+    }
+
+    @Override
+    public String getValueFromData(Tuple2<String, String> data) {
+        return data.f1;
+    }
+}
+FlinkJedisPoolConfig conf = new FlinkJedisPoolConfig.Builder().setHost("127.0.0.1").build();
+
+DataStream<String> stream = ...;
+stream.addSink(new RedisSink<Tuple2<String, String>>(conf, new RedisExampleMapper()));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+class RedisExampleMapper extends RedisMapper[(String, String)]{
+  override def getCommandDescription: RedisCommandDescription = {
+    new RedisCommandDescription(RedisCommand.HSET, "HASH_NAME")
+  }
+
+  override def getKeyFromData(data: (String, String)): String = data._1
+
+  override def getValueFromData(data: (String, String)): String = data._2
+}
+val conf = new FlinkJedisPoolConfig.Builder().setHost("127.0.0.1").build()
+stream.addSink(new RedisSink[(String, String)](conf, new RedisExampleMapper))
+{% endhighlight %}
+</div>
+</div>
+
+This example code does the same, but for Redis Cluster:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+FlinkJedisClusterConfig conf = new FlinkJedisClusterConfig.Builder()
+    .setNodes(new HashSet<InetSocketAddress>(Arrays.asList(new InetSocketAddress(5601)))).build();
+
+DataStream<String> stream = ...;
+stream.addSink(new RedisSink<Tuple2<String, String>>(conf, new RedisExampleMapper()));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val conf = new FlinkJedisClusterConfig.Builder().setNodes(...).build()
+stream.addSink(new RedisSink[(String, String)](conf, new RedisExampleMapper))
+{% endhighlight %}
+</div>
+</div>
+
+This example shows how to configure the sink when the Redis environment uses Sentinels:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+FlinkJedisSentinelConfig conf = new FlinkJedisSentinelConfig.Builder()
+    .setMasterName("master").setSentinels(...).build();
+
+DataStream<String> stream = ...;
+stream.addSink(new RedisSink<Tuple2<String, String>>(conf, new RedisExampleMapper()));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val conf = new FlinkJedisSentinelConfig.Builder().setMasterName("master").setSentinels(...).build()
+stream.addSink(new RedisSink[(String, String)](conf, new RedisExampleMapper))
+{% endhighlight %}
+</div>
+</div>
+
+This section describes all the available data types and the Redis command used for each.
+
+<table class="table table-bordered" style="width: 75%">
+    <thead>
+        <tr>
+          <th class="text-center" style="width: 20%">Data Type</th>
+          <th class="text-center" style="width: 25%">Redis Command [Sink]</th>
+          <th class="text-center" style="width: 25%">Redis Command [Source]</th>
+        </tr>
+      </thead>
+      <tbody>
+        <tr>
+            <td>HASH</td><td><a href="http://redis.io/commands/hset">HSET</a></td><td>--NA--</td>
+        </tr>
+        <tr>
+            <td>LIST</td><td>
+                <a href="http://redis.io/commands/rpush">RPUSH</a>,
+                <a href="http://redis.io/commands/lpush">LPUSH</a>
+            </td><td>--NA--</td>
+        </tr>
+        <tr>
+            <td>SET</td><td><a href="http://redis.io/commands/sadd">SADD</a></td><td>--NA--</td>
+        </tr>
+        <tr>
+            <td>PUBSUB</td><td><a href="http://redis.io/commands/publish">PUBLISH</a></td><td>--NA--</td>
+        </tr>
+        <tr>
+            <td>STRING</td><td><a href="http://redis.io/commands/set">SET</a></td><td>--NA--</td>
+        </tr>
+        <tr>
+            <td>HYPER_LOG_LOG</td><td><a href="http://redis.io/commands/pfadd">PFADD</a></td><td>--NA--</td>
+        </tr>
+        <tr>
+            <td>SORTED_SET</td><td><a href="http://redis.io/commands/zadd">ZADD</a></td><td>--NA--</td>
+        </tr>                
+      </tbody>
+</table>
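+
+For commands that need no additional key (unlike HSET above), the command description can be built from the command alone; a minimal sketch of a list-writing mapper (assuming a single-argument `RedisCommandDescription` constructor):
+
+{% highlight scala %}
+class RedisListMapper extends RedisMapper[(String, String)] {
+  override def getCommandDescription: RedisCommandDescription =
+    new RedisCommandDescription(RedisCommand.LPUSH)
+
+  override def getKeyFromData(data: (String, String)): String = data._1   // list name
+  override def getValueFromData(data: (String, String)): String = data._2 // element
+}
+{% endhighlight %}
+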
+More about Redis can be found [here](http://redis.io/).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/connectors/twitter.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/twitter.md b/docs/dev/connectors/twitter.md
new file mode 100644
index 0000000..e92e51d
--- /dev/null
+++ b/docs/dev/connectors/twitter.md
@@ -0,0 +1,85 @@
+---
+title: "Twitter Connector"
+nav-title: Twitter
+nav-parent_id: connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The Twitter Streaming API provides access to the stream of tweets made available by Twitter.
+Flink Streaming comes with a built-in `TwitterSource` class for establishing a connection to this stream.
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-twitter{{ site.scala_version_suffix }}</artifactId>
+  <version>{{ site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+Note that the streaming connectors are currently not part of the binary distribution.
+See how to link with them for cluster execution [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+
+#### Authentication
+In order to connect to the Twitter stream, the user has to register their application and acquire the credentials needed for authentication. The process is described below.
+
+#### Acquiring the authentication information
+First of all, a Twitter account is needed. Sign up for free at [twitter.com/signup](https://twitter.com/signup),
+then sign in at Twitter's [Application Management](https://apps.twitter.com/) and register the application by
+clicking on the "Create New App" button. Fill out the form about your application and accept the Terms and Conditions.
+After selecting the application, the API key and API secret (called `twitter-source.consumerKey` and `twitter-source.consumerSecret` in `TwitterSource` respectively) are located on the "API Keys" tab.
+The necessary OAuth Access Token data (`twitter-source.token` and `twitter-source.tokenSecret` in `TwitterSource`) can be generated and acquired on the "Keys and Access Tokens" tab.
+Remember to keep these pieces of information secret and do not push them to public repositories.
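+
+One way to keep these credentials out of source control is to load them from a local
+file at startup instead of hard-coding them. A minimal sketch, assuming a
+`twitter.properties` file that contains the four `twitter-source.*` keys:
+
+{% highlight java %}
+Properties props = new Properties();
+// twitter.properties holds twitter-source.consumerKey, twitter-source.consumerSecret,
+// twitter-source.token and twitter-source.tokenSecret
+try (InputStream in = new FileInputStream("twitter.properties")) {
+    props.load(in);
+}
+{% endhighlight %}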
+
+
+
+#### Usage
+In contrast to other connectors, the `TwitterSource` does not depend on any additional services. For example, the following code should run gracefully:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Properties props = new Properties();
+props.setProperty(TwitterSource.CONSUMER_KEY, "");
+props.setProperty(TwitterSource.CONSUMER_SECRET, "");
+props.setProperty(TwitterSource.TOKEN, "");
+props.setProperty(TwitterSource.TOKEN_SECRET, "");
+DataStream<String> streamSource = env.addSource(new TwitterSource(props));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val props = new Properties()
+props.setProperty(TwitterSource.CONSUMER_KEY, "")
+props.setProperty(TwitterSource.CONSUMER_SECRET, "")
+props.setProperty(TwitterSource.TOKEN, "")
+props.setProperty(TwitterSource.TOKEN_SECRET, "")
+val streamSource = env.addSource(new TwitterSource(props))
+{% endhighlight %}
+</div>
+</div>
+
+The `TwitterSource` emits strings containing a JSON object, representing a Tweet.
+
+The `TwitterExample` class in the `flink-examples-streaming` package shows a full example of how to use the `TwitterSource`.
+
+By default, the `TwitterSource` uses the `StatusesSampleEndpoint`. This endpoint returns a random sample of Tweets.
+There is a `TwitterSource.EndpointInitializer` interface allowing users to provide a custom endpoint.
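+
+For example, a filtering endpoint could be wired in like this (a sketch assuming
+`TwitterSource#setCustomEndpointInitializer` and the `StatusesFilterEndpoint` of the
+underlying `hbc` client; the tracked terms are illustrative):
+
+{% highlight java %}
+public static class FilterEndpoint implements TwitterSource.EndpointInitializer, Serializable {
+    @Override
+    public StreamingEndpoint createEndpoint() {
+        StatusesFilterEndpoint endpoint = new StatusesFilterEndpoint();
+        endpoint.trackTerms(Arrays.asList("flink", "streaming"));
+        return endpoint;
+    }
+}
+
+TwitterSource source = new TwitterSource(props);
+source.setCustomEndpointInitializer(new FilterEndpoint());
+DataStream<String> streamSource = env.addSource(source);
+{% endhighlight %}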


[63/89] [abbrv] flink git commit: [FLINK-4417] [checkpoints] Checkpoints are subsumed by CheckpointID, not by timestamp

Posted by se...@apache.org.
[FLINK-4417] [checkpoints] Checkpoints are subsumed by CheckpointID, not by timestamp

This closes #2407


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/4e9d1775
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/4e9d1775
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/4e9d1775

Branch: refs/heads/flip-6
Commit: 4e9d1775b5514c87981c78d55323cc2b17361867
Parents: 4da40bc
Author: Ramkrishna <ra...@intel.com>
Authored: Tue Aug 23 21:53:31 2016 +0530
Committer: Stephan Ewen <se...@apache.org>
Committed: Wed Aug 24 19:56:17 2016 +0200

----------------------------------------------------------------------
 .../flink/runtime/checkpoint/CheckpointCoordinator.java       | 7 ++++---
 .../org/apache/flink/runtime/jobmanager/JobManager.scala      | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)
----------------------------------------------------------------------
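
For context, checkpoint IDs are assigned monotonically, while trigger timestamps
carry no ordering guarantee once checkpoints are queued or clocks drift; a pending
checkpoint with a strictly lower ID than a completed one is therefore known to be
older and can safely be aborted. A minimal standalone sketch of that rule
(illustrative names, not Flink code):

import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

public class SubsumeById {
    public static void main(String[] args) {
        // pending checkpoints keyed by their monotonically increasing ID
        TreeMap<Long, String> pending = new TreeMap<>();
        pending.put(1L, "pending-1");
        pending.put(3L, "pending-3");

        long completedId = 2L; // checkpoint 2 was just acknowledged
        Iterator<Map.Entry<Long, String>> it = pending.entrySet().iterator();
        while (it.hasNext()) {
            // a strictly smaller ID means a strictly older checkpoint: subsumed
            if (it.next().getKey() < completedId) {
                it.remove();
            }
        }
        System.out.println(pending); // prints {3=pending-3}
    }
}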


http://git-wip-us.apache.org/repos/asf/flink/blob/4e9d1775/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
index 2c0e63b..b710324 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
@@ -671,7 +671,7 @@ public class CheckpointCoordinator {
 						pendingCheckpoints.remove(checkpointId);
 						rememberRecentCheckpointId(checkpointId);
 
-						dropSubsumedCheckpoints(completed.getTimestamp());
+						dropSubsumedCheckpoints(completed.getCheckpointID());
 
 						triggerQueuedRequests();
 					}
@@ -726,12 +726,13 @@ public class CheckpointCoordinator {
 		recentPendingCheckpoints.addLast(id);
 	}
 
-	private void dropSubsumedCheckpoints(long timestamp) throws Exception {
+	private void dropSubsumedCheckpoints(long checkpointId) throws Exception {
 		Iterator<Map.Entry<Long, PendingCheckpoint>> entries = pendingCheckpoints.entrySet().iterator();
 
 		while (entries.hasNext()) {
 			PendingCheckpoint p = entries.next().getValue();
-			if (p.getCheckpointTimestamp() <= timestamp && p.canBeSubsumed()) {
+			// remove all pending checkpoints whose checkpoint ID is lower than the completed checkpoint's
+			if (p.getCheckpointId() < checkpointId && p.canBeSubsumed()) {
 				rememberRecentCheckpointId(p.getCheckpointId());
 				p.abortSubsumed();
 				entries.remove();

http://git-wip-us.apache.org/repos/asf/flink/blob/4e9d1775/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
index d172a2b..34fed3f 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
@@ -1418,7 +1418,7 @@ class JobManager(
             if (checkpointCoordinator != null) {
               future {
                 try {
-                  if (checkpointCoordinator.receiveDeclineMessage(declineMessage)) {
+                  if (!checkpointCoordinator.receiveDeclineMessage(declineMessage)) {
                     log.info("Received message for non-existing checkpoint " +
                       declineMessage.getCheckpointId)
                   }


[36/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/session-windows.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/session-windows.svg b/docs/apis/streaming/session-windows.svg
deleted file mode 100644
index 92785c7..0000000
--- a/docs/apis/streaming/session-windows.svg
+++ /dev/null
@@ -1,22 +0,0 @@
-<?xml version="1.0" standalone="yes"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg version="1.1" viewBox="0.0 0.0 800.0 600.0" ...> [one long line of SVG path and glyph markup for the deleted session-windows diagram, elided] </svg>
 5q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.
 90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm16.07814 0l-1.140625 0l0 -7.28125q-0.421875 0.390625 -1.09375 0.796875q-0.65625 0.390625 -1.1875 0.578125l0 -1.109375q0.953125 -0.4375 1.671875 -1.078125q0.71875 -0.640625 1.015625 -1.25l0.734375 0l0 9.34375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m262.35532 374.24408l0 46.015778" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-li
 nejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m262.35532 374.24408l0 46.015778" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m336.75394 372.73227l0 46.015747" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m336.75394 372.73227l0 46.015747" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m262.2382 344.75067l82.01575 0l0 25.984222l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m273.33194 366.55066l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390778 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1
 .140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.76
 5625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm17.78125 -1.09375l0 1.09375l-6.15625 0q-0.015625 -0.40625 0.140625 -0.7968
 75q0.234375 -0.625 0.75 -1.234375q0.515625 -0.609375 1.5 -1.40625q1.515625 -1.25 2.046875 -1.96875q0.53125 -0.734375 0.53125 -1.375q0 -0.6875 -0.484375 -1.140625q-0.484375 -0.46875 -1.265625 -0.46875q-0.828125 0 -1.328125 0.5q-0.484375 0.484375 -0.5 1.359375l-1.171875 -0.125q0.125 -1.3125 0.90625 -2.0q0.78125 -0.6875 2.109375 -0.6875q1.34375 0 2.125 0.75q0.78125 0.734375 0.78125 1.828125q0 0.5625 -0.234375 1.109375q-0.21875 0.53125 -0.75 1.140625q-0.53125 0.59375 -1.765625 1.625q-1.03125 0.859375 -1.328125 1.171875q-0.28125 0.3125 -0.46875 0.625l4.5625 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m485.73984 372.73227l0 46.015747" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m485.73984 372.73227l0 46.015747" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m580.228 372.73227l0 46.015747" fill-rule="nonzero"></path><path stroke="#000000"
  stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" stroke-dasharray="4.0,3.0" d="m580.228 372.73227l0 46.015747" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m497.36713 344.75067l82.01575 0l0 25.984222l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m508.46088 366.55066l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390747 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.9611206 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q
 0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.
 40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm11.78125 -2.453125l1.140625 -0.15625q0.203125 0.96875 0.671875 1.40625q0.46875 0.421875 1.15625 0.421875q0.796875 0 1.34375 -0.546875q0.5625 -0.5625 0.5625 -1.390625q0 -0.796875 -0.515625 -1.296875q-0.5 -0.515625 -1.296875 -0.515625q-0.328125 0 -0.8125 0.125l0.125 -1.0q0.125 0.015625 0.1875 0.015625q0.734375 0 1.3125 -0.375q0
 .59375 -0.390625 0.59375 -1.1875q0 -0.625 -0.4375 -1.03125q-0.421875 -0.421875 -1.09375 -0.421875q-0.671875 0 -1.109375 0.421875q-0.4375 0.421875 -0.578125 1.25l-1.140625 -0.203125q0.21875 -1.140625 0.953125 -1.765625q0.75 -0.640625 1.84375 -0.640625q0.765625 0 1.40625 0.328125q0.640625 0.328125 0.984375 0.890625q0.34375 0.5625 0.34375 1.203125q0 0.59375 -0.328125 1.09375q-0.328125 0.5 -0.953125 0.78125q0.8125 0.203125 1.265625 0.796875q0.46875 0.59375 0.46875 1.5q0 1.21875 -0.890625 2.078125q-0.890625 0.84375 -2.25 0.84375q-1.21875 0 -2.03125 -0.734375q-0.8125 -0.734375 -0.921875 -1.890625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m391.66928 510.63254l55.84253 -213.13385" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m391.66928 510.63254l54.321808 -207.32974" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m447.5889 303.7
 214l-0.44760132 -4.808563l-2.7479858 3.9713135z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m327.8425 510.63254l127.653564 0l0 42.992157l-127.653564 0z" fill-rule="nonzero"></path><path fill="#000000" d="m337.42062 534.61505l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0
 .4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.64062
 5 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.328
 125 0l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875
  0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.504181 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -
 0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm15.262146 0.8125l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90
 625q-0.78125 0.890625 -0.78125 2.671875zm15.735077 3.890625q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.5
 46875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0788574 8.71875l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625z" fill-rule="nonzero"></path></g></svg>


[25/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/tasks_chains.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/tasks_chains.svg b/docs/concepts/fig/tasks_chains.svg
deleted file mode 100644
index 581112a..0000000
--- a/docs/concepts/fig/tasks_chains.svg
+++ /dev/null
@@ -1,463 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="575.61621"
-   height="396.65625"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(-87.165908,-334.03316)"
-     id="layer1">
-    <g
-       transform="translate(1.4809116,315.50191)"
-       id="g2989">
-      <path
-         d="m 219.67348,58.533332 50.16874,0 0,-4.219801 8.43961,8.439602 -8.43961,8.439603 0,-4.219801 -50.16874,0 z"
-         id="path2991"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 285.93373,62.678115 c 0,-17.648147 14.34733,-31.957962 32.05174,-31.957962 17.68565,0 32.03298,14.309815 32.03298,31.957962 0,17.648146 -14.34733,31.957961 -32.03298,31.957961 -17.70441,0 -32.05174,-14.309815 -32.05174,-31.957961"
-         id="path2993"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="295.73941"
-         y="54.668739"
-         id="text2995"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-      <text
-         x="326.64713"
-         y="54.668739"
-         id="text2997"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-      <text
-         x="292.28857"
-         y="66.67173"
-         id="text2999"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-      <text
-         x="299.79044"
-         y="78.674713"
-         id="text3001"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
-      <path
-         d="m 357.67035,58.533332 50.16875,0 0,-4.219801 8.43961,8.439602 -8.43961,8.439603 0,-4.219801 -50.16875,0 z"
-         id="path3003"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 423.94937,62.678115 c 0,-17.648147 14.34732,-31.957962 32.03298,-31.957962 17.70441,0 32.03298,14.309815 32.03298,31.957962 0,17.648146 -14.32857,31.957961 -32.03298,31.957961 -17.68566,0 -32.03298,-14.309815 -32.03298,-31.957961"
-         id="path3005"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="444.97049"
-         y="66.67173"
-         id="text3007"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
-      <text
-         x="134.60997"
-         y="304.1564"
-         id="text3009"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Subtask</text>
-      <text
-         x="128.00832"
-         y="316.15939"
-         id="text3011"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(= thread)</text>
-      <path
-         d="m 154.21029,293.41685 -0.8252,-10.72768 1.24718,-0.0938 0.82521,10.7183 -1.24719,0.10315 z m -2.59752,-9.33045 2.10052,-5.18567 2.87885,4.8012 -4.97937,0.38447 z"
-         id="path3013"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 98.387011,241.5508 c 0,-17.69503 14.309819,-32.04236 31.967339,-32.04236 17.64815,0 31.95796,14.34733 31.95796,32.04236 0,17.69503 -14.30981,32.04236 -31.95796,32.04236 -17.65752,0 -31.967339,-14.34733 -31.967339,-32.04236"
-         id="path3015"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 98.387011,241.5508 c 0,-17.69503 14.309819,-32.04236 31.967339,-32.04236 17.64815,0 31.95796,14.34733 31.95796,32.04236 0,17.69503 -14.30981,32.04236 -31.95796,32.04236 -17.65752,0 -31.967339,-14.34733 -31.967339,-32.04236"
-         id="path3017"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="112.93025"
-         y="239.48763"
-         id="text3019"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="122.68268"
-         y="251.49062"
-         id="text3021"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 147.93685,241.5508 c 0,-17.69503 14.34733,-32.04236 32.03298,-32.04236 17.69504,0 32.04236,14.34733 32.04236,32.04236 0,17.69503 -14.34732,32.04236 -32.04236,32.04236 -17.68565,0 -32.03298,-14.34733 -32.03298,-32.04236"
-         id="path3023"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 147.93685,241.5508 c 0,-17.69503 14.34733,-32.04236 32.03298,-32.04236 17.69504,0 32.04236,14.34733 32.04236,32.04236 0,17.69503 -14.34732,32.04236 -32.04236,32.04236 -17.68565,0 -32.03298,-14.34733 -32.03298,-32.04236"
-         id="path3025"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="164.556"
-         y="239.48763"
-         id="text3027"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
-      <text
-         x="172.35796"
-         y="251.49062"
-         id="text3029"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 219.67348,237.331 50.16874,0 0,-4.2198 8.43961,8.4396 -8.43961,8.4396 0,-4.2198 -50.16874,0 z"
-         id="path3031"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 285.93373,241.56018 c 0,-17.70441 14.34733,-32.05174 32.05174,-32.05174 17.68565,0 32.03298,14.34733 32.03298,32.05174 0,17.68565 -14.34733,32.03298 -32.03298,32.03298 -17.70441,0 -32.05174,-14.34733 -32.05174,-32.03298"
-         id="path3033"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="295.73941"
-         y="227.48463"
-         id="text3035"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-      <text
-         x="326.64713"
-         y="227.48463"
-         id="text3037"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-      <text
-         x="292.28857"
-         y="239.48763"
-         id="text3039"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-      <text
-         x="299.79044"
-         y="251.49062"
-         id="text3041"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
-      <text
-         x="310.29306"
-         y="263.49359"
-         id="text3043"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 361.70261,245.31111 45.1425,22.03674 1.85671,-3.78844 3.88222,11.29031 -11.29032,3.88222 1.85671,-3.78844 -45.16125,-22.03674 z"
-         id="path3045"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 418.79183,298.04925 c 0,-17.64815 14.34733,-31.95796 32.03298,-31.95796 17.70441,0 32.03298,14.30981 32.03298,31.95796 0,17.64815 -14.32857,31.95796 -32.03298,31.95796 -17.68565,0 -32.03298,-14.30981 -32.03298,-31.95796"
-         id="path3047"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="439.83328"
-         y="296.00317"
-         id="text3049"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
-      <text
-         x="443.13412"
-         y="308.00616"
-         id="text3051"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 98.387011,370.32976 c 0,-17.68566 14.309819,-32.03298 31.976719,-32.03298 17.64814,0 31.95796,14.34732 31.95796,32.03298 0,17.70441 -14.30982,32.05173 -31.95796,32.05173 -17.6669,0 -31.976719,-14.34732 -31.976719,-32.05173"
-         id="path3053"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 98.387011,370.32976 c 0,-17.68566 14.309819,-32.03298 31.976719,-32.03298 17.64814,0 31.95796,14.34732 31.95796,32.03298 0,17.70441 -14.30982,32.05173 -31.95796,32.05173 -17.6669,0 -31.976719,-14.34732 -31.976719,-32.05173"
-         id="path3055"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="112.93025"
-         y="368.29453"
-         id="text3057"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="122.68268"
-         y="380.29749"
-         id="text3059"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-      <path
-         d="m 147.93685,370.32976 c 0,-17.68566 14.34733,-32.03298 32.03298,-32.03298 17.70442,0 32.05174,14.34732 32.05174,32.03298 0,17.70441 -14.34732,32.05173 -32.05174,32.05173 -17.68565,0 -32.03298,-14.34732 -32.03298,-32.05173"
-         id="path3061"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 147.93685,370.32976 c 0,-17.68566 14.34733,-32.03298 32.03298,-32.03298 17.70442,0 32.05174,14.34732 32.05174,32.03298 0,17.70441 -14.34732,32.05173 -32.05174,32.05173 -17.68565,0 -32.03298,-14.34732 -32.03298,-32.05173"
-         id="path3063"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="164.556"
-         y="368.29453"
-         id="text3065"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
-      <text
-         x="172.35796"
-         y="380.29749"
-         id="text3067"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-      <path
-         d="m 219.67348,366.10996 50.16874,0 0,-4.2198 8.43961,8.4396 -8.43961,8.4396 0,-4.2198 -50.16874,0 z"
-         id="path3069"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 285.93373,370.32976 c 0,-17.68566 14.34733,-32.03298 32.05174,-32.03298 17.68565,0 32.03298,14.34732 32.03298,32.03298 0,17.70441 -14.34733,32.05173 -32.03298,32.05173 -17.70441,0 -32.05174,-14.34732 -32.05174,-32.05173"
-         id="path3071"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="295.73941"
-         y="356.29153"
-         id="text3073"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-      <text
-         x="326.64713"
-         y="356.29153"
-         id="text3075"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-      <text
-         x="292.28857"
-         y="368.29453"
-         id="text3077"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-      <text
-         x="299.79044"
-         y="380.29749"
-         id="text3079"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
-      <text
-         x="310.29306"
-         y="392.30048"
-         id="text3081"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-      <path
-         d="m 361.70261,366.54131 45.1425,-22.03674 1.85671,3.78845 3.88222,-11.29031 -11.29032,-3.88222 1.85671,3.78844 -45.16125,22.03674 z"
-         id="path3083"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 220.66747,351.51882 62.3968,-79.33226 3.31958,2.6069 -1.42536,-11.85295 -11.8342,1.42535 3.30082,2.6069 -62.39679,79.33226 z"
-         id="path3085"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 152.41922,319.56086 -1.6129,7.14553 1.21905,0.28132 1.61291,-7.14553 -1.21906,-0.28132 z m -3.16954,5.51387 1.35034,5.4201 3.54463,-4.31357 -4.89497,-1.10653 z"
-         id="path3087"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 515.52843,213.10934 c 4.87622,0 8.83345,0.65641 8.83345,1.48162 l 0,97.76811 c 0,0.8252 3.95724,1.48162 8.83345,1.48162 -4.87621,0 -8.83345,0.65641 -8.83345,1.46286 l 0,97.78686 c 0,0.82521 -3.95723,1.48162 -8.83345,1.48162"
-         id="path3089"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="548.92151"
-         y="311.33228"
-         id="text3091"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Streaming Dataflow</text>
-      <text
-         x="552.97247"
-         y="324.83566"
-         id="text3093"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(parallelized view)</text>
-      <path
-         d="m 210.76501,95.423772 c 0,4.876218 -0.65642,8.824078 -1.47224,8.824078 l -55.74827,0 c -0.80645,0 -1.47224,3.95723 -1.47224,8.83345 0,-4.87622 -0.65641,-8.83345 -1.46286,-8.83345 l -55.748268,0 c -0.815828,0 -1.472241,-3.94786 -1.472241,-8.824078"
-         id="path3095"
-         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="548.92151"
-         y="57.884895"
-         id="text3097"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Streaming Dataflow</text>
-      <text
-         x="554.77295"
-         y="71.38826"
-         id="text3099"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(condensed view)</text>
-      <path
-         d="m 93.548305,266.45701 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49438 0,-1.25656 1.247186,0 0,1.25656 -1.247186,0 z m 0,-2.50374 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49437 0,-1.25657 1.247186,0 0,1.25657 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49437 0,-1.25657 1.247186,0 0,1.25657 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49437 0,-1.25657 1.247186,0 0,1.25657 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49437 0,-1.25656 1.247186,0 0,1.25656 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.50375 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.49437 0,-1.25656 1.247186
 ,0 0,1.25656 -1.247186,0 z m 0,-2.50375 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.50375 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.49437 0,-1.25656 1.247186,0 0,1.25656 -1.247186,0 z m 0.0094,-2.53188 0.05626,-1.09715 0.02813,-0.21568 1.237808,0.18755 -0.02813,0.17817 0.0094,-0.0563 -0.05626,1.06901 -1.247185,-0.0656 z m 0.309452,-2.6069 0.271942,-1.04088 0.07502,-0.21568 1.172167,0.42198 -0.06564,0.18754 0.01875,-0.0563 -0.262565,1.02213 -1.209676,-0.31883 z m 0.815828,-2.48499 0.440735,-0.90023 0.159414,-0.26256 1.069017,0.64703 -0.150038,0.23444 0.03751,-0.0563 -0.431358,0.88147 -1.12528,-0.54388 z m 1.30345,-2.26932 0.543885,-0.72205 0.271943,-0.30008 0.928356,0.84396 -0.253188,0.27195 0.03751,-0.0469 -0.52513,0.7033 -1.003375,-0.75019 z m 1.734807,-1.95986 0.581395,-0.53451 0.42198,-0.30945 0.740809,1.00337 -0.393848,0.2907 0.04689,-0.0375 -0.553263,0.50638 -0.84396,-0.91898 z m 2.072391,-1.57539 0.590767,-0.36572 0.56264,-0.27194 0.54389,1.12528 -0.53451,0
 .25319 0.0563,-0.0281 -0.56264,0.34696 -0.656407,-1.05963 z m 2.363087,-1.14404 0.58139,-0.2063 0.6658,-0.17817 0.31882,1.20968 -0.64703,0.16879 0.0563,-0.0188 -0.55326,0.19693 -0.42198,-1.17217 z m 2.54126,-0.63766 0.59077,-0.0938 0.71268,-0.0375 0.0656,1.24719 -0.68455,0.0375 0.0563,-0.009 -0.55327,0.0844 -0.18754,-1.22843 z m 2.58814,-0.15941 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49437,0 1.25657,0 0,1.24718 -1.25657,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49437,0 1.25657,0 0,1.24718 -1.25657,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49437,0 1.25656,0 0,1.24718 -1.25656,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49437,0 1.25656,0 0,1.24718 -1.25656,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,
 -1.24718 z m 2.50375,0 1.24718,0 0,1.24718 -1.24718,0 0,-1.24718 z m 2.49437,0 1.25656,0 0,1.24718 -1.25656,0 0,-1.24718 z m 2.50375,0 1.24718,0 0,1.24718 -1.24718,0 0,-1.24718 z m 2.50375,0 1.24718,0 0,1.24718 -1.24718,0 0,-1.24718 z m 2.49437,0 1.25656,0 0,1.24718 -1.25656,0 0,-1.24718 z m 2.50375,0 1.24718,0 0,1.24718 -1.24718,0 0,-1.24718 z m 2.50375,0 1.24718,0 0,1.24718 -1.24718,0 0,-1.24718 z m 2.49437,0 1.25656,0 0,1.24718 -1.25656,0 0,-1.24718 z m 2.50375,0 1.24718,0 0,1.24718 -1.24718,0 0,-1.24718 z m 2.50374,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49438,0 1.25656,0 0,1.24718 -1.25656,0 0,-1.24718 z m 2.50375,0 1.24718,0 0,1.24718 -1.24718,0 0,-1.24718 z m 2.50374,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49438,0 1.25656,0 0,1.24718 -1.25656,0 0,-1.24718 z m 2.50374,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49437,0 1.25657,0 0,1.24718 -1.25657,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1
 .24719,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49437,0 1.25657,0 0,1.24718 -1.25657,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49437,0 1.25657,0 0,1.24718 -1.25657,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.50375,0 1.24719,0 0,1.24718 -1.24719,0 0,-1.24718 z m 2.49437,0 1.18155,0 0.10315,0 -0.0657,1.25656 -0.0844,-0.009 0.0281,0 -1.16279,0 0,-1.24718 z m 2.59752,0.075 1.20968,0.18754 0.0844,0.0188 -0.30946,1.21905 -0.0563,-0.0187 0.0563,0.009 -1.17216,-0.17817 0.18754,-1.2378 z m 2.56002,0.55326 1.09714,0.39385 0.13129,0.0656 -0.54389,1.12528 -0.10315,-0.0563 0.0656,0.0281 -1.06901,-0.38447 0.42198,-1.17217 z m 2.40059,1.04088 0.92836,0.56264 0.17817,0.13129 -0.75019,1.00337 -0.15003,-0.11253 0.0469,0.0281 -0.90022,-0.54389 0.64703,-1.06902 z m 2.13804,1.50975 0.72205,0.64704 0.22506,0.25319 -0.91898,0.84396 -0.21568,-0.23443 0.03
 75,0.0469 -0.69392,-0.63766 0.84396,-0.91898 z m 1.80045,1.89423 0.51575,0.68454 0.23443,0.38447 -1.06901,0.65642 -0.21568,-0.36572 0.0281,0.0469 -0.497,-0.66579 1.00338,-0.74081 z m 1.38784,2.21305 0.33758,0.7033 0.18755,0.50637 -1.18154,0.42198 -0.16879,-0.47824 0.0188,0.0563 -0.31883,-0.66579 1.12528,-0.54388 z m 0.9096,2.45686 0.18755,0.69392 0.0844,0.59077 -1.22844,0.18755 -0.0844,-0.56264 0.009,0.0656 -0.17817,-0.66579 1.20967,-0.30945 z m 0.41261,2.58814 0.0375,0.73144 0,0.54388 -1.24718,0 0,-0.53451 0,0.0375 -0.0375,-0.72205 1.24719,-0.0563 z m 0.0375,2.53189 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.50374 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49438 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50374 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25657 -1.24718,0 0,-1.25657
  1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25657 -1.24718,0 0,-1.25657 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25657 -1.24718,0 0,-1.25657 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.49437 0,0.52513 -0.0375,0.75957 -1.24719,-0.0656 0.0375,-0.74081 0,0.0281 0,-0.50638 1.24718,0 z m -0.17816,2.58814 -0.0844,0.55327 -0.18755,0.73143 -1.20967,-0.31883 0.17817,-0.69392 -0.009,0.0563 0.075,-0.51575 1.23781,0.18754 z m -0.6658,2.53189 -0.17817,0.47824 -0.34696,0.72205 -1.12528,-0.54388 0.32821,-0.69392 -0.0188,0.0656 0.15941,-0.45949 1.18154,0.43
 136 z m -1.17216,2.34433 -0.21568,0.35634 -0.53451,0.71268 -1.00337,-0.75019 0.51575,-0.68455 -0.0281,0.0469 0.19692,-0.33758 1.06902,0.65641 z m -1.60353,2.06301 -0.2063,0.22506 -0.74081,0.67517 -0.84396,-0.92836 0.71268,-0.65641 -0.0375,0.0375 0.18755,-0.19693 0.92835,0.84396 z m -1.98799,1.70668 -0.15004,0.11253 -0.95649,0.58139 -0.64703,-1.06901 0.92835,-0.56264 -0.0469,0.0281 0.13128,-0.0938 0.74081,1.00338 z m -2.2787,1.27532 -0.10315,0.0469 -1.13465,0.41261 -0.42198,-1.18155 1.10652,-0.39385 -0.0656,0.0188 0.075,-0.0281 0.54388,1.12528 z m -2.49437,0.78769 -0.0563,0.0188 -1.24719,0.18754 -0.18754,-1.2378 1.20967,-0.17817 -0.0563,0.009 0.0188,-0.009 0.31883,1.20967 z m -2.61627,0.28132 -0.0656,0 -1.20968,0 0,-1.24718 1.19092,0 -0.0281,0 0.0563,0 0.0563,1.24718 z m -2.53188,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.49438,0 -1.25656,0 0,-1.24718 1.25656,0 0,1.24718 z m -2.50374,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,
 1.24718 z m -2.49437,0 -1.25657,0 0,-1.24718 1.25657,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.49437,0 -1.25657,0 0,-1.24718 1.25657,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.49437,0 -1.25657,0 0,-1.24718 1.25657,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.49437,0 -1.25656,0 0,-1.24718 1.25656,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.50375,0 -1.24718,0 0,-1.24718 1.24718,0 0,1.24718 z m -2.49437,0 -1.25656,0 0,-1.24718 1.25656,0 0,1.24718 z m -2.50375,0 -1.24718,0 0,-1.24718 1.24718,0 0,1.24718 z m -2.50375,0 -1.24718,0 0,-1.24718 1.24718,0 0,1.24718 z m -2.49437,0 -1.25656,0 0,-1.24718 1.25656,0 0,1.24718 z m -2.50375,0 -1.24718,0 0,-1.24718 1.24718,0 0,1.24718 z m -2.50375,0 -1.24
 718,0 0,-1.24718 1.24718,0 0,1.24718 z m -2.49437,0 -1.25656,0 0,-1.24718 1.25656,0 0,1.24718 z m -2.50375,0 -1.24718,0 0,-1.24718 1.24718,0 0,1.24718 z m -2.50375,0 -1.24718,0 0,-1.24718 1.24718,0 0,1.24718 z m -2.49437,0 -1.25656,0 0,-1.24718 1.25656,0 0,1.24718 z m -2.50375,0 -1.24718,0 0,-1.24718 1.24718,0 0,1.24718 z m -2.50374,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.49438,0 -1.25656,0 0,-1.24718 1.25656,0 0,1.24718 z m -2.50374,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.49437,0 -1.25657,0 0,-1.24718 1.25657,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.49437,0 -1.25657,0 0,-1.24718 1.25657,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.50375,0 -1.24719,0 0,-1.24718 1.24719,0 0,1.24718 z m -2.53188,-0.0281 -0.67517,-0.0375 -0.62828,-0.0938 0.18755,-1.23781 0.59077,0.0938 -0.0563,-0.009 0.6
 4704,0.0281 -0.0656,1.25656 z m -2.58815,-0.39385 -0.63766,-0.16879 -0.60952,-0.21568 0.42198,-1.18154 0.58139,0.21568 -0.0563,-0.0188 0.6189,0.15942 -0.31883,1.20967 z m -2.45686,-0.90022 -0.53451,-0.25319 -0.609522,-0.38447 0.647032,-1.05964 0.59077,0.36572 -0.0563,-0.0375 0.50638,0.24381 -0.54389,1.12528 z m -2.222425,-1.36909 -0.393848,-0.2907 -0.609527,-0.56264 0.84396,-0.91898 0.581395,0.53451 -0.04689,-0.0375 0.365716,0.27194 -0.740809,1.00338 z m -1.912977,-1.7817 -0.253188,-0.28132 -0.56264,-0.75018 1.003375,-0.75019 0.543885,0.73143 -0.03751,-0.0563 0.225056,0.26256 -0.918979,0.84396 z m -1.519128,-2.12865 -0.14066,-0.23444 -0.45949,-0.93773 1.134658,-0.54388 0.440735,0.9096 -0.03751,-0.0469 0.121906,0.19692 -1.059639,0.65642 z m -1.059639,-2.4006 -0.06564,-0.17817 -0.28132,-1.07839 1.209677,-0.31883 0.271942,1.05964 -0.01875,-0.0563 0.05626,0.15004 -1.172167,0.42198 z m -0.56264,-2.56001 -0.02813,-0.16879 -0.05626,-1.13466 1.247185,-0.0656 0.05626,1.10652 -0.0094,-0.0656 
 0.02813,0.14066 -1.237808,0.18755 z"
-         id="path3101"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 220.72374,260.33361 62.41555,79.33226 3.30082,-2.6069 -1.4066,11.85295 -11.85295,-1.42535 3.31957,-2.6069 -62.39679,-79.33227 z"
-         id="path3103"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 94.167209,62.678115 c 0,-17.648147 14.309811,-31.957962 31.967341,-31.957962 17.64814,0 31.95796,14.309815 31.95796,31.957962 0,17.657524 -14.30982,31.957961 -31.95796,31.957961 -17.65753,0 -31.967341,-14.300437 -31.967341,-31.957961"
-         id="path3105"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 94.167209,62.678115 c 0,-17.648147 14.309811,-31.957962 31.967341,-31.957962 17.64814,0 31.95796,14.309815 31.95796,31.957962 0,17.657524 -14.30982,31.957961 -31.95796,31.957961 -17.65753,0 -31.967341,-14.300437 -31.967341,-31.957961"
-         id="path3107"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="108.70813"
-         y="66.67173"
-         id="text3109"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <path
-         d="m 147.93685,62.678115 c 0,-17.648147 14.34733,-31.957962 32.03298,-31.957962 17.69504,0 32.04236,14.309815 32.04236,31.957962 0,17.657524 -14.34732,31.957961 -32.04236,31.957961 -17.68565,0 -32.03298,-14.300437 -32.03298,-31.957961"
-         id="path3111"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 147.93685,62.678115 c 0,-17.648147 14.34733,-31.957962 32.03298,-31.957962 17.69504,0 32.04236,14.309815 32.04236,31.957962 0,17.657524 -14.34732,31.957961 -32.04236,31.957961 -17.68565,0 -32.03298,-14.300437 -32.03298,-31.957961"
-         id="path3113"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="164.556"
-         y="66.67173"
-         id="text3115"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
-      <!-- paths path3117, path3119, path3121, path3123, path3125, path3127:
-           filled dashed-outline connector arrows between the nodes of the
-           program-dataflow figure (coordinate data omitted) -->
-      <text
-         x="313.1387"
-         y="304.1564"
-         id="text3129"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Subtask</text>
-      <text
-         x="306.53705"
-         y="316.15939"
-         id="text3131"


<TRUNCATED>

[09/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/checkpointing.svg
----------------------------------------------------------------------
diff --git a/docs/fig/checkpointing.svg b/docs/fig/checkpointing.svg
new file mode 100644
index 0000000..a572d8e
--- /dev/null
+++ b/docs/fig/checkpointing.svg
@@ -0,0 +1,1731 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   version="1.1"
+   width="1034.4915"
+   height="358.78302"
+   id="svg2">
+  <defs
+     id="defs4" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     transform="translate(565.22264,-320.66515)"
+     id="layer1">
+    <g
+       transform="translate(-572.34764,307.29015)"
+       id="g3564">
+      <path
+         d="m 92.760609,13.681533 0,81.742239 132.220441,0 0,-81.742239 -132.220441,0 z"
+         id="path3566"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 92.760609,13.681533 132.220441,0 0,81.742239 -132.220441,0 z"
+         id="path3568"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="129.17928"
+         y="35.674438"
+         id="text3570"
+         xml:space="preserve"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Master</text>
+      <path
+         d="m 97.449277,53.694627 0,9.377336 52.363043,0 0,-9.377336 -52.363043,0 z"
+         id="path3572"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 97.449277,53.694627 52.363043,0 0,9.377336 -52.363043,0 z"
+         id="path3574"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="101.08463"
+         y="60.420757"
+         id="text3576"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source 1:</text>
+      <path
+         d="m 97.449277,63.071963 0,9.377336 52.363043,0 0,-9.377336 -52.363043,0 z"
+         id="path3578"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 97.449277,63.071963 52.363043,0 0,9.377336 -52.363043,0 z"
+         id="path3580"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="101.08463"
+         y="69.828026"
+         id="text3582"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source 2:</text>
+      <path
+         d="m 97.449277,72.289884 0,9.377336 52.363043,0 0,-9.377336 -52.363043,0 z"
+         id="path3584"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 97.449277,72.289884 52.363043,0 0,9.377336 -52.363043,0 z"
+         id="path3586"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="101.08463"
+         y="78.97686"
+         id="text3588"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source 3:</text>
+      <path
+         d="m 97.449277,81.826635 0,9.377336 52.363043,0 0,-9.377336 -52.363043,0 z"
+         id="path3590"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 97.449277,81.826635 52.363043,0 0,9.377336 -52.363043,0 z"
+         id="path3592"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="101.08463"
+         y="88.561928"
+         id="text3594"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source 4:</text>
+      <path
+         d="m 149.96236,53.694627 0,9.377336 52.36304,0 0,-9.377336 -52.36304,0 z"
+         id="path3596"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 149.96236,53.694627 52.36304,0 0,9.377336 -52.36304,0 z"
+         id="path3598"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="153.60519"
+         y="60.420757"
+         id="text3600"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">State 1:</text>
+      <path
+         d="m 149.96236,63.071963 0,9.377336 52.36304,0 0,-9.377336 -52.36304,0 z"
+         id="path3602"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 149.96236,63.071963 52.36304,0 0,9.377336 -52.36304,0 z"
+         id="path3604"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="153.60519"
+         y="69.828026"
+         id="text3606"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">State 2:</text>
+      <path
+         d="m 149.96236,72.289884 0,9.377336 52.36304,0 0,-9.377336 -52.36304,0 z"
+         id="path3608"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 149.96236,72.289884 52.36304,0 0,9.377336 -52.36304,0 z"
+         id="path3610"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="153.60519"
+         y="78.97686"
+         id="text3612"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink 1: (pending)</text>
+      <path
+         d="m 149.96236,81.826635 0,9.377336 52.36304,0 0,-9.377336 -52.36304,0 z"
+         id="path3614"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 149.96236,81.826635 52.36304,0 0,9.377336 -52.36304,0 z"
+         id="path3616"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="153.60519"
+         y="88.561928"
+         id="text3618"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink </text>
+      <text
+         x="166.20834"
+         y="88.561928"
+         id="text3620"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">2: </text>
+      <text
+         x="173.41013"
+         y="88.561928"
+         id="text3622"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(pending</text>
+      <text
+         x="195.7657"
+         y="88.561928"
+         id="text3624"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">)</text>
+      <text
+         x="99.956451"
+         y="50.442242"
+         id="text3626"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Checkpoint data</text>
+      <path
+         d="m 112.14356,245.7706 c 0,-6.73292 5.45761,-12.19053 12.19054,-12.19053 6.73293,0 12.19054,5.45761 12.19054,12.19053 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3628"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 112.14356,245.7706 c 0,-6.73292 5.45761,-12.19053 12.19054,-12.19053 6.73293,0 12.19054,5.45761 12.19054,12.19053 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3630"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 181.06698,192.94807 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3632"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 181.06698,192.94807 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3634"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 181.06698,245.7706 c 0,-6.73292 5.45761,-12.19053 12.19054,-12.19053 6.73293,0 12.19054,5.45761 12.19054,12.19053 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3636"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 181.06698,245.7706 c 0,-6.73292 5.45761,-12.19053 12.19054,-12.19053 6.73293,0 12.19054,5.45761 12.19054,12.19053 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3638"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 136.52464,245.14232 43.32329,0 0,1.25656 -43.32329,0 z m 36.13087,-4.28544 8.43023,4.91372 -8.43023,4.91373 c -0.29069,0.17817 -0.67517,0.075 -0.85333,-0.22506 -0.1688,-0.2907 -0.075,-0.67517 0.22505,-0.85334 l 7.50187,-4.36984 0,1.0784 -7.50187,-4.37922 c -0.30007,-0.16879 -0.39385,-0.55326 -0.22505,-0.85334 0.17816,-0.30007 0.56264,-0.40322 0.85333,-0.22505 z"
+         id="path3640"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 202.14723,162.8562 52.08173,27.06299 -0.57202,1.1159 -52.0911,-27.07237 z m 47.68376,19.94559 5.2138,8.25206 -9.75243,0.46886 c -0.33759,0.0188 -0.63766,-0.24381 -0.64704,-0.59077 -0.0188,-0.34696 0.24381,-0.63766 0.59077,-0.65641 l 8.67404,-0.42198 -0.497,0.95649 -4.64178,-7.34246 c -0.17817,-0.2907 -0.0938,-0.67517 0.19692,-0.86271 0.2907,-0.17817 0.67517,-0.0938 0.86272,0.19692 z"
+         id="path3642"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 112.14356,192.94807 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3644"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 112.14356,192.94807 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3646"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 136.52464,193.56697 43.32329,0 0,-1.24718 -43.32329,0 z m 36.13087,4.29482 8.43023,-4.91372 -8.43023,-4.91373 c -0.29069,-0.17817 -0.67517,-0.075 -0.85333,0.22506 -0.1688,0.2907 -0.075,0.67517 0.22505,0.85334 l 7.50187,4.36984 0,-1.0784 -7.50187,4.37922 c -0.30007,0.16879 -0.39385,0.55326 -0.22505,0.85334 0.17816,0.30007 0.56264,0.40322 0.85333,0.22505 z"
+         id="path3648"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 181.06698,154.73542 c 0,-6.69541 5.45761,-12.11551 12.19054,-12.11551 6.73293,0 12.19054,5.4201 12.19054,12.11551 0,6.68604 -5.45761,12.10614 -12.19054,12.10614 -6.73293,0 -12.19054,-5.4201 -12.19054,-12.10614"
+         id="path3650"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 181.06698,154.73542 c 0,-6.69541 5.45761,-12.11551 12.19054,-12.11551 6.73293,0 12.19054,5.4201 12.19054,12.11551 0,6.68604 -5.45761,12.10614 -12.19054,12.10614 -6.73293,0 -12.19054,-5.4201 -12.19054,-12.10614"
+         id="path3652"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 112.14356,154.73542 c 0,-6.69541 5.45761,-12.11551 12.19054,-12.11551 6.73293,0 12.19054,5.4201 12.19054,12.11551 0,6.68604 -5.45761,12.10614 -12.19054,12.10614 -6.73293,0 -12.19054,-5.4201 -12.19054,-12.10614"
+         id="path3654"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 112.14356,154.73542 c 0,-6.69541 5.45761,-12.11551 12.19054,-12.11551 6.73293,0 12.19054,5.4201 12.19054,12.11551 0,6.68604 -5.45761,12.10614 -12.19054,12.10614 -6.73293,0 -12.19054,-5.4201 -12.19054,-12.10614"
+         id="path3656"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 136.52464,155.27931 43.32329,0 0,-1.24719 -43.32329,0 z m 36.13087,4.29482 8.43023,-4.9231 -8.43023,-4.91373 c -0.29069,-0.16879 -0.67517,-0.075 -0.85333,0.22506 -0.1688,0.30007 -0.075,0.68454 0.22505,0.85334 l 7.50187,4.37921 0,-1.07839 -7.50187,4.37922 c -0.30007,0.16879 -0.39385,0.55326 -0.22505,0.85333 0.17816,0.30008 0.56264,0.39385 0.85333,0.22506 z"
+         id="path3658"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 133.16755,162.82806 50.54384,20.55513 -0.46886,1.16279 -50.54385,-20.55513 z m 45.49884,13.87846 5.9546,7.72693 -9.65865,1.37847 c -0.33759,0.0469 -0.65642,-0.18755 -0.7033,-0.53451 -0.0469,-0.33759 0.18754,-0.65642 0.53451,-0.7033 l 8.58964,-1.22843 -0.40323,1.00337 -5.2982,-6.88296 c -0.21567,-0.27195 -0.15941,-0.66579 0.11253,-0.8721 0.27195,-0.21567 0.66579,-0.15941 0.8721,0.11253 z"
+         id="path3660"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 133.16755,185.01484 50.54384,-20.5645 -0.46886,-1.15341 -50.54385,20.55512 z m 45.49884,-13.87846 5.9546,-7.72692 -9.65865,-1.37847 c -0.33759,-0.0563 -0.65642,0.18755 -0.7033,0.52513 -0.0469,0.34696 0.18754,0.65642 0.53451,0.71268 l 8.58964,1.22843 -0.40323,-1.00337 -5.2982,6.88296 c -0.21567,0.27194 -0.15941,0.66579 0.11253,0.87209 0.27195,0.21568 0.66579,0.15942 0.8721,-0.11253 z"
+         id="path3662"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 112.14356,283.90823 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3664"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 112.14356,283.90823 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3666"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 181.06698,283.90823 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3668"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 181.06698,283.90823 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19054 -12.19054,12.19054 -6.73293,0 -12.19054,-5.45761 -12.19054,-12.19054"
+         id="path3670"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 136.52464,283.27995 43.32329,0 0,1.24718 -43.32329,0 z m 36.13087,-4.28545 8.43023,4.91373 -8.43023,4.91372 c -0.29069,0.17817 -0.67517,0.075 -0.85333,-0.22505 -0.1688,-0.30008 -0.075,-0.68455 0.22505,-0.85334 l 7.50187,-4.37922 0,1.0784 -7.50187,-4.36984 c -0.30007,-0.17817 -0.39385,-0.56264 -0.22505,-0.85334 0.17816,-0.30007 0.56264,-0.40322 0.85333,-0.22506 z"
+         id="path3672"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 251.55642,199.58722 c 0,-6.68604 5.4201,-12.11552 12.10614,-12.11552 6.69542,0 12.11552,5.42948 12.11552,12.11552 0,6.69542 -5.4201,12.11552 -12.11552,12.11552 -6.68604,0 -12.10614,-5.4201 -12.10614,-12.11552"
+         id="path3674"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 251.55642,199.58722 c 0,-6.68604 5.4201,-12.11552 12.10614,-12.11552 6.69542,0 12.11552,5.42948 12.11552,12.11552 0,6.69542 -5.4201,12.11552 -12.11552,12.11552 -6.68604,0 -12.10614,-5.4201 -12.10614,-12.11552"
+         id="path3676"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 251.55642,238.7376 c 0,-6.73293 5.4201,-12.19054 12.10614,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.68604,0 -12.10614,-5.45761 -12.10614,-12.19054"
+         id="path3678"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 251.55642,238.7376 c 0,-6.73293 5.4201,-12.19054 12.10614,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.68604,0 -12.10614,-5.45761 -12.10614,-12.19054"
+         id="path3680"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 202.14723,200.99382 52.10048,27.96322 -0.59077,1.09715 -52.10048,-27.96322 z m 47.79629,20.78018 5.10127,8.3177 -9.75243,0.34696 c -0.34696,0.009 -0.63766,-0.26257 -0.64704,-0.60953 -0.009,-0.33758 0.25319,-0.62828 0.60015,-0.64703 l 8.68342,-0.30008 -0.51576,0.94711 -4.53863,-7.39872 c -0.17817,-0.30007 -0.0844,-0.68454 0.2063,-0.86271 0.2907,-0.17817 0.67517,-0.0844 0.86272,0.2063 z"
+         id="path3682"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 202.23163,237.79049 48.63087,-36.88106 -0.75019,-0.994 -48.63087,36.87168 z m 45.49884,-29.11663 3.74155,-9.01162 -9.68679,1.18155 c -0.33758,0.0375 -0.58139,0.34696 -0.54388,0.69392 0.0469,0.33758 0.35634,0.58139 0.69392,0.54388 l 0,0 8.62715,-1.05026 -0.65641,-0.85334 -3.32896,8.01763 c -0.13128,0.31883 0.0188,0.68454 0.33759,0.81582 0.31883,0.13129 0.68454,-0.0187 0.81583,-0.33758 z"
+         id="path3684"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 205.82315,284.36772 48.62149,-35.79329 -0.74081,-1.01276 -48.62149,35.80267 z m 45.37693,-28.07575 3.87284,-8.95535 -9.70554,1.0315 c -0.33759,0.0375 -0.59078,0.34697 -0.55327,0.69393 0.0375,0.33758 0.34697,0.59077 0.68455,0.55326 l 8.63653,-0.92836 -0.63766,-0.86271 -3.45086,7.97073 c -0.13128,0.31883 0.009,0.68455 0.3282,0.81583 0.31883,0.14066 0.68455,-0.009 0.82521,-0.31883 z"
+         id="path3686"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 205.54183,246.43639 44.83304,-6.89234 -0.18754,-1.23781 -44.83305,6.89235 z m 38.38144,-1.55663 7.57688,-6.14216 -9.07726,-3.57276 c -0.31883,-0.13129 -0.68454,0.0281 -0.80645,0.34696 -0.13128,0.31883 0.0281,0.68454 0.35634,0.81583 l 8.07389,3.17891 -0.15942,-1.06901 -6.75168,5.46698 c -0.27194,0.21568 -0.30945,0.60953 -0.0938,0.88147 0.21568,0.27195 0.60952,0.30945 0.88147,0.0938 z"
+         id="path3688"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 202.34416,275.72181 52.41931,-66.10084 -0.98462,-0.77832 -52.40993,66.10085 z m 51.31278,-57.8019 1.38785,-9.65865 -9.09602,3.54463 c -0.31883,0.13128 -0.47824,0.48762 -0.34696,0.81583 0.12191,0.31883 0.48762,0.47824 0.80645,0.35634 l 8.09264,-3.16016 -0.85334,-0.67517 -1.22843,8.59901 c -0.0469,0.33759 0.18755,0.65642 0.52513,0.70331 0.34697,0.0563 0.6658,-0.18755 0.71268,-0.52514 z"
+         id="path3690"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 7.4268502,164.18778 0,11.09339 85.3337588,0 0,-11.09339 -85.3337588,0 z"
+         id="path3692"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 7.4268502,164.18778 85.3337588,0 0,11.09339 -85.3337588,0 z"
+         id="path3694"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="11.046059"
+         y="172.18478"
+         id="text3696"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Current position: 6791</text>
+      <path
+         d="m 93.032552,170.34869 26.903578,-12.78131 -0.53451,-1.13466 -26.903576,12.79069 z m 28.244538,-9.95873 c 1.86609,-0.89085 2.66316,-3.13203 1.78169,-4.99812 -0.89084,-1.87547 -3.13203,-2.67254 -4.99812,-1.7817 -1.87547,0.89085 -2.67254,3.13203 -1.78169,4.99812 0.89084,1.87547 3.12265,2.67254 4.99812,1.7817 z"
+         id="path3698"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="36.166245"
+         y="113.06025"
+         id="text3700"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Start checkpoint</text>
+      <text
+         x="48.319271"
+         y="120.56212"
+         id="text3702"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">message</text>
+      <text
+         x="293.39252"
+         y="120.61977"
+         id="text3704"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Ack. with </text>
+      <text
+         x="286.79089"
+         y="128.12164"
+         id="text3706"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">position 6791 </text>
+      <text
+         x="146.4926"
+         y="133.59908"
+         id="text3708"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Emit stream barriers</text>
+      <path
+         d="m 152.72867,134.2647 -5.9546,7.55813 0.98462,0.76894 5.94523,-7.55813 -0.97525,-0.76894 z m -7.63315,4.64178 -1.6973,8.21455 7.59565,-3.57277 -5.89835,-4.64178 z"
+         id="path3710"
+         style="fill:#935f1c;fill-opacity:1;fill-rule:nonzero;stroke:#935f1c;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="354.89056"
+         y="123.24857"
+         id="text3712"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Operator received barrier</text>
+      <text
+         x="374.09534"
+         y="130.75044"
+         id="text3714"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">at each input</text>
+      <text
+         x="697.52765"
+         y="164.91731"
+         id="text3716"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Emits next barrier</text>
+      <path
+         d="m 693.36023,162.34044 -14.02849,-0.33758 0,-1.23781 14.02849,0.33758 0,1.23781 z m -12.8657,2.8132 -7.42685,-3.93848 7.61439,-3.56338 -0.18754,7.50186 z"
+         id="path3718"
+         style="fill:#935f1c;fill-opacity:1;fill-rule:nonzero;stroke:#935f1c;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 396.5863,133.60829 6.8267,7.35183 -0.91898,0.86271 -6.8267,-7.37058 0.91898,-0.84396 z m 8.27081,4.31357 2.34433,8.06451 -7.8582,-2.96324 5.51387,-5.10127 z"
+         id="path3720"
+         style="fill:#935f1c;fill-opacity:1;fill-rule:nonzero;stroke:#935f1c;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="979.40442"
+         y="139.69937"
+         id="text3722"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink acknowledges</text>
+      <text
+         x="983.75555"
+         y="147.20123"
+         id="text3724"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">checkpoint after</text>
+      <text
+         x="977.00385"
+         y="154.70313"
+         id="text3726"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">receiving all barriers</text>
+      <path
+         d="m 234.35838,63.906546 c 0,-3.394596 10.81207,-6.151533 24.14665,-6.151533 13.34394,0 24.14664,2.756937 24.14664,6.151533 l 0,24.587375 c 0,3.394596 -10.8027,6.142155 -24.14664,6.142155 -13.33458,0 -24.14665,-2.747559 -24.14665,-6.142155 z"
+         id="path3728"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 282.65167,63.906546 c 0,3.394595 -10.8027,6.142155 -24.14664,6.142155 -13.33458,0 -24.14665,-2.74756 -24.14665,-6.142155"
+         id="path3730"
+         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 234.35838,63.906546 c 0,-3.394596 10.81207,-6.151533 24.14665,-6.151533 13.34394,0 24.14664,2.756937 24.14664,6.151533 l 0,24.587375 c 0,3.394596 -10.8027,6.142155 -24.14664,6.142155 -13.33458,0 -24.14665,-2.747559 -24.14665,-6.142155 z"
+         id="path3732"
+         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="443.11224"
+         y="127.02732"
+         id="text3734"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Writes a snapshot</text>
+      <text
+         x="454.81516"
+         y="134.52919"
+         id="text3736"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">of its state</text>
+      <text
+         x="235.96613"
+         y="53.513088"
+         id="text3738"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">State Backend</text>
+      <path
+         d="m 7.4268502,202.47544 0,11.10277 85.3337588,0 0,-11.10277 -85.3337588,0 z"
+         id="path3740"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 7.4268502,202.47544 85.3337588,0 0,11.10277 -85.3337588,0 z"
+         id="path3742"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="11.046059"
+         y="210.52177"
+         id="text3744"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Current position: 7252</text>
+      <path
+         d="m 93.032552,208.63635 26.903578,-12.78131 -0.53451,-1.12528 -26.903576,12.78131 z m 28.244538,-9.95873 c 1.86609,-0.89085 2.66316,-3.12265 1.78169,-4.99812 -0.89084,-1.86609 -3.13203,-2.66316 -4.99812,-1.77232 -1.87547,0.88147 -2.67254,3.12266 -1.78169,4.98875 0.89084,1.87546 3.12265,2.67254 4.99812,1.78169 z"
+         id="path3746"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 7.4268502,255.61681 0,11.09338 85.3337588,0 0,-11.09338 -85.3337588,0 z"
+         id="path3748"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 7.4268502,255.61681 85.3337588,0 0,11.09338 -85.3337588,0 z"
+         id="path3750"
+         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="11.046059"
+         y="263.69882"
+         id="text3752"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Current position: 5589</text>
+      <path
+         d="m 93.032552,261.77772 26.903578,-12.78131 -0.53451,-1.13466 -26.903576,12.79069 z m 28.244538,-9.95874 c 1.86609,-0.89084 2.66316,-3.13203 1.78169,-4.99812 -0.89084,-1.87546 -3.13203,-2.67254 -4.99812,-1.78169 -1.87547,0.89085 -2.67254,3.13203 -1.78169,4.99812 0.89084,1.87547 3.12265,2.67254 4.99812,1.78169 z"
+         id="path3754"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 93.023174,300.056 26.903576,-12.14365 -0.51575,-1.13466 -26.903579,12.13428 z m 28.188276,-9.29294 c 1.88484,-0.85334 2.7288,-3.07576 1.87546,-4.96061 -0.85333,-1.88484 -3.07576,-2.7288 -4.96061,-1.87547 -1.89422,0.85334 -2.7288,3.07577 -1.87546,4.96061 0.84396,1.88485 3.06638,2.72881 4.96061,1.87547 z"
+         id="path3756"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 320.79867,13.690911 0,81.732861 132.22044,0 0,-81.732861 -132.22044,0 z"
+         id="path3758"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 320.79867,13.690911 132.22044,0 0,81.732861 -132.22044,0 z"
+         id="path3760"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="357.26532"
+         y="35.674438"
+         id="text3762"
+         xml:space="preserve"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Master</text>
+      <path
+         d="m 325.48734,53.694627 0,9.377336 52.34429,0 0,-9.377336 -52.34429,0 z"
+         id="path3764"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 325.48734,53.694627 52.34429,0 0,9.377336 -52.34429,0 z"
+         id="path3766"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="329.17068"
+         y="60.420757"
+         id="text3768"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source 1</text>
+      <text
+         x="351.37622"
+         y="60.420757"
+         id="text3770"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">: </text>
+      <text
+         x="355.42722"
+         y="60.420757"
+         id="text3772"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">6791</text>
+      <path
+         d="m 325.48734,63.071963 0,9.377336 52.34429,0 0,-9.377336 -52.34429,0 z"
+         id="path3774"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 325.48734,63.071963 52.34429,0 0,9.377336 -52.34429,0 z"
+         id="path3776"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="329.17068"
+         y="69.828026"
+         id="text3778"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source 2</text>
+      <text
+         x="351.37622"
+         y="69.828026"
+         id="text3780"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">: 7252</text>
+      <path
+         d="m 325.48734,72.299262 0,9.377336 52.34429,0 0,-9.377336 -52.34429,0 z"
+         id="path3782"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 325.48734,72.299262 52.34429,0 0,9.377336 -52.34429,0 z"
+         id="path3784"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="329.17068"
+         y="78.97686"
+         id="text3786"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source 3</text>
+      <text
+         x="351.37622"
+         y="78.97686"
+         id="text3788"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">: 5589</text>
+      <path
+         d="m 325.48734,81.826635 0,9.377336 52.34429,0 0,-9.377336 -52.34429,0 z"
+         id="path3790"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 325.48734,81.826635 52.34429,0 0,9.377336 -52.34429,0 z"
+         id="path3792"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="329.17068"
+         y="88.561928"
+         id="text3794"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source 4</text>
+      <text
+         x="351.37622"
+         y="88.561928"
+         id="text3796"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">: 6843</text>
+      <path
+         d="m 378.15046,53.694627 0,9.377336 52.19425,0 0,-9.377336 -52.19425,0 z"
+         id="path3798"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 378.15046,53.694627 52.19425,0 0,9.377336 -52.19425,0 z"
+         id="path3800"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="381.69125"
+         y="60.420757"
+         id="text3802"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">State 1:</text>
+      <path
+         d="m 378.15046,63.071963 0,9.377336 52.19425,0 0,-9.377336 -52.19425,0 z"
+         id="path3804"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 378.15046,63.071963 52.19425,0 0,9.377336 -52.19425,0 z"
+         id="path3806"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="381.69125"
+         y="69.828026"
+         id="text3808"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">State 2:</text>
+      <text
+         x="328.04251"
+         y="50.442242"
+         id="text3810"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Checkpoint data</text>
+      <path
+         d="m 378.15046,71.980432 0,9.377336 52.19425,0 0,-9.377336 -52.19425,0 z"
+         id="path3812"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 378.15046,71.980432 52.19425,0 0,9.377336 -52.19425,0 z"
+         id="path3814"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="381.69125"
+         y="78.712715"
+         id="text3816"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink 1: (pending)</text>
+      <path
+         d="m 378.15046,81.826635 0,9.377336 52.19425,0 0,-9.377336 -52.19425,0 z"
+         id="path3818"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 378.15046,81.826635 52.19425,0 0,9.377336 -52.19425,0 z"
+         id="path3820"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="381.69125"
+         y="88.478439"
+         id="text3822"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink 2</text>
+      <text
+         x="397.44516"
+         y="88.478439"
+         id="text3824"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">: (pending)</text>
+      <path
+         d="m 340.32228,245.77998 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3826"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 340.32228,245.77998 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3828"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 409.2457,192.94807 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3830"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 409.2457,192.94807 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3832"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 409.2457,245.77998 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3834"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 409.2457,245.77998 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3836"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 364.55332,245.14232 43.32329,0 0,1.25656 -43.32329,0 z m 36.14025,-4.27606 8.42085,4.91372 -8.42085,4.91372 c -0.30007,0.1688 -0.69392,0.075 -0.86271,-0.22505 -0.16879,-0.30008 -0.075,-0.67517 0.22505,-0.86272 l 7.50187,-4.36984 0,1.08778 -7.50187,-4.3886 c -0.30007,-0.16879 -0.39384,-0.54388 -0.22505,-0.84396 0.16879,-0.30007 0.56264,-0.4126 0.86271,-0.22505 z"
+         id="path3838"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 430.32595,162.86557 52.08173,27.063 -0.56264,1.10652 -52.10048,-27.06299 z m 47.69314,19.93622 5.2138,8.25206 -9.75243,0.46886 c -0.33759,0.0188 -0.63766,-0.24381 -0.65642,-0.58139 -0.0188,-0.35634 0.24381,-0.63766 0.60015,-0.65642 l 8.66466,-0.43135 -0.48762,0.95649 -4.63241,-7.33308 c -0.18754,-0.30008 -0.11252,-0.67517 0.18755,-0.86272 0.28132,-0.18754 0.67517,-0.0938 0.86272,0.18755 z"
+         id="path3840"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 340.32228,192.94807 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3842"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 340.32228,192.94807 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3844"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 364.55332,193.56697 43.32329,0 0,-1.23781 -43.32329,0 z m 36.14025,4.29482 8.42085,-4.91372 -8.42085,-4.91373 c -0.30007,-0.16879 -0.69392,-0.075 -0.86271,0.22506 -0.16879,0.30007 -0.075,0.67517 0.22505,0.86272 l 7.50187,4.36983 0,-1.08777 -7.50187,4.3886 c -0.30007,0.16879 -0.39384,0.54388 -0.22505,0.84396 0.16879,0.30007 0.56264,0.4126 0.86271,0.22505 z"
+         id="path3846"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 409.2457,154.7448 c 0,-6.69542 5.43886,-12.11552 12.11552,-12.11552 6.69542,0 12.11552,5.4201 12.11552,12.11552 0,6.67666 -5.4201,12.09676 -12.11552,12.09676 -6.67666,0 -12.11552,-5.4201 -12.11552,-12.09676"
+         id="path3848"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 409.2457,154.7448 c 0,-6.69542 5.43886,-12.11552 12.11552,-12.11552 6.69542,0 12.11552,5.4201 12.11552,12.11552 0,6.67666 -5.4201,12.09676 -12.11552,12.09676 -6.67666,0 -12.11552,-5.4201 -12.11552,-12.09676"
+         id="path3850"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 340.32228,154.7448 c 0,-6.69542 5.43886,-12.11552 12.11552,-12.11552 6.69542,0 12.11552,5.4201 12.11552,12.11552 0,6.67666 -5.4201,12.09676 -12.11552,12.09676 -6.67666,0 -12.11552,-5.4201 -12.11552,-12.09676"
+         id="path3852"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 340.32228,154.7448 c 0,-6.69542 5.43886,-12.11552 12.11552,-12.11552 6.69542,0 12.11552,5.4201 12.11552,12.11552 0,6.67666 -5.4201,12.09676 -12.11552,12.09676 -6.67666,0 -12.11552,-5.4201 -12.11552,-12.09676"
+         id="path3854"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 364.55332,155.28869 43.32329,0 0,-1.25657 -43.32329,0 z m 36.14025,4.29482 8.42085,-4.93248 -8.42085,-4.91373 c -0.30007,-0.16879 -0.69392,-0.075 -0.86271,0.22506 -0.16879,0.30007 -0.075,0.69392 0.22505,0.86271 l 7.50187,4.36984 0,-1.06901 -7.50187,4.36984 c -0.30007,0.16879 -0.39384,0.56264 -0.22505,0.86271 0.16879,0.30007 0.56264,0.39385 0.86271,0.22506 z"
+         id="path3856"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 361.34627,162.82806 50.54384,20.55513 -0.46886,1.16279 -50.54385,-20.55513 z m 45.51759,13.87846 5.94523,7.72693 -9.65866,1.38784 c -0.33758,0.0375 -0.65641,-0.18754 -0.71267,-0.54388 -0.0375,-0.33759 0.18754,-0.65642 0.54388,-0.69393 l 8.58964,-1.2378 -0.4126,1.01275 -5.28882,-6.88297 c -0.2063,-0.28132 -0.16879,-0.67516 0.11253,-0.88147 0.26257,-0.2063 0.65641,-0.15003 0.88147,0.11253 z"
+         id="path3858"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 361.34627,185.01484 50.54384,-20.55512 -0.46886,-1.16279 -50.54385,20.55512 z m 45.51759,-13.87846 5.94523,-7.72692 -9.65866,-1.36909 c -0.33758,-0.0563 -0.65641,0.18755 -0.71267,0.52513 -0.0375,0.33758 0.18754,0.65641 0.54388,0.71268 l 8.58964,1.21905 -0.4126,-0.994 -5.28882,6.88297 c -0.2063,0.26256 -0.16879,0.65641 0.11253,0.86271 0.26257,0.2063 0.65641,0.16879 0.88147,-0.11253 z"
+         id="path3860"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 340.32228,283.90823 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3862"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 340.32228,283.90823 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3864"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 409.2457,283.90823 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3866"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 409.2457,283.90823 c 0,-6.73293 5.43886,-12.19054 12.11552,-12.19054 6.69542,0 12.11552,5.45761 12.11552,12.19054 0,6.73293 -5.4201,12.19054 -12.11552,12.19054 -6.67666,0 -12.11552,-5.45761 -12.11552,-12.19054"
+         id="path3868"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 364.55332,283.28932 43.32329,0 0,1.23781 -43.32329,0 z m 36.14025,-4.29482 8.42085,4.91373 -8.42085,4.91372 c -0.30007,0.18755 -0.69392,0.075 -0.86271,-0.22505 -0.16879,-0.30008 -0.075,-0.67517 0.22505,-0.84396 l 7.50187,-4.3886 0,1.08777 -7.50187,-4.36983 c -0.30007,-0.18755 -0.39384,-0.56264 -0.22505,-0.86272 0.16879,-0.30007 0.56264,-0.39385 0.86271,-0.22506 z"
+         id="path3870"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 479.57572,199.58722 c 0,-6.67666 5.45761,-12.11552 12.19054,-12.11552 6.73293,0 12.19054,5.43886 12.19054,12.11552 0,6.69542 -5.45761,12.11552 -12.19054,12.11552 -6.73293,0 -12.19054,-5.4201 -12.19054,-12.11552"
+         id="path3872"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 479.57572,199.58722 c 0,-6.67666 5.45761,-12.11552 12.19054,-12.11552 6.73293,0 12.19054,5.43886 12.19054,12.11552 0,6.69542 -5.45761,12.11552 -12.19054,12.11552 -6.73293,0 -12.19054,-5.4201 -12.19054,-12.11552"
+         id="path3874"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 479.57572,238.74698 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19053 -12.19054,12.19053 -6.73293,0 -12.19054,-5.4576 -12.19054,-12.19053"
+         id="path3876"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 479.57572,238.74698 c 0,-6.73293 5.45761,-12.19054 12.19054,-12.19054 6.73293,0 12.19054,5.45761 12.19054,12.19054 0,6.73293 -5.45761,12.19053 -12.19054,12.19053 -6.73293,0 -12.19054,-5.4576 -12.19054,-12.19053"
+         id="path3878"
+         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 430.34471,200.99382 52.08172,27.96322 -0.58139,1.10652 -52.10048,-27.96321 z m 47.7869,20.78018 5.10128,8.32707 -9.75243,0.33759 c -0.35634,0.0188 -0.63766,-0.26257 -0.65642,-0.60015 0,-0.33759 0.26257,-0.63766 0.60015,-0.65641 l 8.68342,-0.30008 -0.50638,0.95649 -4.53863,-7.4081 c -0.18755,-0.30007 -0.0938,-0.67517 0.2063,-0.86271 0.30007,-0.16879 0.67517,-0.0938 0.86271,0.2063 z"
+         id="path3880"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 430.41973,237.79049 48.63086,-36.87169 -0.75018,-0.99399 -48.63087,36.87168 z m 45.49883,-29.10725 3.75094,-9.021 -9.69617,1.18155 c -0.33758,0.0375 -0.58139,0.35633 -0.54388,0.69392 0.0375,0.33758 0.35634,0.58139 0.69392,0.54388 l 0,0 8.62715,-1.05026 -0.65641,-0.84396 -3.33834,8.00825 c -0.13128,0.31883 0.0188,0.69392 0.33759,0.8252 0.31883,0.13129 0.69392,-0.0187 0.8252,-0.33758 z"
+         id="path3882"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 433.85183,284.3771 48.61211,-35.80267 -0.73143,-1.01276 -48.63086,35.80267 z m 45.36756,-28.07575 3.88221,-8.96473 -9.69616,1.0315 c -0.35634,0.0375 -0.60015,0.35634 -0.56264,0.69393 0.0375,0.33758 0.33758,0.60015 0.69392,0.56264 l 8.62715,-0.93774 -0.63766,-0.86271 -3.45086,7.97073 c -0.13128,0.31883 0.0188,0.69393 0.33758,0.82521 0.31883,0.13128 0.67517,-0.0188 0.80646,-0.31883 z"
+         id="path3884"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 433.57051,246.43639 44.82367,-6.88296 -0.18755,-1.23781 -44.82366,6.88297 z m 38.37206,-1.55663 7.59564,-6.13278 -9.07726,-3.58214 c -0.31883,-0.13129 -0.69392,0.0375 -0.80645,0.35633 -0.13128,0.31883 0.0188,0.67517 0.33759,0.80646 l 8.08326,3.18829 -0.16879,-1.06902 -6.73293,5.45761 c -0.28132,0.22506 -0.31883,0.61891 -0.0938,0.88147 0.2063,0.28132 0.60015,0.31883 0.86271,0.0938 z"
+         id="path3886"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 430.53226,275.73119 52.41931,-66.11022 -0.994,-0.76894 -52.40056,66.09147 z m 51.31278,-57.8019 1.38785,-9.65865 -9.09602,3.54463 c -0.31883,0.13128 -0.48762,0.48762 -0.35634,0.80645 0.13128,0.31883 0.48762,0.48762 0.80645,0.35634 l 8.10202,-3.15079 -0.84396,-0.67516 -1.23781,8.58963 c -0.0563,0.33759 0.18755,0.65642 0.52513,0.71268 0.33759,0.0563 0.65642,-0.18754 0.71268,-0.52513 z"
+         id="path3888"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 462.54648,63.915923 c 0,-3.394596 10.82145,-6.151532 24.15602,-6.151532 13.33457,0 24.13726,2.756936 24.13726,6.151532 l 0,24.587375 c 0,3.394596 -10.80269,6.132778 -24.13726,6.132778 -13.33457,0 -24.15602,-2.738182 -24.15602,-6.132778 z"
+         id="path3890"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 510.83976,63.915923 c 0,3.394596 -10.80269,6.132778 -24.13726,6.132778 -13.33457,0 -24.15602,-2.738182 -24.15602,-6.132778"
+         id="path3892"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 462.54648,63.915923 c 0,-3.394596 10.82145,-6.151532 24.15602,-6.151532 13.33457,0 24.13726,2.756936 24.13726,6.151532 l 0,24.587375 c 0,3.394596 -10.80269,6.132778 -24.13726,6.132778 -13.33457,0 -24.15602,-2.738182 -24.15602,-6.132778 z"
+         id="path3894"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="464.05219"
+         y="53.513088"
+         id="text3896"
+         xml:space="preserve"
+         style="font-size:6.30156994px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">State Backend</text>
+      <path
+         d="m 408.68306,141.22268 0,1.23781 1.25657,0.0188 0,-1.25657 -1.25657,0 z m 0,2.49437 0,1.25657 1.23781,0 0,-1.25657 -1.23781,0 z m -0.0188,2.49437 0,1.25657 1.25656,0 0,-1.23781 -1.25656,-0.0188 z m 0,2.51313 0,1.23781 1.25656,0.0188 0,-1.25656 -1.25656,0 z m 0,2.49437 0,1.25657 1.23781,0 0,-1.25657 -1.23781,0 z m -0.0188,2.49437 0,1.25657 1.25657,0 0,-1.23781 -1.25657,-0.0188 z m 0,2.51313 0,1.23781 1.25657,0.0188 0,-1.25656 -1.25657,0 z m 0,2.49437 0,1.25656 1.23781,0 0.0188,-1.25656 -1.25657,0 z m 0,2.49437 -0.0188,1.25657 1.25656,0 0,-1.23781 -1.23781,-0.0188 z m -0.0188,2.51313 0,1.03151 1.25656,0 0,-1.03151 -1.25656,0 z"
+         id="path3898"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 400.99365,170.89257 -0.67517,1.05027 1.06902,0.67516 0.67516,-1.05026 -1.06901,-0.67517 z m -1.33158,2.10053 -0.67517,1.06901 1.05026,0.65642 0.67517,-1.05026 -1.05026,-0.67517 z m -1.35034,2.11927 -0.67517,1.05027 1.06902,0.67516 0.65641,-1.05026 -1.05026,-0.67517 z m -1.33158,2.10053 -0.67517,1.06901 1.05026,0.65642 0.67517,-1.05026 -1.05026,-0.67517 z m -1.35034,2.11928 -0.67517,1.05026 1.05026,0.67517 0.67517,-1.05027 -1.05026,-0.67516 z m -1.35034,2.10052 -0.65641,1.06902 1.05026,0.65641 0.67517,-1.05026 -1.06902,-0.67517 z m -1.33158,2.11928 -0.31883,0.48762 1.05027,0.67517 0.31882,-0.48762 -1.05026,-0.67517 z"
+         id="path3900"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 408.72057,155.3262 0.54389,1.12528 -1.10653,0.54388 -0.56264,-1.12528 1.12528,-0.54388 z m 1.10653,2.2318 0.54388,1.12528 -1.10652,0.56264 -0.56264,-1.12528 1.12528,-0.56264 z m 1.10652,2.25056 0.54389,1.12528 -1.12528,0.54389 -0.54389,-1.12528 1.12528,-0.54389 z m 1.10653,2.25056 0.54389,1.10653 -1.12529,0.56264 -0.54388,-1.12528 1.12528,-0.54389 z m 1.10653,2.23181 0.54388,1.12528 -1.12528,0.56264 -0.54389,-1.12528 1.12529,-0.56264 z m 1.08777,2.25056 0.56264,1.12528 -1.12528,0.54389 -0.54389,-1.12528 1.10653,-0.54389 z m 1.10652,2.25056 0.56264,1.10653 -1.12528,0.56264 -0.54388,-1.12528 1.10652,-0.54389 z m 1.10653,2.23181 0.56264,1.12528 -1.12528,0.54388 -0.56264,-1.10652 1.12528,-0.56264 z"
+         id="path3902"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 437.69654,220.96755 0.84396,0.93773 -0.93773,0.84396 -0.84396,-0.93773 0.93773,-0.84396 z m 1.66917,1.85671 0.84396,0.93773 -0.91898,0.82521 -0.84396,-0.91898 0.91898,-0.84396 z m 1.68792,1.85671 0.84396,0.91898 -0.93774,0.84396 -0.84396,-0.91898 0.93774,-0.84396 z m 1.66916,1.85672 0.84396,0.91898 -0.91898,0.84396 -0.84396,-0.91898 0.91898,-0.84396 z m 1.68792,1.85671 0.84396,0.91898 -0.93773,0.84396 -0.84396,-0.93774 0.93773,-0.8252 z m 1.66917,1.85671 0.84396,0.91898 -0.91898,0.84396 -0.84396,-0.93773 0.91898,-0.82521 z m 1.68792,1.83796 0.84396,0.93773 -0.93773,0.84396 -0.84396,-0.93773 0.93773,-0.84396 z"
+         id="path3904"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 437.94035,247.31786 0.97524,0.7877 -0.78769,0.97524 -0.97524,-0.78769 0.78769,-0.97525 z m 1.93173,1.5754 0.97525,0.78769 -0.76895,0.97524 -0.97524,-0.78769 0.76894,-0.97524 z m 1.95049,1.57539 0.97524,0.76894 -0.78769,0.97524 -0.97525,-0.76894 0.7877,-0.97524 z m 1.95048,1.55664 0.97525,0.78769 -0.7877,0.97525 -0.97524,-0.7877 0.78769,-0.97524 z m 1.95049,1.57539 0.97524,0.78769 -0.78769,0.97525 -0.97525,-0.7877 0.7877,-0.97524 z m 1.95049,1.57539 0.95649,0.7877 -0.76895,0.97524 -0.97524,-0.7877 0.7877,-0.97524 z m 1.93173,1.55664 0.97524,0.78769 -0.7877,0.97525 -0.95648,-0.7877 0.76894,-0.97524 z"
+         id="path3906"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 465.39719,251.03129 0.75019,0.994 -1.01275,0.75018 -0.75019,-0.99399 1.01275,-0.75019 z m 1.50038,2.00675 0.73143,0.994 -0.994,0.75018 -0.75019,-0.99399 1.01276,-0.75019 z m 1.48161,2.00675 0.75019,0.994 -0.994,0.75018 -0.75018,-0.994 0.99399,-0.75018 z m 1.50038,2.00675 0.75019,0.994 -1.01276,0.75018 -0.75018,-1.01275 1.01275,-0.73143 z m 1.48162,2.00675 0.75018,0.994 -0.99399,0.75018 -0.75019,-1.01275 0.994,-0.73143 z m 1.50037,2.00675 0.75019,0.994 -1.01275,0.75018 -0.73144,-1.01275 0.994,-0.73143 z m 1.50038,1.98799 0.75018,1.01276 -1.01275,0.75018 -0.75019,-1.01275 1.01276,-0.75019 z m 1.48161,2.00675 0.75019,1.01276 -0.994,0.75018 -0.75018,-1.01275 0.99399,-0.75019 z m 1.50038,2.00675 0.0187,0.0375 -0.99399,0.75019 -0.0188,-0.0375 0.994,-0.75019 z"
+         id="path3908"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 477.53147,230.02606 0.26256,1.21905 -1.23781,0.24381 -0.24381,-1.21905 1.21906,-0.24381 z m 0.50637,2.45686 0.24381,1.21905 -1.21905,0.24381 -0.26257,-1.21905 1.23781,-0.24381 z m 0.48762,2.4381 0.24381,1.23781 -1.21905,0.24381 -0.24381,-1.2378 1.21905,-0.24382 z m 0.48762,2.45687 0.26257,1.21905 -1.23781,0.26257 -0.24381,-1.23781 1.21905,-0.24381 z m 0.50638,2.45686 0.24381,1.21905 -1.21905,0.24381 -0.26257,-1.21905 1.23781,-0.24381 z m 0.48762,2.43811 0.24381,1.2378 -1.21905,0.24382 -0.24381,-1.21906 1.21905,-0.26256 z"
+         id="path3910"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 671.37976,154.76356 -0.71268,1.01275 1.01275,0.75018 0.75019,-1.01275 -1.05026,-0.75018 z m -1.46287,2.0255 -0.71267,1.01275 1.01275,0.75019 0.71268,-1.01275 -1.01276,-0.75019 z m -1.46286,2.0255 -0.71268,1.01276 1.01275,0.75018 0.71268,-1.01275 -1.01275,-0.75019 z m -1.46287,2.02551 -0.71267,1.01275 1.01275,0.75019 0.71268,-1.01275 -1.01276,-0.75019 z m -1.46286,2.0255 -0.71268,1.01276 1.01275,0.75018 0.71268,-1.01275 -1.01275,-0.75019 z m -1.46286,2.02551 -0.71268,1.01275 1.01275,0.75019 0.71268,-1.01275 -1.01275,-0.75019 z m -1.46287,2.0255 -0.71268,1.05026 1.01276,0.71268 0.71267,-1.01275 -1.01275,-0.75019 z m -1.46286,2.06302 -0.71268,1.01275 1.01275,0.71268 0.71268,-1.01276 -1.01275,-0.71267 z m -1.46287,2.0255 -0.075,0.11253 1.01276,0.71268 0.075,-0.11253 -1.01275,-0.71268 z"
+         id="path3912"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 467.38519,76.200233 0,9.527374 17.51686,0 0,-9.527374 -17.51686,0 z"
+         id="path3914"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 467.38519,76.200233 17.51686,0 0,9.527374 -17.51686,0 z"
+         id="path3916"
+         style="fill:none;stroke:#70ad47;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="471.28561"
+         y="84.458046"
+         id="text3918"
+         xml:space="preserve"
+         style="font-size:8.70216751px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">s1</text>
+      <path
+         d="m 426.76257,145.25494 0.0375,-1.51913 0.11252,-1.50038 0.075,-0.78769 1.23781,0.13128 -0.075,0.76894 0,-0.0188 -0.0938,1.48162 0,-0.0188 -0.0375,1.48162 -1.25656,-0.0187 z m 0.39385,-5.06376 0.15003,-0.97525 0.30008,-1.50037 0.30007,-1.25656 1.21906,0.30007 -0.30008,1.23781 0,-0.0188 -0.28132,1.46287 0,-0.0188 -0.15004,0.95649 -1.2378,-0.18754 z m 1.08777,-4.95124 0.13128,-0.45011 0.46887,-1.44411 0.65641,-1.70668 1.16279,0.45012 -0.65641,1.70667 0.0187,-0.0375 -0.46887,1.4066 0.0188,-0.0375 -0.13128,0.45012 -1.2003,-0.33759 z m 1.74418,-4.80119 1.21905,-2.56939 0.48763,-0.82521 1.08777,0.6189 -0.46887,0.82521 0.0188,-0.0563 -1.21905,2.55063 -1.12528,-0.54388 z m 2.32558,-4.48237 0.35634,-0.60015 1.63166,-2.38184 0.16879,-0.20631 0.97524,0.7877 -0.15004,0.18755 0.0188,-0.0375 -1.59415,2.32558 0.0188,-0.0375 -0.33759,0.60015 -1.08777,-0.63766 z m 2.94448,-4.16354 0.80645,-0.994 0.93774,-1.0315 0.80645,-0.80645 0.88147,0.90022 -0.80645,0.7877 0.0375,-0.0188 -0.91898,0.994 
 0.0188,-0.0375 -0.78769,0.994 -0.97525,-0.7877 z m 3.48837,-3.69467 0.2063,-0.18755 1.01276,-0.8252 1.05026,-0.7877 0.78769,-0.52513 0.69393,1.05026 -0.7877,0.50638 0.0375,-0.0188 -1.01275,0.76895 0.0188,-0.0188 -0.99399,0.80645 0.0187,-0.0187 -0.18755,0.16879 -0.84396,-0.91898 z m 4.16354,-3.00075 0.24381,-0.13128 1.10653,-0.56264 1.10652,-0.48762 1.06902,-0.39385 0.4126,1.18155 -1.03151,0.37509 0.0375,0 -1.08777,0.46887 0.0375,-0.0188 -1.06901,0.54389 0.0375,-0.0188 -0.22505,0.13128 -0.63766,-1.08777 z m 4.81995,-1.93173 1.06902,-0.2063 1.14403,-0.15004 1.14404,-0.0375 0.45011,-0.0187 0.0563,1.23781 -0.45011,0.0187 -1.12528,0.0563 0.0563,0 -1.10652,0.13128 0.0563,-0.0188 -1.05026,0.20631 -0.24381,-1.21906 z m 5.0075,-0.50637 0.52513,-0.075 -0.0563,0.0187 1.10652,-0.22505 -0.0375,0 1.08777,-0.30008 -0.0375,0.0188 0.90023,-0.33759 0.43135,1.18155 -0.91897,0.33758 -1.14404,0.31883 -1.16279,0.22506 -0.54388,0.075 -0.15004,-1.2378 z m 4.59489,-1.36909 0.13129,-0.0563 -0.0375,0.0188 1.0
 6901,-0.54389 -0.0375,0.0188 1.05027,-0.61891 -0.0188,0.0188 0.994,-0.65642 0.69392,1.03151 -1.01275,0.67517 -1.08777,0.63766 -1.10653,0.56264 -0.13128,0.0563 -0.50638,-1.14403 z m 4.12603,-2.55064 0.0375,-0.0375 -0.0188,0.0188 0.99399,-0.80646 -0.0375,0.0188 0.97525,-0.88147 -0.0188,0.0187 0.7877,-0.78769 0.88147,0.90022 -0.80645,0.7877 -0.97525,0.90022 -1.01275,0.82521 -0.0563,0.0375 -0.75019,-0.994 z m 3.54463,-3.33833 0.22506,-0.24381 -0.0188,0.0375 1.72543,-2.15679 -0.0375,0.0375 0.39384,-0.54389 1.03151,0.69393 -0.39385,0.58139 -1.76294,2.17554 -0.24381,0.26257 -0.91898,-0.84396 z m 2.982,-3.90097 0.50637,-0.75019 -0.0188,0.0375 1.42535,-2.456863 1.08777,0.618904 -1.44411,2.475619 -0.52513,0.76894 -1.0315,-0.69392 z m 2.4381,-4.257314 0.73144,-1.537883 -0.0188,0.03751 0.75019,-1.894222 1.16279,0.450112 -0.75019,1.931731 -0.75019,1.556638 -1.12528,-0.543885 z m 1.87547,-4.538631 0.35634,-1.087771 0,0.01875 0.4126,-1.425355 -0.0188,0.05626 0.2063,-1.106526 1.21905,0.225056 -0.20
 63,1.144035 -0.4126,1.462864 -0.35634,1.106526 -1.2003,-0.393848 z m 1.16279,-4.763687 0.0563,-0.318829 1.2378,0.225056 -0.0563,0.318829 -1.23781,-0.225056 z m -3.18829,0.618904 4.55738,-7.051756 2.90698,7.876962 -7.46436,-0.825206 z"
+         id="path3920"
+         style="fill:#724591;fill-opacity:1;fill-rule:nonzero;stroke:#724591;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 393.30423,188.09061 0.0188,1.25656 -1.25657,0.0188 0,-1.25657 1.23781,-0.0188 z m 0.0375,2.51312 0,1.23781 -1.23781,0.0188 -0.0188,-1.25657 1.25656,0 z m 0.0188,2.49438 0.0188,1.25656 -1.25656,0 -0.0188,-1.23781 1.25657,-0.0188 z m 0.0188,2.49437 0.0188,1.25656 -1.25657,0.0188 0,-1.25656 1.23781,-0.0188 z m 0.0375,2.51312 0,1.23781 -1.23781,0.0188 -0.0188,-1.25657 1.25656,0 z"
+         id="path3922"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 94.795491,92.7231 0.28132,3.732179 -1.247186,0.09377 -0.28132,-3.741557 1.247186,-0.0844 z m 0.365716,4.979365 0.03751,0.45949 0,-0.0094 0.262566,3.291445 -1.247186,0.0938 -0.262565,-3.282074 -0.03751,-0.45949 1.247186,-0.09377 z m 0.403225,4.988745 0.06564,0.84396 0,-0.009 0.271943,2.88822 -1.237809,0.12191 -0.28132,-2.8976 -0.06564,-0.85334 1.247185,-0.0938 z m 0.45949,4.96999 0.112528,1.12528 0,-0.009 0.290697,2.56001 0,-0.009 0,0.0375 -1.237808,0.15942 -0.0094,-0.0469 -0.290698,-2.56939 -0.10315,-1.13466 1.237808,-0.11252 z m 0.56264,4.95123 0.159415,1.22843 -0.0094,-0.009 0.356338,2.45687 0,-0.009 0.0094,0.0187 -1.237809,0.2063 0,-0.0281 -0.365716,-2.47561 -0.150037,-1.23781 1.237808,-0.15004 z m 0.712678,4.9231 0.178169,1.12528 0,-0.0187 0.440735,2.30682 0,-0.0187 0.05626,0.26256 -1.228431,0.26257 -0.05626,-0.26257 -0.440735,-2.32558 -0.187547,-1.13465 1.237809,-0.19693 z m 0.937733,4.87622 0.159415,0.74081 -0.0094,-0.0188 0.534508,2.13803 -0.0094,-0.0187 0.22505
 7,0.75956 -1.200299,0.34696 -0.225057,-0.76894 -0.534508,-2.15678 -0.168792,-0.75957 1.228431,-0.26256 z m 1.247186,4.80119 0.01875,0.075 0,-0.0281 0.647041,1.93173 -0.009,-0.0281 0.59077,1.51913 -1.16279,0.45949 -0.600149,-1.53789 -0.656414,-1.95986 -0.02813,-0.0844 1.200299,-0.34696 z m 1.744181,4.58552 0.009,0.0187 -0.009,-0.0187 0.45012,0.87209 -0.0281,-0.0469 1.05964,1.71606 -0.0281,-0.0375 0.43136,0.59077 -1.01275,0.73143 -0.44074,-0.60015 -1.08777,-1.76294 -0.45949,-0.90022 -0.0188,-0.0375 1.13465,-0.52513 z m 2.61628,4.10727 0.0469,0.0656 -0.0188,-0.0281 1.34096,1.6129 -0.0281,-0.0281 1.06902,1.15341 -0.91898,0.84396 -1.0784,-1.16279 -1.35971,-1.64103 -0.0656,-0.0844 1.01275,-0.73143 z m 3.26331,3.64779 0.98462,0.97524 -0.0187,-0.0188 1.50975,1.37847 -0.009,-0.009 0.23444,0.2063 -0.80645,0.94711 -0.25319,-0.21568 -1.51913,-1.38785 -1.00338,-0.98462 0.88147,-0.89084 z m 3.65716,3.33833 0.30946,0.27194 -0.0281,-0.0281 2.31621,1.78169 -0.75957,0.98462 -2.32558,-1.78169 -0.3282,
 -0.27194 0.81582,-0.95649 z m 3.37585,-1.26594 3.93848,7.39872 -8.28019,-1.2847 4.34171,-6.11402 z"
+         id="path3924"
+         style="fill:#724591;fill-opacity:1;fill-rule:nonzero;stroke:#724591;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 93.84838,93.604569 0.618904,3.694671 -1.237808,0.206301 -0.609527,-3.704048 1.228431,-0.196924 z m 0.825206,4.932479 0.609526,3.694672 -1.228431,0.2063 -0.618904,-3.694671 1.237809,-0.206301 z m 0.815828,4.932482 0.412603,2.49437 0.206301,1.2003 -1.237808,0.2063 -0.196924,-1.20968 -0.412603,-2.48499 1.228431,-0.2063 z m 0.815828,4.93248 0.609527,3.67591 0.0094,0.0188 -1.237808,0.2063 0,-0.0281 -0.609527,-3.66654 1.228431,-0.2063 z m 0.825206,4.93247 0.618904,3.69467 -1.237809,0.20631 -0.618904,-3.69467 1.237809,-0.20631 z m 0.825205,4.93248 0.628282,3.69467 -1.228431,0.20631 -0.637659,-3.69468 1.237808,-0.2063 z m 0.834583,4.93248 0.150038,0.86272 0.487621,2.83195 -1.237808,0.2063 -0.478245,-2.82257 -0.150037,-0.8721 1.228431,-0.2063 z m 0.84396,4.9231 0.300075,1.76294 0.337589,1.93173 -1.237813,0.21568 -0.328207,-1.93173 -0.300075,-1.76294 1.228431,-0.21568 z m 0.843964,4.93248 0.43135,2.50375 0,0 0.21568,1.19092 -1.237806,0.21568 -0.206301,-1.19092 -0.431357,-2.51313
  1.228434,-0.2063 z m 0.86271,4.9231 0.54389,3.09452 0.10315,0.60015 -1.22843,0.21568 -0.10315,-0.60015 -0.55327,-3.09452 1.23781,-0.21568 z m 0.87209,4.92311 0.61891,3.48836 0,-0.009 0.0375,0.2063 -1.22843,0.22505 -0.0375,-0.2063 -0.61891,-3.48837 1.22843,-0.21567 z m 0.88147,4.9231 0.2063,1.12528 0.45949,2.53188 0,-0.009 0.009,0.0375 -1.22843,0.22506 -0.009,-0.0281 -0.45949,-2.53188 -0.2063,-1.12528 1.22843,-0.22505 z m 0.90023,4.91372 0.22505,1.19092 0,0 0.45949,2.39122 0.0188,0.10315 -1.22843,0.23444 -0.0188,-0.10315 -0.45949,-2.39122 -0.22505,-1.2003 1.22843,-0.22506 z m 0.93773,4.91373 0.18755,0.99399 0,0 0.44073,2.25994 0,-0.009 0.0844,0.43136 -1.22843,0.24381 -0.0844,-0.43136 -0.44073,-2.25993 -0.18755,-0.994 1.22843,-0.23443 z m 0.95649,4.89496 0.11253,0.53451 0,0 0.43136,2.10053 0.21567,1.04088 -1.22843,0.25319 -0.21568,-1.04089 -0.43135,-2.10052 -0.10315,-0.53451 1.21905,-0.25319 z m 1.01275,4.89497 0.36572,1.71606 0,-0.009 0.40322,1.86609 -0.009,0 0.0281,0.0844 -1.21906,
 0.28132 -0.0281,-0.0938 -0.40322,-1.87547 -0.36572,-1.70667 1.22843,-0.26257 z m 1.05964,4.87622 0.10315,0.46887 0,-0.009 0.39385,1.67854 0,0 0.36572,1.50038 -1.20968,0.30007 -0.36572,-1.50975 -0.39384,-1.68792 -0.11253,-0.45949 1.21905,-0.28132 z m 1.17217,4.84808 0.0844,0.35634 0,-0.009 0.3751,1.39723 0,0 0.35634,1.30345 -0.009,-0.009 0.16879,0.56264 -1.2003,0.34696 -0.16879,-0.56264 -0.35634,-1.31283 -0.37509,-1.4066 -0.0938,-0.35634 1.21906,-0.30945 z m 1.33158,4.79182 0.17817,0.5814 0,-0.009 0.33758,1.04088 -0.009,-0.009 0.33758,0.97525 0,-0.0188 0.32821,0.89085 0,-0.009 0.0187,0.0656 -1.16279,0.45012 -0.0281,-0.0656 -0.32821,-0.90022 -0.34696,-0.98462 -0.33758,-1.05964 -0.17817,-0.58139 1.19092,-0.36572 z m 1.65041,4.65116 0.14066,0.34696 -0.15004,-0.21568 1.06902,0.994 -0.85334,0.91898 -1.17216,-1.09715 -0.18755,-0.46887 1.15341,-0.47824 z m 2.01613,-2.11928 3.47899,7.62378 -8.18641,-1.7817 4.70742,-5.84208 z"
+         id="path3926"
+         style="fill:#724591;fill-opacity:1;fill-rule:nonzero;stroke:#724591;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 94.795491,94.579812 0.337584,3.741557 -1.247186,0.112528 -0.337584,-3.741557 1.247186,-0.112528 z m 0.450112,4.988743 0.337584,3.732175 -1.247186,0.11253 -0.337584,-3.732177 1.247186,-0.112528 z m 0.450112,4.979365 0.337584,3.73218 -1.247185,0.11253 -0.337585,-3.73218 1.247186,-0.11253 z m 0.450112,4.97937 0.243811,2.68191 0.09377,1.05027 -1.247185,0.11252 -0.09377,-1.05026 -0.24381,-2.68192 1.247185,-0.11252 z m 0.450112,4.97936 0.346962,3.73218 -1.247186,0.12191 -0.346961,-3.74156 1.247185,-0.11253 z m 0.45949,4.97937 0.131283,1.45348 0.206301,2.2787 -1.237808,0.1219 -0.215679,-2.27869 -0.131283,-1.46287 1.247186,-0.11252 z m 0.459489,4.97936 0.337585,3.73218 -1.237809,0.12191 -0.346961,-3.74156 1.247185,-0.11253 z m 0.45949,4.97937 0.0094,0.1219 0.337584,3.61966 -1.247185,0.11252 -0.337584,-3.61027 -0.0094,-0.12191 1.247186,-0.1219 z m 0.459489,4.97936 0.337584,3.63841 0.0094,0.0938 -1.247186,0.12191 -0.0094,-0.0938 -0.337584,-3.63841 1.247185,-0.12191 z m 0.45949,4
 .97937 0.356339,3.73218 -1.247186,0.1219 -0.346962,-3.73218 1.237809,-0.1219 z m 0.478244,4.97936 0.187547,2.00675 0.168792,1.72543 -1.247186,0.12191 -0.168792,-1.72543 -0.187547,-2.00675 1.247186,-0.12191 z m 0.468867,4.97937 0.112528,1.12528 0.243814,2.6069 -1.237812,0.12191 -0.253188,-2.6069 -0.103151,-1.12528 1.237809,-0.12191 z m 0.478242,4.97937 0.0188,0.17816 0,0 0.34696,3.55402 -1.23781,0.1219 -0.356339,-3.55401 -0.01875,-0.17817 1.247189,-0.1219 z m 0.48762,4.97936 0.30946,3.09452 0.0656,0.62828 -1.247189,0.13129 -0.06564,-0.63766 -0.300075,-3.09452 1.237804,-0.12191 z m 0.497,4.96999 0.19693,1.97862 0.17817,1.75356 -1.24719,0.13128 -0.17817,-1.76294 -0.19692,-1.97862 1.24718,-0.1219 z m 0.497,4.97936 0.0844,0.77832 0.30007,2.95386 -1.24718,0.12191 -0.30008,-2.95386 -0.075,-0.77832 1.23781,-0.12191 z m 0.50638,4.96999 0.33758,3.13203 0.0563,0.60015 -1.23781,0.13129 -0.0656,-0.60015 -0.32821,-3.13203 1.23781,-0.13129 z m 0.52513,4.96999 0.18754,1.70668 0.21568,2.0255 -1.2378
 ,0.13128 -0.22506,-2.0255 -0.17817,-1.70668 1.23781,-0.13128 z m 0.54388,4.96999 0.0188,0.18755 0.37509,3.36646 0.0188,0.17817 -1.24719,0.14066 -0.0187,-0.17817 -0.3751,-3.36646 -0.0187,-0.18755 1.24718,-0.14066 z m 0.55327,4.96999 0.2063,1.84733 0,0 0.21568,1.87547 -1.24719,0.15004 -0.21568,-1.88485 -0.2063,-1.84733 1.24719,-0.14066 z m 0.56264,4.96999 0.009,0.0469 0,0 0.36572,3.06639 0.075,0.60953 -1.24719,0.15004 -0.075,-0.61891 -0.36571,-3.06639 0,-0.0469 1.23781,-0.14066 z m 0.59077,4.96061 0.14066,1.09714 0,0 0.33758,2.61628 -1.24718,0.15942 -0.33759,-2.61628 -0.13128,-1.10653 1.23781,-0.15003 z m 0.62828,4.96061 0.22506,1.70667 0,-0.009 0.27194,2.01613 -1.23781,0.16879 -0.28132,-2.01612 -0.21568,-1.70668 1.23781,-0.15941 z m 0.66579,4.95123 0.25319,1.82858 0,0 0.28132,1.87547 -1.23781,0.17817 -0.28132,-1.87547 -0.25319,-1.83796 1.23781,-0.16879 z m 0.72206,4.93248 0.22505,1.46286 0,0 0.33759,2.09115 0,-0.009 0.0281,0.15004 -1.22843,0.21568 -0.0281,-0.15942 -0.33759,-2.09114 -
 0.23443,-1.46287 1.23781,-0.19692 z m 0.80645,4.93248 0.0938,0.57202 0,0 0.32821,1.84733 0,-0.009 0.24381,1.26594 -1.22843,0.23444 -0.24381,-1.26594 -0.32821,-1.85672 -0.10315,-0.58139 1.23781,-0.2063 z m 0.90022,4.89497 0.17817,0.85333 0,-0.009 0.32821,1.50975 -0.009,0 0.30007,1.29407 -1.21905,0.28132 -0.30007,-1.30345 -0.32821,-1.51913 -0.16879,-0.85333 1.21905,-0.25319 z m 1.08777,4.85746 0.0469,0.18754 0,-0.0187 0.31883,1.20968 0,0 0.30945,1.1159 -0.009,-0.0188 0.31883,1.03151 0,-0.009 0.0188,0.0657 -1.19092,0.38447 -0.0188,-0.0657 -0.32821,-1.05026 -0.30945,-1.12528 -0.31883,-1.21905 -0.0469,-0.18755 1.20967,-0.30007 z m 1.39723,4.73555 0.19692,0.55327 0,-0.009 0.30007,0.78769 -0.0844,-0.15003 0.94711,1.26594 -1.00338,0.75018 -1.00337,-1.34096 -0.32821,-0.87209 -0.2063,-0.56264 1.18155,-0.42198 z m 2.90697,-0.56264 2.00675,8.13953 -7.71755,-3.26331 5.7108,-4.87622 z"
+         id="path3928"
+         style="fill:#724591;fill-opacity:1;fill-rule:nonzero;stroke:#724591;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 94.776736,94.645454 -0.04689,3.750934 -1.247186,-0.0094 0.04689,-3.750935 1.247185,0.0094 z m -0.06564,5.007497 -0.04689,3.750939 -1.247185,-0.0188 0.04689,-3.750934 1.247186,0.01875 z m -0.06564,4.998119 -0.04689,3.75094 -1.247186,-0.0188 0.04689,-3.75093 1.247186,0.0187 z m -0.06564,4.99812 -0.01875,1.45349 -0.01875,2.29745 -1.256563,-0.009 0.02813,-2.29745 0.01875,-1.46286 1.247186,0.0188 z m -0.05626,4.99812 -0.03751,3.75094 -1.247186,-0.009 0.03751,-3.75094 1.247186,0.009 z m -0.04689,5.0075 -0.03751,3.75093 -1.247186,-0.0187 0.02813,-3.75094 1.256564,0.0188 z m -0.04689,4.99812 -0.02813,2.75694 0,0 0,0.99399 -1.256563,-0.009 0.0094,-0.994 0.02813,-2.75694 1.247186,0.009 z m -0.03751,4.99812 -0.02813,3.75093 -1.247185,-0.009 0.01875,-3.75094 1.256563,0.009 z m -0.02813,4.99812 -0.0094,0.80645 0,0 -0.0094,2.94448 -1.247186,0 0.0094,-2.95386 0,-0.80645 1.256563,0.009 z m -0.01875,4.99812 -0.0094,3.75093 0,-0.009 0,0.009 -1.247185,0 0,-0.009 0.0094,-3.74156 1.247186,
 0 z m -0.0094,4.99812 0.0094,3.75093 -1.256563,0.009 0,-3.75093 1.247185,-0.009 z m 0.0094,5.0075 0,1.56601 0,0 0.01875,2.17554 -1.247185,0.009 -0.01875,-2.17554 0,-1.57539 1.247186,0 z m 0.02813,4.99812 0.03751,3.75093 -1.256563,0.009 -0.02813,-3.75094 1.247186,-0.009 z m 0.04689,4.99812 0.05627,3.75093 -1.256564,0.009 -0.04689,-3.75094 1.247185,-0.009 z m 0.06564,4.99812 0.02813,1.80982 0,-0.009 0.04689,1.9411 -1.256564,0.0281 -0.03751,-1.94111 -0.02813,-1.80983 1.247186,-0.0188 z m 0.09377,4.99812 0.0844,3.74156 -1.247185,0.0281 -0.0844,-3.75094 1.247186,-0.0188 z m 0.121906,4.98874 0.112528,3.75094 -1.247186,0.0375 -0.112528,-3.75093 1.247186,-0.0375 z m 0.159414,4.99812 0.03751,1.34096 0,0 0.09377,2.4006 -1.247186,0.0563 -0.09377,-2.40997 -0.04689,-1.35034 1.256563,-0.0375 z m 0.187547,4.98874 0.14066,3.3102 0,-0.009 0.01875,0.45011 -1.247186,0.0563 -0.01875,-0.44073 -0.14066,-3.3102 1.247186,-0.0563 z m 0.225056,4.99812 0.0844,1.6973 0,0 0.112528,2.04426 -1.247186,0.0656 -0.11
 2528,-2.04426 -0.0844,-1.70667 1.247186,-0.0563 z m 0.271943,4.98875 0,0.0375 0,-0.009 0.215679,3.27269 0,0 0.02813,0.43136 -1.247186,0.0844 -0.02813,-0.43135 -0.215679,-3.2727 0,-0.0469 1.247186,-0.0656 z m 0.337584,4.97936 0.103151,1.51913 0,-0.009 0.178169,2.22243 -1.247186,0.10315 -0.178169,-2.2318 -0.103151,-1.51913 1.247186,-0.0844 z m 0.384471,4.97937 0.24381,2.73818 0,-0.009 0.09377,0.994 -1.247186,0.12191 -0.09377,-0.994 -0.243811,-2.74756 1.247186,-0.10315 z m 0.459489,4.96999 0.07502,0.75018 0,-0.009 0.318829,2.91635 0,-0.009 0,0.0656 -1.237808,0.15004 -0.0094,-0.0656 -0.31883,-2.92573 -0.07502,-0.75019 1.247186,-0.1219 z m 0.553263,4.95123 0.187547,1.53788 0,0 0.28132,2.18492 -1.237809,0.15942 -0.290697,-2.18492 -0.187547,-1.53789 1.247186,-0.15941 z m 0.637659,4.95123 0.290697,2.03488 0,-0.009 0.262566,1.67854 -1.228431,0.18755 -0.262566,-1.67855 -0.300075,-2.03488 1.237809,-0.17817 z m 0.750187,4.93248 0.384471,2.25994 0,0 0.262565,1.42535 -1.237808,0.22506 -0.253188,-
 1.43473 -0.384471,-2.26932 1.228431,-0.2063 z m 0.862715,4.90435 0.440734,2.25994 0,-0.009 0.290696,1.42535 -1.228429,0.25319 -0.290698,-1.42535 -0.440734,-2.25994 1.228431,-0.24381 z m 0.98462,4.88559 0.45011,2.03488 -0.009,-0.009 0.38447,1.63165 -1.21905,0.28132 -0.384469,-1.63165 -0.440735,-2.03489 1.219054,-0.27194 z m 1.1159,4.85746 0.91898,3.63841 -1.20968,0.30945 -0.91898,-3.63841 1.20968,-0.30945 z m 1.24718,4.82933 0.86272,3.05701 -0.009,-0.009 0.16879,0.54389 -1.20029,0.36571 -0.1688,-0.55326 -0.85333,-3.06639 1.20029,-0.33758 z m 1.38785,4.79182 0.67517,2.2318 0,-0.009 0.44073,1.34096 -1.19092,0.39384 -0.44073,-1.35033 -0.68455,-2.24119 1.2003,-0.36571 z m 1.50975,4.75431 0.42198,1.28469 -0.009,-0.0187 0.80646,2.25994 -1.18155,0.42198 -0.80645,-2.26932 -0.42198,-1.29407 1.19092,-0.38447 z m 1.63166,4.70742 0.0844,0.24381 0,-0.009 1.22844,3.27269 -1.17217,0.43136 -1.22843,-3.27269 -0.0938,-0.24381 1.18155,-0.42198 z m 1.76294,4.66054 1.02213,2.6069 0,-0.009 0.35634,0.88147
  -1.15342,0.46886 -0.35634,-0.88147 -1.0315,-2.60689 1.16279,-0.45949 z m 1.85671,4.6324 0.55326,1.35971 0,-0.009 0.89085,2.10052 -1.14404,0.48762 -0.90022,-2.10052 -0.56264,-1.36909 1.16279,-0.46887 z m 1.94111,4.60427 0.39385,0.92836 -1.15342,0.48762 -0.39384,-0.92836 1.15341,-0.48762 z m 2.75694,-1.47224 -0.45949,8.37396 -6.42348,-5.39197 6.88297,-2.98199 z"
+         id="path3930"
+         style="fill:#724591;fill-opacity:1;fill-rule:nonzero;stroke:#724591;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 140.11616,146.20205 0.0281,1.25656 -1.24719,0.0188 -0.0281,-1.24719 1.24719,-0.0281 z m 0.0469,2.50375 0.0281,1.24718 -1.25657,0.0281 -0.0187,-1.25656 1.24718,-0.0187 z m 0.0469,2.50375 0.0187,1.24718 -1.24718,0.0188 -0.0188,-1.24719 1.24719,-0.0187 z m 0.0469,2.49437 0.0187,1.24718 -1.24718,0.0281 -0.0281,-1.24718 1.25657,-0.0281 z m 0.0375,2.50375 0.0281,1.24718 -1.25657,0.0188 -0.0187,-1.24719 1.24719,-0.0187 z m 0.0469,2.49437 0.0188,0.90022 -1.24719,0.0188 -0.0187,-0.89085 1.24718,-0.0281 z"
+         id="path3932"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 137.8656,157.1454 -0.63766,1.07839 1.06901,0.63766 0.64704,-1.06901 -1.07839,-0.64704 z m -1.27532,2.14741 -0.64704,1.07839 1.0784,0.63766 0.63765,-1.06901 -1.06901,-0.64704 z m -1.2847,2.14741 -0.64703,1.07839 1.07839,0.63766 0.63766,-1.07839 -1.06902,-0.63766 z m -1.28469,2.14741 -0.63766,1.06902 1.06902,0.64703 0.63765,-1.07839 -1.06901,-0.63766 z m -1.2847,2.14741 -0.63766,1.06902 1.06902,0.64703 0.64704,-1.07839 -1.0784,-0.63766 z"
+         id="path3934"
+         style="fill:#8a3142;fill-opacity:1;fill-rule:nonzero;stroke:#8a3142;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 139.17842,189.18776 0.0281,1.24718 -1.24718,0.0188 -0.0281,-1.24719 1.24718,-0.0188 z m 0.0469,2.49437 0.0281,1.25656 -1.25656,0.0188 -0.0188,-1.24719 1.24719,-0.0281 z m 0.0469,2.50375 0.0188,1.24718 -1.24718,0.0281 -0.0188,-1.25656 1.24719,-0.0188 

<TRUNCATED>

[77/89] [abbrv] flink git commit: [FLINK-4386] [rpc] Add a utility to verify calls happen in the Rpc Endpoint's main thread

Posted by se...@apache.org.
[FLINK-4386] [rpc] Add a utility to verify calls happen in the Rpc Endpoint's main thread


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/a2f3f317
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/a2f3f317
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/a2f3f317

Branch: refs/heads/flip-6
Commit: a2f3f317e5f748b3930339816309cd1a2bf25c27
Parents: 0d38da0
Author: Stephan Ewen <se...@apache.org>
Authored: Thu Aug 11 20:30:54 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:03 2016 +0200

----------------------------------------------------------------------
 .../flink/runtime/rpc/MainThreadExecutor.java   |  2 +-
 .../runtime/rpc/MainThreadValidatorUtil.java    | 47 ++++++++++
 .../apache/flink/runtime/rpc/RpcEndpoint.java   | 38 +++++++-
 .../flink/runtime/rpc/akka/AkkaRpcActor.java    | 37 +++++---
 .../flink/runtime/rpc/akka/AkkaRpcService.java  |  2 +-
 .../rpc/akka/MainThreadValidationTest.java      | 97 ++++++++++++++++++++
 6 files changed, 205 insertions(+), 18 deletions(-)
----------------------------------------------------------------------
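
Taken together, the change works like this: the Akka actor brackets every
message dispatch between enterMainThread() and exitMainThread(), and endpoint
code can then assert via validateRunsInMainThread() that it is running inside
such a bracket. A minimal sketch of the resulting usage pattern (the
CounterGateway/CounterEndpoint names are illustrative, not part of this
commit):

    import org.apache.flink.runtime.rpc.RpcEndpoint;
    import org.apache.flink.runtime.rpc.RpcGateway;
    import org.apache.flink.runtime.rpc.RpcMethod;
    import org.apache.flink.runtime.rpc.RpcService;

    // illustrative gateway, mirroring the TestGateway in the test below
    interface CounterGateway extends RpcGateway {
        void increment();
    }

    class CounterEndpoint extends RpcEndpoint<CounterGateway> {

        private long counter; // mutated only from the endpoint's main thread

        CounterEndpoint(RpcService rpcService) {
            super(rpcService);
        }

        @RpcMethod
        public void increment() {
            // no-op in normal runs; under -ea it fails fast if a caller
            // invoked this method directly instead of through getSelf()
            validateRunsInMainThread();
            counter++;
        }
    }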


http://git-wip-us.apache.org/repos/asf/flink/blob/a2f3f317/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
index 4efb382..5e4fead 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadExecutor.java
@@ -30,7 +30,7 @@ import java.util.concurrent.TimeoutException;
  *
  * <p>This interface is intended to be implemented by the self gateway in a {@link RpcEndpoint}
  * implementation which allows to dispatch local procedures to the main thread of the underlying
- * rpc server.
+ * RPC endpoint.
  */
 public interface MainThreadExecutor {
 

http://git-wip-us.apache.org/repos/asf/flink/blob/a2f3f317/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java
new file mode 100644
index 0000000..b3fea77
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * This utility exists to bridge the package-private visibility of the
+ * {@code currentMainThread} field in the {@link RpcEndpoint} to other packages.
+ * 
+ * The {@code currentMainThread} can be hidden from {@code RpcEndpoint} implementations
+ * and only be accessed via this utility from other packages.
+ */
+public final class MainThreadValidatorUtil {
+
+	private final RpcEndpoint<?> endpoint;
+
+	public MainThreadValidatorUtil(RpcEndpoint<?> endpoint) {
+		this.endpoint = checkNotNull(endpoint);
+	}
+
+	public void enterMainThread() {
+		assert(endpoint.currentMainThread.compareAndSet(null, Thread.currentThread())) : 
+				"The RpcEndpoint is concurrently accessed by " + endpoint.currentMainThread.get();
+	}
+	
+	public void exitMainThread() {
+		assert(endpoint.currentMainThread.compareAndSet(Thread.currentThread(), null)) :
+				"The RpcEndpoint is concurrently accessed by " + endpoint.currentMainThread.get();
+	}
+}
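
The enter/exit pair is meant to bracket each unit of work executed on the
endpoint's main thread, exactly as AkkaRpcActor.onReceive does further down in
this commit. A minimal sketch of that bracketing idiom (endpoint and
processMessage() are stand-ins):

    MainThreadValidatorUtil validator = new MainThreadValidatorUtil(endpoint);

    validator.enterMainThread();     // record the current thread as main thread
    try {
        processMessage(message);     // stand-in for the actual dispatch logic
    } finally {
        validator.exitMainThread();  // always clear the marker, even on failure
    }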

http://git-wip-us.apache.org/repos/asf/flink/blob/a2f3f317/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
index 44933d5..d36a283 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
@@ -29,6 +29,7 @@ import scala.concurrent.Future;
 
 import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
@@ -75,6 +76,9 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	 * of the executing rpc server. */
 	private final MainThreadExecutionContext mainThreadExecutionContext;
 
+	/** A reference to the endpoint's main thread, set while the main thread is executing inside this endpoint, {@code null} otherwise */
+	final AtomicReference<Thread> currentMainThread = new AtomicReference<>(null); 
+
 	/**
 	 * Initializes the RPC endpoint.
 	 * 
@@ -92,6 +96,15 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 		this.mainThreadExecutionContext = new MainThreadExecutionContext((MainThreadExecutor) self);
 	}
 
+	/**
+	 * Returns the class of the self gateway type.
+	 *
+	 * @return Class of the self gateway type
+	 */
+	public final Class<C> getSelfGatewayType() {
+		return selfGatewayType;
+	}
+	
 	// ------------------------------------------------------------------------
 	//  Shutdown
 	// ------------------------------------------------------------------------
@@ -193,13 +206,28 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 		return ((MainThreadExecutor) self).callAsync(callable, timeout);
 	}
 
+	// ------------------------------------------------------------------------
+	//  Main Thread Validation
+	// ------------------------------------------------------------------------
+
 	/**
-	 * Returns the class of the self gateway type.
-	 *
-	 * @return Class of the self gateway type
+	 * Validates that the method call happens in the RPC endpoint's main thread.
+	 * 
+	 * <p><b>IMPORTANT:</b> This check only happens when assertions are enabled,
+	 * such as when running tests.
+	 * 
+	 * <p>This can be used for additional checks, like
+	 * <pre>{@code
+	 * protected void concurrencyCriticalMethod() {
+	 *     validateRunsInMainThread();
+	 *     
+	 *     // some critical stuff
+	 * }
+	 * }</pre>
 	 */
-	public final Class<C> getSelfGatewayType() {
-		return selfGatewayType;
+	public void validateRunsInMainThread() {
+		// compares against the main thread recorded by the MainThreadValidatorUtil around each dispatch
+		assert currentMainThread.get() == Thread.currentThread();
 	}
 
 	// ------------------------------------------------------------------------
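
Because validateRunsInMainThread() boils down to a plain assert, it is free in
production and only takes effect when the JVM runs with assertions enabled
(java -ea; build tools such as Maven Surefire typically enable assertions for
tests by default). One way to check at runtime whether assertions are on, the
same trick the new test below uses, is the assignment-inside-assert idiom:

    // returns true only when the JVM was started with assertions enabled (-ea)
    static boolean assertionsEnabled() {
        boolean enabled = false;
        // the assignment only executes when assertions are on; the assert
        // then evaluates to true and does not throw
        assert enabled = true;
        return enabled;
    }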

http://git-wip-us.apache.org/repos/asf/flink/blob/a2f3f317/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
index 18ccf1b..5e0a7da 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
@@ -22,14 +22,16 @@ import akka.actor.ActorRef;
 import akka.actor.Status;
 import akka.actor.UntypedActor;
 import akka.pattern.Patterns;
+import org.apache.flink.runtime.rpc.MainThreadValidatorUtil;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.akka.messages.CallAsync;
 import org.apache.flink.runtime.rpc.akka.messages.RpcInvocation;
 import org.apache.flink.runtime.rpc.akka.messages.RunAsync;
-import org.apache.flink.util.Preconditions;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
 import scala.concurrent.Future;
 import scala.concurrent.duration.FiniteDuration;
 
@@ -37,6 +39,8 @@ import java.lang.reflect.Method;
 import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
 
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
  * Akka rpc actor which receives {@link RpcInvocation}, {@link RunAsync} and {@link CallAsync}
  * messages.
@@ -51,24 +55,35 @@ import java.util.concurrent.TimeUnit;
  * @param <T> Type of the {@link RpcEndpoint}
  */
 class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends UntypedActor {
+	
 	private static final Logger LOG = LoggerFactory.getLogger(AkkaRpcActor.class);
 
+	/** the endpoint to invoke the methods on */
 	private final T rpcEndpoint;
 
+	/** the helper that tracks whether calls come from the main thread */
+	private final MainThreadValidatorUtil mainThreadValidator;
+
 	AkkaRpcActor(final T rpcEndpoint) {
-		this.rpcEndpoint = Preconditions.checkNotNull(rpcEndpoint, "rpc endpoint");
+		this.rpcEndpoint = checkNotNull(rpcEndpoint, "rpc endpoint");
+		this.mainThreadValidator = new MainThreadValidatorUtil(rpcEndpoint);
 	}
 
 	@Override
-	public void onReceive(final Object message)  {
-		if (message instanceof RunAsync) {
-			handleRunAsync((RunAsync) message);
-		} else if (message instanceof CallAsync) {
-			handleCallAsync((CallAsync) message);
-		} else if (message instanceof RpcInvocation) {
-			handleRpcInvocation((RpcInvocation) message);
-		} else {
-			LOG.warn("Received message of unknown type {}. Dropping this message!", message.getClass());
+	public void onReceive(final Object message) {
+		mainThreadValidator.enterMainThread();
+		try {
+			if (message instanceof RunAsync) {
+				handleRunAsync((RunAsync) message);
+			} else if (message instanceof CallAsync) {
+				handleCallAsync((CallAsync) message);
+			} else if (message instanceof RpcInvocation) {
+				handleRpcInvocation((RpcInvocation) message);
+			} else {
+				LOG.warn("Received message of unknown type {}. Dropping this message!", message.getClass());
+			}
+		} finally {
+			mainThreadValidator.exitMainThread();
 		}
 	}
 
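Note the try/finally around the dispatch above: exitMainThread() must run even
when a handler throws, since otherwise the main-thread marker would stay set
and every subsequent enterMainThread() would trip its compareAndSet assertion.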

http://git-wip-us.apache.org/repos/asf/flink/blob/a2f3f317/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
index 448216c..db40f10 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -174,7 +174,7 @@ public class AkkaRpcService implements RpcService {
 	}
 
 	@Override
-	public <C extends RpcGateway> String getAddress(C selfGateway) {
+	public String getAddress(RpcGateway selfGateway) {
 		checkState(!stopped, "RpcService is stopped");
 
 		if (selfGateway instanceof AkkaGateway) {

http://git-wip-us.apache.org/repos/asf/flink/blob/a2f3f317/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java
new file mode 100644
index 0000000..b854143
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.util.Timeout;
+
+import org.apache.flink.runtime.akka.AkkaUtils;
+
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.RpcMethod;
+import org.apache.flink.runtime.rpc.RpcService;
+
+import org.junit.Test;
+
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertTrue;
+
+public class MainThreadValidationTest {
+
+	@Test
+	public void failIfNotInMainThread() {
+		// test if assertions are activated. The test only works if assertions are enabled.
+		try {
+			assert false;
+			// apparently they are not activated
+			return;
+		} catch (AssertionError ignored) {}
+
+		// actual test
+		AkkaRpcService akkaRpcService = new AkkaRpcService(
+				AkkaUtils.createDefaultActorSystem(),
+				new Timeout(10000, TimeUnit.MILLISECONDS));
+
+		try {
+			TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService);
+
+			// this works, because it is executed as an RPC call
+			testEndpoint.getSelf().someConcurrencyCriticalFunction();
+
+			// this fails, because it is executed directly
+			boolean exceptionThrown;
+			try {
+				testEndpoint.someConcurrencyCriticalFunction();
+				exceptionThrown = false;
+			}
+			catch (AssertionError e) {
+				exceptionThrown = true;
+			}
+			assertTrue("should fail with an assertion error", exceptionThrown);
+
+			akkaRpcService.stopServer(testEndpoint.getSelf());
+		}
+		finally {
+			akkaRpcService.stopService();
+		}
+	}
+
+	// ------------------------------------------------------------------------
+	//  test RPC endpoint
+	// ------------------------------------------------------------------------
+
+	interface TestGateway extends RpcGateway {
+
+		void someConcurrencyCriticalFunction();
+	}
+
+	@SuppressWarnings("unused")
+	public static class TestEndpoint extends RpcEndpoint<TestGateway> {
+
+		public TestEndpoint(RpcService rpcService) {
+			super(rpcService);
+		}
+
+		@RpcMethod
+		public void someConcurrencyCriticalFunction() {
+			validateRunsInMainThread();
+		}
+	}
+}
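
One caveat visible at the top of failIfNotInMainThread(): when assertions are
disabled, the method returns early and the test passes vacuously, so the
validation path is only exercised in runs started with assertions enabled, as
test runners typically are.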


[07/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/event_ingestion_processing_time.svg
----------------------------------------------------------------------
diff --git a/docs/fig/event_ingestion_processing_time.svg b/docs/fig/event_ingestion_processing_time.svg
new file mode 100644
index 0000000..fc80d91
--- /dev/null
+++ b/docs/fig/event_ingestion_processing_time.svg
@@ -0,0 +1,375 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   version="1.1"
+   width="444.25604"
+   height="209.83659"
+   id="svg2">
+  <defs
+     id="defs4" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     transform="translate(-152.87198,-427.44388)"
+     id="layer1">
+    <g
+       transform="translate(113.23391,306.36012)"
+       id="g2989">
+      [vector path data of docs/fig/event_ingestion_processing_time.svg; text
+       labels in this fragment: "Event Producer", "Message Queue", "Flink Data
+       Source", "Flink Window Operator"]
+      <path
+         d="m 117.46051,174.1934 49.70926,0 0,1.24718 -49.70926,0 z m 42.52622,-4.29482 8.42085,4.91372 -8.42085,4.9231 c -0.30007,0.16879 -0.68454,0.075 -0.85334,-0.22505 -0.17817,-0.30008 -0.075,-0.68455 0.22506,-0.85334 l 7.50187,-4.37922 0,1.0784 -7.50187,-4.37922 c -0.30007,-0.16879 -0.40323,-0.55326 -0.22506,-0.85334 0.1688,-0.30007 0.55327,-0.39385 0.85334,-0.22505 z"
+         id="path3075"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 117.46051,247.95552 49.70926,0 0,1.25656 -49.70926,0 z m 42.52622,-4.28544 8.42085,4.91372 -8.42085,4.91373 c -0.30007,0.17817 -0.68454,0.075 -0.85334,-0.22506 -0.17817,-0.2907 -0.075,-0.67517 0.22506,-0.85334 l 7.50187,-4.36983 0,1.07839 -7.50187,-4.37922 c -0.30007,-0.16879 -0.40323,-0.55326 -0.22506,-0.85333 0.1688,-0.30008 0.55327,-0.40323 0.85334,-0.22506 z"
+         id="path3077"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 117.88249,240.8475 52.54122,-48.79028 -0.84396,-0.91898 -52.5506,48.79028 z m 50.19688,-40.7539 2.83196,-9.33983 -9.518,2.13803 c -0.33758,0.075 -0.55326,0.40323 -0.47824,0.74081 0.075,0.33759 0.4126,0.55327 0.75018,0.47825 l 8.47712,-1.9036 -0.74081,-0.7877 -2.51313,8.30832 c -0.10315,0.33759 0.0844,0.68455 0.4126,0.77832 0.32821,0.10315 0.67517,-0.0844 0.77832,-0.4126 z"
+         id="path3079"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 252.17532,174.1934 49.71864,0 0,1.25656 -49.71864,0 z m 42.5356,-4.29482 8.42085,4.91372 -8.42085,4.93248 c -0.30007,0.16879 -0.67517,0.075 -0.86272,-0.22506 -0.16879,-0.30007 -0.075,-0.69392 0.22506,-0.86271 l 7.50187,-4.36984 0,1.06902 -7.50187,-4.36984 c -0.30007,-0.1688 -0.39385,-0.56264 -0.22506,-0.86272 0.18755,-0.30007 0.56265,-0.39385 0.86272,-0.22505 z"
+         id="path3081"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 252.17532,248.59318 49.71864,0 0,1.23781 -49.71864,0 z m 42.5356,-4.29482 8.42085,4.91372 -8.42085,4.91373 c -0.30007,0.18755 -0.67517,0.075 -0.86272,-0.22506 -0.16879,-0.30007 -0.075,-0.67517 0.22506,-0.84396 l 7.50187,-4.38859 0,1.08777 -7.50187,-4.36984 c -0.30007,-0.18754 -0.39385,-0.56264 -0.22506,-0.86271 0.18755,-0.30008 0.56265,-0.39385 0.86272,-0.22506 z"
+         id="path3083"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="189.89742"
+         y="177.54382"
+         id="text3085"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">partition 1</text>
+      <text
+         x="189.89742"
+         y="251.09189"
+         id="text3087"
+         xml:space="preserve"
+         style="font-size:4.95123339px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">partition 2</text>
+      <path
+         d="m 352.51282,174.1934 49.71864,0 0,1.25656 -49.71864,0 z m 42.5356,-4.29482 8.42084,4.91372 -8.42084,4.93248 c -0.30008,0.16879 -0.67517,0.075 -0.86272,-0.22506 -0.16879,-0.30007 -0.075,-0.69392 0.22506,-0.86271 l 7.50187,-4.36984 0,1.06902 -7.50187,-4.36984 c -0.30008,-0.1688 -0.39385,-0.56264 -0.22506,-0.86272 0.18755,-0.30007 0.56264,-0.39385 0.86272,-0.22505 z"
+         id="path3089"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 352.51282,248.59318 49.71864,0 0,1.23781 -49.71864,0 z m 42.5356,-4.29482 8.42084,4.91372 -8.42084,4.91373 c -0.30008,0.18755 -0.67517,0.075 -0.86272,-0.22506 -0.16879,-0.30007 -0.075,-0.67517 0.22506,-0.84396 l 7.50187,-4.38859 0,1.08777 -7.50187,-4.36984 c -0.30008,-0.18754 -0.39385,-0.56264 -0.22506,-0.86271 0.18755,-0.30008 0.56264,-0.39385 0.86272,-0.22506 z"
+         id="path3091"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 352.98169,177.68176 55.6076,58.62711 -0.90022,0.86272 -55.62636,-58.64586 z m 53.76964,50.46883 2.23181,9.48986 -9.35858,-2.73818 c -0.33759,-0.0938 -0.52513,-0.43136 -0.43136,-0.76894 0.0938,-0.33759 0.45011,-0.52513 0.7877,-0.43136 l 8.32707,2.43811 -0.7877,0.75019 -1.98799,-8.45836 c -0.075,-0.33759 0.13128,-0.67517 0.46887,-0.75019 0.33758,-0.0938 0.67516,0.13128 0.75018,0.46887 z"
+         id="path3093"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 352.98169,246.90526 52.5881,-54.81991 -0.90023,-0.86271 -52.60685,54.80115 z m 50.71263,-46.66162 2.28807,-9.48987 -9.39609,2.68192 c -0.31883,0.0938 -0.52513,0.45011 -0.43136,0.76894 0.0938,0.33759 0.45011,0.52513 0.7877,0.43136 l 8.34583,-2.38184 -0.7877,-0.75019 -2.0255,8.4396 c -0.0938,0.33759 0.11252,0.67517 0.45011,0.76894 0.33758,0.075 0.67517,-0.13128 0.76894,-0.46886 z"
+         id="path3095"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 40.566356,302.98173 c 0,-5.0075 4.069764,-9.07726 9.058507,-9.07726 5.007497,0 9.077261,4.06976 9.077261,9.07726 0,5.0075 -4.069764,9.05851 -9.077261,9.05851 -4.988743,0 -9.058507,-4.05101 -9.058507,-9.05851"
+         id="path3097"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 49.474825,303.61001 0,-7.67066"
+         id="path3099"
+         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 52.869421,306.0575 -3.394596,-2.45687"
+         id="path3101"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 417.38523,302.98173 c 0,-5.0075 4.08852,-9.07726 9.13353,-9.07726 5.06376,0 9.15228,4.06976 9.15228,9.07726 0,5.0075 -4.08852,9.05851 -9.15228,9.05851 -5.04501,0 -9.13353,-4.05101 -9.13353,-9.05851"
+         id="path3103"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 426.44374,303.61939 0,-7.67066"
+         id="path3105"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 429.83833,306.0575 -3.39459,-2.45687"
+         id="path3107"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 297.35533,302.98173 c 0,-5.0075 4.05101,-9.07726 9.05851,-9.07726 5.00749,0 9.0585,4.06976 9.0585,9.07726 0,5.0075 -4.05101,9.05851 -9.0585,9.05851 -5.0075,0 -9.05851,-4.05101 -9.05851,-9.05851"
+         id="path3109"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 306.2638,303.61939 0,-7.67066"
+         id="path3111"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 309.65839,306.0575 -3.39459,-2.45687"
+         id="path3113"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="67.701958"
+         y="308.1178"
+         id="text3115"
+         xml:space="preserve"
+         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event</text>
+      <text
+         x="69.502403"
+         y="318.62042"
+         id="text3117"
+         xml:space="preserve"
+         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Time</text>
+      <text
+         x="319.48459"
+         y="308.11505"
+         id="text3119"
+         xml:space="preserve"
+         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Ingestion</text>
+      <text
+         x="329.23703"
+         y="318.61768"
+         id="text3121"
+         xml:space="preserve"
+         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Time</text>
+      <text
+         x="443.61313"
+         y="309.50467"
+         id="text3123"
+         xml:space="preserve"
+         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Window</text>
+      <text
+         x="437.46158"
+         y="320.00726"
+         id="text3125"
+         xml:space="preserve"
+         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Processing</text>
+      <text
+         x="449.9147"
+         y="330.50989"
+         id="text3127"
+         xml:space="preserve"
+         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Time</text>
+      <path
+         d="m 307.35157,271.39886 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49
 438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.2
 5656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.4066 1.08777,0 0,1.25656 -0.46887,0 0.63766,-0.6189 0,0.76894 -1.25656,0 z m 2.34433,-1.4066 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.2
 5656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,
 0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313
 ,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 
 2.51313,0 0.93773,0 0,1.57539 -1.25656,0 0,-0.93773 0.61891,0.6189 -0.30008,0 0,-1.25656 z m 0.93773,2.8132 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.513
 13 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 
 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.72543 -0.78769,0 0,-1.25656 0.15004,0 -0.61891,0.6189 0,-1.08777 1.25656,0 z m -2.04425,1.72543 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m
  -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.
 25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2
 .49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1
 .25656,0 0,1.25656 z m -2.49437,0 -0.63766,0 0,-1.25656 0.63766,0 0,1.25656 z"
+         id="path3129"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 70.977057,294.514 12.753177,-25.4876 -1.115903,-0.56264 -12.753177,25.49697 z m 14.43172,-24.65302 c 0.618905,-1.22843 0.112529,-2.72881 -1.115903,-3.34771 -1.237808,-0.6189 -2.738182,-0.12191 -3.357086,1.1159 -0.618904,1.23781 -0.121905,2.73819 1.115903,3.35709 1.237809,0.6189 2.738182,0.11253 3.357086,-1.12528 z"
+         id="path3131"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 325.78741,293.19179 6.37659,-31.97671 -1.23781,-0.24382 -6.37659,31.97672 z m 8.21455,-31.60162 c 0.26256,-1.36909 -0.61891,-2.68192 -1.96924,-2.94448 -1.35034,-0.28132 -2.66317,0.60014 -2.94449,1.95048 -0.26256,1.36909 0.61891,2.68192 1.96924,2.94448 1.35034,0.28132 2.68192,-0.60014 2.94449,-1.95048 z"
+         id="path3133"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 440.43472,294.07326 -14.57238,-36.49659 1.16279,-0.46887 14.57238,36.49659 z m -16.31656,-35.80267 c -0.50638,-1.27532 0.11253,-2.73818 1.4066,-3.24456 1.27532,-0.52513 2.73818,0.11253 3.24456,1.38785 0.50637,1.27532 -0.11253,2.73818 -1.38785,3.24456 -1.29407,0.52513 -2.73818,-0.11253 -3.26331,-1.38785 z"
+         id="path3135"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+    </g>
+  </g>
+</svg>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/flink-on-emr.png
----------------------------------------------------------------------
diff --git a/docs/fig/flink-on-emr.png b/docs/fig/flink-on-emr.png
new file mode 100644
index 0000000..f71c004
Binary files /dev/null and b/docs/fig/flink-on-emr.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/gelly-example-graph.png
----------------------------------------------------------------------
diff --git a/docs/fig/gelly-example-graph.png b/docs/fig/gelly-example-graph.png
new file mode 100644
index 0000000..abef960
Binary files /dev/null and b/docs/fig/gelly-example-graph.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/gelly-filter.png
----------------------------------------------------------------------
diff --git a/docs/fig/gelly-filter.png b/docs/fig/gelly-filter.png
new file mode 100644
index 0000000..cb09744
Binary files /dev/null and b/docs/fig/gelly-filter.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/gelly-gsa-sssp1.png
----------------------------------------------------------------------
diff --git a/docs/fig/gelly-gsa-sssp1.png b/docs/fig/gelly-gsa-sssp1.png
new file mode 100644
index 0000000..1eeb1e6
Binary files /dev/null and b/docs/fig/gelly-gsa-sssp1.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/gelly-reduceOnEdges.png
----------------------------------------------------------------------
diff --git a/docs/fig/gelly-reduceOnEdges.png b/docs/fig/gelly-reduceOnEdges.png
new file mode 100644
index 0000000..ffb674d
Binary files /dev/null and b/docs/fig/gelly-reduceOnEdges.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/gelly-reduceOnNeighbors.png
----------------------------------------------------------------------
diff --git a/docs/fig/gelly-reduceOnNeighbors.png b/docs/fig/gelly-reduceOnNeighbors.png
new file mode 100644
index 0000000..63137b8
Binary files /dev/null and b/docs/fig/gelly-reduceOnNeighbors.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/gelly-union.png
----------------------------------------------------------------------
diff --git a/docs/fig/gelly-union.png b/docs/fig/gelly-union.png
new file mode 100644
index 0000000..b00f831
Binary files /dev/null and b/docs/fig/gelly-union.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/gelly-vc-sssp1.png
----------------------------------------------------------------------
diff --git a/docs/fig/gelly-vc-sssp1.png b/docs/fig/gelly-vc-sssp1.png
new file mode 100644
index 0000000..9497d98
Binary files /dev/null and b/docs/fig/gelly-vc-sssp1.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/iterations_delta_iterate_operator.png
----------------------------------------------------------------------
diff --git a/docs/fig/iterations_delta_iterate_operator.png b/docs/fig/iterations_delta_iterate_operator.png
new file mode 100644
index 0000000..470485a
Binary files /dev/null and b/docs/fig/iterations_delta_iterate_operator.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/iterations_delta_iterate_operator_example.png
----------------------------------------------------------------------
diff --git a/docs/fig/iterations_delta_iterate_operator_example.png b/docs/fig/iterations_delta_iterate_operator_example.png
new file mode 100644
index 0000000..15f2b54
Binary files /dev/null and b/docs/fig/iterations_delta_iterate_operator_example.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/iterations_iterate_operator.png
----------------------------------------------------------------------
diff --git a/docs/fig/iterations_iterate_operator.png b/docs/fig/iterations_iterate_operator.png
new file mode 100644
index 0000000..aaf4158
Binary files /dev/null and b/docs/fig/iterations_iterate_operator.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/iterations_iterate_operator_example.png
----------------------------------------------------------------------
diff --git a/docs/fig/iterations_iterate_operator_example.png b/docs/fig/iterations_iterate_operator_example.png
new file mode 100644
index 0000000..be4841c
Binary files /dev/null and b/docs/fig/iterations_iterate_operator_example.png differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/iterations_supersteps.png
----------------------------------------------------------------------
diff --git a/docs/fig/iterations_supersteps.png b/docs/fig/iterations_supersteps.png
new file mode 100644
index 0000000..331dbc7
Binary files /dev/null and b/docs/fig/iterations_supersteps.png differ


[68/89] [abbrv] flink git commit: [FLINK-4273] Modify JobClient to attach to running jobs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerLike.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerLike.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerLike.scala
index 9640fcd..df4f95a 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerLike.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerLike.scala
@@ -26,7 +26,7 @@ import org.apache.flink.runtime.execution.ExecutionState
 import org.apache.flink.runtime.jobgraph.JobStatus
 import org.apache.flink.runtime.jobmanager.JobManager
 import org.apache.flink.runtime.messages.ExecutionGraphMessages.JobStatusChanged
-import org.apache.flink.runtime.messages.JobManagerMessages.GrantLeadership
+import org.apache.flink.runtime.messages.JobManagerMessages.{GrantLeadership, RegisterJobClient}
 import org.apache.flink.runtime.messages.Messages.{Acknowledge, Disconnect}
 import org.apache.flink.runtime.messages.RegistrationMessages.RegisterTaskManager
 import org.apache.flink.runtime.messages.TaskManagerMessages.Heartbeat
@@ -67,6 +67,8 @@ trait TestingJobManagerLike extends FlinkActor {
       override def compare(x: (Int, ActorRef), y: (Int, ActorRef)): Int = y._1 - x._1
     })
 
+  val waitForClient = scala.collection.mutable.HashSet[ActorRef]()
+
   val waitForShutdown = scala.collection.mutable.HashSet[ActorRef]()
 
   var disconnectDisabled = false
@@ -328,6 +330,14 @@ trait TestingJobManagerLike extends FlinkActor {
 
       waitForLeader.clear()
 
+    case NotifyWhenClientConnects =>
+      waitForClient += sender()
+      sender() ! true
+
+    case msg: RegisterJobClient =>
+      super.handleMessage(msg)
+      waitForClient.foreach(_ ! true)
+
     case NotifyWhenAtLeastNumTaskManagerAreRegistered(numRegisteredTaskManager) =>
       if (that.instanceManager.getNumberOfRegisteredTaskManagers >= numRegisteredTaskManager) {
         // there are already at least numRegisteredTaskManager registered --> send Acknowledge
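
The `waitForClient` protocol above acknowledges twice: once when the notification request itself is registered, and once when a `RegisterJobClient` message actually arrives. Condensed into Java test code, the handshake looks roughly as follows; this is a sketch mirroring `JobRetrievalITCase` later in this thread, with `actorSystem` and `cluster` assumed to be a test actor system and a running mini cluster:

~~~java
// Sketch: the two-step handshake with the TestingJobManager.
JavaTestKit testkit = new JavaTestKit(actorSystem);
ActorRef jm = cluster.getJobManagersAsJava().get(0);

// register interest; the TestingJobManager replies `true` right away
jm.tell(TestingJobManagerMessages.getNotifyWhenClientConnects(), testkit.getRef());
testkit.expectMsgEquals(true);

// ... trigger a client attach here (e.g. ClusterClient#retrieveJob) ...

// a second `true` arrives once a RegisterJobClient message reaches the JobManager
testkit.expectMsgEquals(true);
~~~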

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerMessages.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerMessages.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerMessages.scala
index a411c8b..a88ed43 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerMessages.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/testingUtils/TestingJobManagerMessages.scala
@@ -83,6 +83,11 @@ object TestingJobManagerMessages {
   case object NotifyWhenLeader
 
   /**
+    * Notifies the sender when a new client registers for a job at the [[TestingJobManager]]
+    */
+  case object NotifyWhenClientConnects
+
+  /**
    * Registers to be notified by an [[org.apache.flink.runtime.messages.Messages.Acknowledge]]
    * message when at least numRegisteredTaskManager have registered at the JobManager.
    *
@@ -111,6 +116,7 @@ object TestingJobManagerMessages {
   case class ResponseSavepoint(savepoint: Savepoint)
 
   def getNotifyWhenLeader(): AnyRef = NotifyWhenLeader
+  def getNotifyWhenClientConnects(): AnyRef = NotifyWhenClientConnects
   def getDisablePostStop(): AnyRef = DisablePostStop
 
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorTest.java
index 073164c0..2adf7eb 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorTest.java
@@ -25,17 +25,17 @@ import akka.actor.Props;
 import akka.pattern.Patterns;
 import akka.testkit.JavaTestKit;
 import akka.util.Timeout;
-import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.runtime.akka.AkkaUtils;
 import org.apache.flink.runtime.akka.FlinkUntypedActor;
 import org.apache.flink.runtime.jobgraph.JobGraph;
 import org.apache.flink.runtime.leaderelection.TestingLeaderRetrievalService;
 import org.apache.flink.runtime.messages.JobClientMessages;
-import org.apache.flink.runtime.messages.JobManagerMessages;
+import org.apache.flink.runtime.messages.JobClientMessages.AttachToJobAndWait;
 import org.apache.flink.runtime.messages.Messages;
 import org.apache.flink.util.TestLogger;
 import org.junit.AfterClass;
+import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import scala.concurrent.Await;
@@ -45,6 +45,8 @@ import scala.concurrent.duration.FiniteDuration;
 import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 
+import static org.apache.flink.runtime.messages.JobManagerMessages.*;
+
 public class JobClientActorTest extends TestLogger {
 
 	private static ActorSystem system;
@@ -62,8 +64,8 @@ public class JobClientActorTest extends TestLogger {
 	}
 
 	/** Tests that a {@link JobClientActorSubmissionTimeoutException} is thrown when the job cannot
-	 * be submitted by the JobClientActor. This is here the case, because the started JobManager
-	 * never replies to a SubmitJob message.
+	 * be submitted by the JobSubmissionClientActor. This is the case here because the started
+	 * JobManager never replies to a {@link SubmitJob} message.
 	 *
 	 * @throws Exception
 	 */
@@ -84,7 +86,7 @@ public class JobClientActorTest extends TestLogger {
 			leaderSessionID
 		);
 
-		Props jobClientActorProps = JobClientActor.createJobClientActorProps(
+		Props jobClientActorProps = JobSubmissionClientActor.createActorProps(
 			testingLeaderRetrievalService,
 			jobClientActorTimeout,
 			false);
@@ -100,19 +102,56 @@ public class JobClientActorTest extends TestLogger {
 		Await.result(jobExecutionResult, timeout);
 	}
 
+
+	/** Tests that a {@link JobClientActorRegistrationTimeoutException} is thrown when the registration
+	 * cannot be performed at the JobManager by the JobAttachmentClientActor. This is the case here
+	 * because the started JobManager never replies to a {@link RegisterJobClient} message.
+	 */
+	@Test(expected=JobClientActorRegistrationTimeoutException.class)
+	public void testRegistrationTimeout() throws Exception {
+		FiniteDuration jobClientActorTimeout = new FiniteDuration(5, TimeUnit.SECONDS);
+		FiniteDuration timeout = jobClientActorTimeout.$times(2);
+
+		UUID leaderSessionID = UUID.randomUUID();
+
+		ActorRef jobManager = system.actorOf(
+			Props.create(
+				PlainActor.class,
+				leaderSessionID));
+
+		TestingLeaderRetrievalService testingLeaderRetrievalService = new TestingLeaderRetrievalService(
+			jobManager.path().toString(),
+			leaderSessionID
+		);
+
+		Props jobClientActorProps = JobAttachmentClientActor.createActorProps(
+			testingLeaderRetrievalService,
+			jobClientActorTimeout,
+			false);
+
+		ActorRef jobClientActor = system.actorOf(jobClientActorProps);
+
+		Future<Object> jobExecutionResult = Patterns.ask(
+			jobClientActor,
+			new JobClientMessages.AttachToJobAndWait(testJobGraph.getJobID()),
+			new Timeout(timeout));
+
+		Await.result(jobExecutionResult, timeout);
+	}
+
 	/** Tests that a {@link org.apache.flink.runtime.client.JobClientActorConnectionTimeoutException}
-	 * is thrown when the JobClientActor wants to submit a job but has not connected to a JobManager.
+	 * is thrown when the JobSubmissionClientActor wants to submit a job but has not connected to a JobManager.
 	 *
 	 * @throws Exception
 	 */
 	@Test(expected=JobClientActorConnectionTimeoutException.class)
-	public void testConnectionTimeoutWithoutJobManager() throws Exception {
+	public void testConnectionTimeoutWithoutJobManagerForSubmission() throws Exception {
 		FiniteDuration jobClientActorTimeout = new FiniteDuration(5, TimeUnit.SECONDS);
 		FiniteDuration timeout = jobClientActorTimeout.$times(2);
 
 		TestingLeaderRetrievalService testingLeaderRetrievalService = new TestingLeaderRetrievalService();
 
-		Props jobClientActorProps = JobClientActor.createJobClientActorProps(
+		Props jobClientActorProps = JobSubmissionClientActor.createActorProps(
 			testingLeaderRetrievalService,
 			jobClientActorTimeout,
 			false);
@@ -128,6 +167,32 @@ public class JobClientActorTest extends TestLogger {
 	}
 
 	/** Tests that a {@link org.apache.flink.runtime.client.JobClientActorConnectionTimeoutException}
+	 * is thrown when the JobAttachmentClientActor tries to attach to a job at the JobManager
+	 * but has not connected to a JobManager.
+	 */
+	@Test(expected=JobClientActorConnectionTimeoutException.class)
+	public void testConnectionTimeoutWithoutJobManagerForRegistration() throws Exception {
+		FiniteDuration jobClientActorTimeout = new FiniteDuration(5, TimeUnit.SECONDS);
+		FiniteDuration timeout = jobClientActorTimeout.$times(2);
+
+		TestingLeaderRetrievalService testingLeaderRetrievalService = new TestingLeaderRetrievalService();
+
+		Props jobClientActorProps = JobAttachmentClientActor.createActorProps(
+			testingLeaderRetrievalService,
+			jobClientActorTimeout,
+			false);
+
+		ActorRef jobClientActor = system.actorOf(jobClientActorProps);
+
+		Future<Object> jobExecutionResult = Patterns.ask(
+			jobClientActor,
+			new JobClientMessages.AttachToJobAndWait(testJobGraph.getJobID()),
+			new Timeout(timeout));
+
+		Await.result(jobExecutionResult, timeout);
+	}
+
+	/** Tests that a {@link org.apache.flink.runtime.client.JobClientActorConnectionTimeoutException}
 	 * is thrown after a successful job submission if the JobManager dies.
 	 *
 	 * @throws Exception
@@ -149,7 +214,7 @@ public class JobClientActorTest extends TestLogger {
 			leaderSessionID
 		);
 
-		Props jobClientActorProps = JobClientActor.createJobClientActorProps(
+		Props jobClientActorProps = JobSubmissionClientActor.createActorProps(
 			testingLeaderRetrievalService,
 			jobClientActorTimeout,
 			false);
@@ -170,6 +235,91 @@ public class JobClientActorTest extends TestLogger {
 		Await.result(jobExecutionResult, timeout);
 	}
 
+	/** Tests that a {@link JobClientActorConnectionTimeoutException}
+	 * is thrown after a successful registration of the client at the JobManager if the
+	 * JobManager subsequently dies.
+	 */
+	@Test(expected=JobClientActorConnectionTimeoutException.class)
+	public void testConnectionTimeoutAfterJobRegistration() throws Exception {
+		FiniteDuration jobClientActorTimeout = new FiniteDuration(5, TimeUnit.SECONDS);
+		FiniteDuration timeout = jobClientActorTimeout.$times(2);
+
+		UUID leaderSessionID = UUID.randomUUID();
+
+		ActorRef jobManager = system.actorOf(
+			Props.create(
+				JobAcceptingActor.class,
+				leaderSessionID));
+
+		TestingLeaderRetrievalService testingLeaderRetrievalService = new TestingLeaderRetrievalService(
+			jobManager.path().toString(),
+			leaderSessionID
+		);
+
+		Props jobClientActorProps = JobAttachmentClientActor.createActorProps(
+			testingLeaderRetrievalService,
+			jobClientActorTimeout,
+			false);
+
+		ActorRef jobClientActor = system.actorOf(jobClientActorProps);
+
+		Future<Object> jobExecutionResult = Patterns.ask(
+			jobClientActor,
+			new AttachToJobAndWait(testJobGraph.getJobID()),
+			new Timeout(timeout));
+
+		Future<Object> waitFuture = Patterns.ask(jobManager, new RegisterTest(), new Timeout(timeout));
+
+		Await.result(waitFuture, timeout);
+
+		jobManager.tell(PoisonPill.getInstance(), ActorRef.noSender());
+
+		Await.result(jobExecutionResult, timeout);
+	}
+
+
+	/** Tests that JobClient throws an Exception if the JobClientActor dies and can no longer
+	 * answer the {@link akka.actor.Identify} message.
+	 */
+	@Test
+	public void testGuaranteedAnswerIfJobClientDies() throws Exception {
+		FiniteDuration timeout = new FiniteDuration(2, TimeUnit.SECONDS);
+
+		UUID leaderSessionID = UUID.randomUUID();
+
+		ActorRef jobManager = system.actorOf(
+			Props.create(
+				JobAcceptingActor.class,
+				leaderSessionID));
+
+		TestingLeaderRetrievalService testingLeaderRetrievalService = new TestingLeaderRetrievalService(
+			jobManager.path().toString(),
+			leaderSessionID
+		);
+
+		JobListeningContext jobListeningContext =
+			JobClient.submitJob(
+				system,
+				testingLeaderRetrievalService,
+				testJobGraph,
+				timeout,
+				false,
+				getClass().getClassLoader());
+
+		Future<Object> waitFuture = Patterns.ask(jobManager, new RegisterTest(), new Timeout(timeout));
+		Await.result(waitFuture, timeout);
+
+		// kill the job client actor which has been registered at the JobManager
+		jobListeningContext.getJobClientActor().tell(PoisonPill.getInstance(), ActorRef.noSender());
+
+		try {
+			// should not block but return an error
+			JobClient.awaitJobResult(jobListeningContext);
+			Assert.fail();
+		} catch (JobExecutionException e) {
+			// this is what we want
+		}
+	}
+
 	public static class PlainActor extends FlinkUntypedActor {
 
 		private final UUID leaderSessionID;
@@ -180,7 +330,6 @@ public class JobClientActorTest extends TestLogger {
 
 		@Override
 		protected void handleMessage(Object message) throws Exception {
-
 		}
 
 		@Override
@@ -200,17 +349,29 @@ public class JobClientActorTest extends TestLogger {
 
 		@Override
 		protected void handleMessage(Object message) throws Exception {
-			if (message instanceof JobManagerMessages.SubmitJob) {
+			if (message instanceof SubmitJob) {
 				getSender().tell(
-					new JobManagerMessages.JobSubmitSuccess(((JobManagerMessages.SubmitJob) message).jobGraph().getJobID()),
+					new JobSubmitSuccess(((SubmitJob) message).jobGraph().getJobID()),
 					getSelf());
 
 				jobAccepted = true;
 
-				if(testFuture != ActorRef.noSender()) {
+				if (testFuture != ActorRef.noSender()) {
 					testFuture.tell(Messages.getAcknowledge(), getSelf());
 				}
-			} else if (message instanceof RegisterTest) {
+			}
+			else if (message instanceof RegisterJobClient) {
+				getSender().tell(
+					new RegisterJobClientSuccess(((RegisterJobClient) message).jobID()),
+					getSelf());
+
+				jobAccepted = true;
+
+				if (testFuture != ActorRef.noSender()) {
+					testFuture.tell(Messages.getAcknowledge(), getSelf());
+				}
+			}
+			else if (message instanceof RegisterTest) {
 				testFuture = getSender();
 
 				if (jobAccepted) {
@@ -226,4 +387,5 @@ public class JobClientActorTest extends TestLogger {
 	}
 
 	public static class RegisterTest{}
+
 }
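
The decoupled submit/await flow that `testGuaranteedAnswerIfJobClientDies` exercises reduces to two calls on `JobClient`. The following is an illustrative sketch, not part of the commit; the helper class name is hypothetical, and all parameters are assumed to be prepared exactly as in the test above:

~~~java
import akka.actor.ActorSystem;
import org.apache.flink.runtime.client.JobClient;
import org.apache.flink.runtime.client.JobListeningContext;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.leaderelection.TestingLeaderRetrievalService;
import scala.concurrent.duration.FiniteDuration;

// Hypothetical helper (illustration only).
public class DecoupledSubmitSketch {
	public static void submitThenAwait(
			ActorSystem system,
			TestingLeaderRetrievalService leaderService,
			JobGraph jobGraph,
			FiniteDuration timeout) throws Exception {

		// step 1: submit the job; returns a context holding the client actor
		JobListeningContext ctx = JobClient.submitJob(
			system, leaderService, jobGraph, timeout,
			false, // no sysout logging, as in the test
			DecoupledSubmitSketch.class.getClassLoader());

		// step 2: block for the result; throws a JobExecutionException if the
		// JobClientActor dies in between, which is what the test asserts
		JobClient.awaitJobResult(ctx);
	}
}
~~~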

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphsStoreITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphsStoreITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphsStoreITCase.java
index c71bd35..426dfba 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphsStoreITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/ZooKeeperSubmittedJobGraphsStoreITCase.java
@@ -286,7 +286,6 @@ public class ZooKeeperSubmittedJobGraphsStoreITCase extends TestLogger {
 		JobInfo expectedJobInfo = expected.getJobInfo();
 		JobInfo actualJobInfo = actual.getJobInfo();
 
-		assertEquals(expectedJobInfo.listeningBehaviour(), actualJobInfo.listeningBehaviour());
-		assertEquals(expectedJobInfo.start(), actualJobInfo.start());
+		assertEquals(expectedJobInfo, actualJobInfo);
 	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/259a3a55/flink-tests/src/test/java/org/apache/flink/test/clients/examples/JobRetrievalITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/clients/examples/JobRetrievalITCase.java b/flink-tests/src/test/java/org/apache/flink/test/clients/examples/JobRetrievalITCase.java
new file mode 100644
index 0000000..db17ee8
--- /dev/null
+++ b/flink-tests/src/test/java/org/apache/flink/test/clients/examples/JobRetrievalITCase.java
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.test.clients.examples;
+
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import akka.testkit.JavaTestKit;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.client.program.ClusterClient;
+import org.apache.flink.client.program.StandaloneClusterClient;
+import org.apache.flink.runtime.client.JobRetrievalException;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.client.JobExecutionException;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable;
+import org.apache.flink.runtime.minicluster.FlinkMiniCluster;
+import org.apache.flink.runtime.testingUtils.TestingJobManagerMessages;
+import org.apache.flink.test.util.ForkableFlinkMiniCluster;
+import org.apache.flink.util.TestLogger;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import scala.collection.Seq;
+
+import java.util.concurrent.Semaphore;
+
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.fail;
+
+
+/**
+ * Tests retrieval of a job from a running Flink cluster
+ */
+public class JobRetrievalITCase extends TestLogger {
+
+	private static final Semaphore lock = new Semaphore(1);
+
+	private static FlinkMiniCluster cluster;
+
+	@BeforeClass
+	public static void before() {
+		cluster = new ForkableFlinkMiniCluster(new Configuration(), false);
+		cluster.start();
+	}
+
+	@AfterClass
+	public static void after() {
+		cluster.stop();
+		cluster = null;
+	}
+
+	@Test
+	public void testJobRetrieval() throws Exception {
+		final JobID jobID = new JobID();
+
+		final JobVertex imalock = new JobVertex("imalock");
+		imalock.setInvokableClass(SemaphoreInvokable.class);
+
+		final JobGraph jobGraph = new JobGraph(jobID, "testjob", imalock);
+
+		final ClusterClient client = new StandaloneClusterClient(cluster.configuration());
+
+		// acquire the lock to make sure that the job cannot complete until the job client
+		// has been attached in resumingThread
+		lock.acquire();
+		client.runDetached(jobGraph, JobRetrievalITCase.class.getClassLoader());
+
+		final Thread resumingThread = new Thread(new Runnable() {
+			@Override
+			public void run() {
+				try {
+					assertNotNull(client.retrieveJob(jobID));
+				} catch (JobExecutionException e) {
+					fail(e.getMessage());
+				}
+			}
+		});
+
+		final Seq<ActorSystem> actorSystemSeq = cluster.jobManagerActorSystems().get();
+		final ActorSystem actorSystem = actorSystemSeq.last();
+		JavaTestKit testkit = new JavaTestKit(actorSystem);
+
+		final ActorRef jm = cluster.getJobManagersAsJava().get(0);
+		// wait until client connects
+		jm.tell(TestingJobManagerMessages.getNotifyWhenClientConnects(), testkit.getRef());
+		// confirm registration
+		testkit.expectMsgEquals(true);
+
+		// kick off resuming
+		resumingThread.start();
+
+		// wait for client to connect
+		testkit.expectMsgEquals(true);
+		// client has connected, we can release the lock
+		lock.release();
+
+		resumingThread.join();
+	}
+
+	@Test
+	public void testNonExistingJobRetrieval() throws Exception {
+		final JobID jobID = new JobID();
+		ClusterClient client = new StandaloneClusterClient(cluster.configuration());
+
+		try {
+			client.retrieveJob(jobID);
+			fail();
+		} catch (JobRetrievalException e) {
+			// this is what we want
+		}
+	}
+
+
+	public static class SemaphoreInvokable extends AbstractInvokable {
+
+		@Override
+		public void invoke() throws Exception {
+			lock.acquire();
+		}
+	}
+
+}
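
For reference, the attach path that `JobRetrievalITCase` exercises boils down to a handful of `ClusterClient` calls. A minimal sketch, not part of the commit, assuming a `Configuration` that points at a running cluster (the helper class name is hypothetical):

~~~java
import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.client.program.StandaloneClusterClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.client.JobRetrievalException;
import org.apache.flink.runtime.jobgraph.JobGraph;

// Hypothetical helper (illustration only).
public class AttachSketch {
	public static void submitAndResume(Configuration config, JobGraph jobGraph) throws Exception {
		ClusterClient client = new StandaloneClusterClient(config);

		// detached submission: returns as soon as the job has been submitted
		client.runDetached(jobGraph, AttachSketch.class.getClassLoader());

		JobID jobID = jobGraph.getJobID();
		try {
			// re-attach (possibly from another client) and block until the job finishes
			client.retrieveJob(jobID);
		} catch (JobRetrievalException e) {
			// thrown when no job with this ID is known to the JobManager
			// (cf. testNonExistingJobRetrieval above)
		}
	}
}
~~~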


[49/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/examples.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/examples.md b/docs/apis/batch/examples.md
deleted file mode 100644
index 3523185..0000000
--- a/docs/apis/batch/examples.md
+++ /dev/null
@@ -1,521 +0,0 @@
----
-title:  "Bundled Examples"
-
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-pos: 5
-sub-nav-title: Examples
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The following example programs showcase different applications of Flink,
-from simple word counting to graph algorithms. The code samples illustrate the
-use of [Flink's API](index.html).
-
-The full source code of the following and more examples can be found in the __flink-examples-batch__
-or __flink-examples-streaming__ module of the Flink source repository.
-
-* This will be replaced by the TOC
-{:toc}
-
-
-## Running an example
-
-To run a Flink example, you need a running Flink instance. The "Setup" tab in the navigation describes various ways of starting Flink.
-
-The easiest way is to run the `./bin/start-local.sh` script, which starts a JobManager locally.
-
-Each binary release of Flink contains an `examples` directory with jar files for each of the examples on this page.
-
-To run the WordCount example, issue the following command:
-
-~~~bash
-./bin/flink run ./examples/batch/WordCount.jar
-~~~
-
-The other examples can be started in a similar way.
-
-Note that many examples run without passing any arguments, by using built-in data. To run WordCount with real data, pass the path to the data:
-
-~~~bash
-./bin/flink run ./examples/batch/WordCount.jar --input /path/to/some/text/data --output /path/to/result
-~~~
-
-Note that non-local file systems require a scheme prefix, such as `hdfs://`.
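-
-For example, reading the input from and writing the result to HDFS might look like this (the paths are illustrative):
-
-~~~bash
-./bin/flink run ./examples/batch/WordCount.jar --input hdfs:///path/to/text/data --output hdfs:///path/to/result
-~~~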
-
-
-## Word Count
-WordCount is the "Hello World" of Big Data processing systems. It computes the frequency of words in a text collection. The algorithm works in two steps: First, the texts are split into individual words. Second, the words are grouped and counted.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-DataSet<String> text = env.readTextFile("/path/to/file");
-
-DataSet<Tuple2<String, Integer>> counts =
-        // split up the lines in pairs (2-tuples) containing: (word,1)
-        text.flatMap(new Tokenizer())
-        // group by the tuple field "0" and sum up tuple field "1"
-        .groupBy(0)
-        .sum(1);
-
-counts.writeAsCsv(outputPath, "\n", " ");
-
-// User-defined functions
-public static class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
-
-    @Override
-    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
-        // normalize and split the line
-        String[] tokens = value.toLowerCase().split("\\W+");
-
-        // emit the pairs
-        for (String token : tokens) {
-            if (token.length() > 0) {
-                out.collect(new Tuple2<String, Integer>(token, 1));
-            }   
-        }
-    }
-}
-~~~
-
-The {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/wordcount/WordCount.java  "WordCount example" %} implements the above described algorithm with input parameters: `--input <path> --output <path>`. As test data, any text file will do.
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// get input data
-val text = env.readTextFile("/path/to/file")
-
-val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
-  .map { (_, 1) }
-  .groupBy(0)
-  .sum(1)
-
-counts.writeAsCsv(outputPath, "\n", " ")
-~~~
-
-The {% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/wordcount/WordCount.scala  "WordCount example" %} implements the above described algorithm with input parameters: `--input <path> --output <path>`. As test data, any text file will do.
-
-
-</div>
-</div>
-
-## Page Rank
-
-The PageRank algorithm computes the "importance" of pages in a graph defined by links, which point from one page to another. It is an iterative graph algorithm, which means that it repeatedly applies the same computation. In each iteration, each page distributes its current rank over all its neighbors, and computes its new rank as a taxed sum of the ranks it received from its neighbors. The PageRank algorithm was popularized by the Google search engine, which uses the importance of webpages to rank the results of search queries.
-
-In this simple example, PageRank is implemented with a [bulk iteration](iterations.html) and a fixed number of iterations.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// read the pages and initial ranks by parsing a CSV file
-DataSet<Tuple2<Long, Double>> pagesWithRanks = env.readCsvFile(pagesInputPath)
-						   .types(Long.class, Double.class);
-
-// the links are encoded as an adjacency list: (page-id, Array(neighbor-ids))
-DataSet<Tuple2<Long, Long[]>> pageLinkLists = getLinksDataSet(env);
-
-// set iterative data set
-IterativeDataSet<Tuple2<Long, Double>> iteration = pagesWithRanks.iterate(maxIterations);
-
-DataSet<Tuple2<Long, Double>> newRanks = iteration
-        // join pages with outgoing edges and distribute rank
-        .join(pageLinkLists).where(0).equalTo(0).flatMap(new JoinVertexWithEdgesMatch())
-        // collect and sum ranks
-        .groupBy(0).sum(1)
-        // apply dampening factor
-        .map(new Dampener(DAMPENING_FACTOR, numPages));
-
-DataSet<Tuple2<Long, Double>> finalPageRanks = iteration.closeWith(
-        newRanks,
-        newRanks.join(iteration).where(0).equalTo(0)
-        // termination condition
-        .filter(new EpsilonFilter()));
-
-finalPageRanks.writeAsCsv(outputPath, "\n", " ");
-
-// User-defined functions
-
-public static final class JoinVertexWithEdgesMatch
-                    implements FlatJoinFunction<Tuple2<Long, Double>, Tuple2<Long, Long[]>,
-                                            Tuple2<Long, Double>> {
-
-    @Override
-    public void join(Tuple2<Long, Double> page, Tuple2<Long, Long[]> adj,
-                        Collector<Tuple2<Long, Double>> out) {
-        Long[] neighbors = adj.f1;
-        double rank = page.f1;
-        double rankToDistribute = rank / ((double) neighbors.length);
-
-        for (int i = 0; i < neighbors.length; i++) {
-            out.collect(new Tuple2<Long, Double>(neighbors[i], rankToDistribute));
-        }
-    }
-}
-
-public static final class Dampener implements MapFunction<Tuple2<Long,Double>, Tuple2<Long,Double>> {
-    private final double dampening, randomJump;
-
-    public Dampener(double dampening, double numVertices) {
-        this.dampening = dampening;
-        this.randomJump = (1 - dampening) / numVertices;
-    }
-
-    @Override
-    public Tuple2<Long, Double> map(Tuple2<Long, Double> value) {
-        value.f1 = (value.f1 * dampening) + randomJump;
-        return value;
-    }
-}
-
-public static final class EpsilonFilter
-                implements FilterFunction<Tuple2<Tuple2<Long, Double>, Tuple2<Long, Double>>> {
-
-    @Override
-    public boolean filter(Tuple2<Tuple2<Long, Double>, Tuple2<Long, Double>> value) {
-        return Math.abs(value.f0.f1 - value.f1.f1) > EPSILON;
-    }
-}
-~~~
-
-The {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/graph/PageRank.java "PageRank program" %} implements the above example.
-It requires the following parameters to run: `--pages <path> --links <path> --output <path> --numPages <n> --iterations <n>`.
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-// User-defined types
-case class Link(sourceId: Long, targetId: Long)
-case class Page(pageId: Long, rank: Double)
-case class AdjacencyList(sourceId: Long, targetIds: Array[Long])
-
-// set up execution environment
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// read the pages and initial ranks by parsing a CSV file
-val pages = env.readCsvFile[Page](pagesInputPath)
-
-// the links are encoded as an adjacency list: (page-id, Array(neighbor-ids))
-val links = env.readCsvFile[Link](linksInputPath)
-
-// assign initial ranks to pages
-val pagesWithRanks = pages.map(p => Page(p.pageId, 1.0 / numPages))
-
-// build adjacency list from link input
-val adjacencyLists = links
-  // initialize lists
-  .map(e => AdjacencyList(e.sourceId, Array(e.targetId)))
-  // concatenate lists
-  .groupBy("sourceId").reduce {
-  (l1, l2) => AdjacencyList(l1.sourceId, l1.targetIds ++ l2.targetIds)
-  }
-
-// start iteration
-val finalRanks = pagesWithRanks.iterateWithTermination(maxIterations) {
-  currentRanks =>
-    val newRanks = currentRanks
-      // distribute ranks to target pages
-      .join(adjacencyLists).where("pageId").equalTo("sourceId") {
-        (page, adjacent, out: Collector[Page]) =>
-        for (targetId <- adjacent.targetIds) {
-          out.collect(Page(targetId, page.rank / adjacent.targetIds.length))
-        }
-      }
-      // collect ranks and sum them up
-      .groupBy("pageId").aggregate(SUM, "rank")
-      // apply dampening factor
-      .map { p =>
-        Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / numPages))
-      }
-
-    // terminate if no rank update was significant
-    val termination = currentRanks.join(newRanks).where("pageId").equalTo("pageId") {
-      (current, next, out: Collector[Int]) =>
-        // check for significant update
-        if (math.abs(current.rank - next.rank) > EPSILON) out.collect(1)
-    }
-
-    (newRanks, termination)
-}
-
-val result = finalRanks
-
-// emit result
-result.writeAsCsv(outputPath, "\n", " ")
-~~~
-
-The {% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/graph/PageRankBasic.scala "PageRank program" %} implements the above example.
-It requires the following parameters to run: `--pages <path> --links <path> --output <path> --numPages <n> --iterations <n>`.
-</div>
-</div>
-
-Input files are plain text files and must be formatted as follows:
-- Pages are represented by a (long) ID, separated by new-line characters.
-    * For example `"1\n2\n12\n42\n63\n"` gives five pages with IDs 1, 2, 12, 42, and 63.
-- Links are represented as pairs of page IDs which are separated by space characters. Links are separated by new-line characters:
-    * For example `"1 2\n2 12\n1 12\n42 63\n"` gives four (directed) links (1)->(2), (2)->(12), (1)->(12), and (42)->(63).
-
-For this simple implementation it is required that each page has at least one incoming and one outgoing link (a page can point to itself).
-
-## Connected Components
-
-The Connected Components algorithm identifies the connected parts of a larger graph by assigning all vertices in the same connected part the same component ID. Similar to PageRank, Connected Components is an iterative algorithm. In each step, each vertex propagates its current component ID to all its neighbors. A vertex accepts the component ID of a neighbor if it is smaller than its own component ID.
-
-This implementation uses a [delta iteration](iterations.html): Vertices that have not changed their component ID do not participate in the next step. This yields much better performance, because the later iterations typically deal only with a few outlier vertices.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// read vertex and edge data
-DataSet<Long> vertices = getVertexDataSet(env);
-DataSet<Tuple2<Long, Long>> edges = getEdgeDataSet(env).flatMap(new UndirectEdge());
-
-// assign the initial component IDs (equal to the vertex ID)
-DataSet<Tuple2<Long, Long>> verticesWithInitialId = vertices.map(new DuplicateValue<Long>());
-
-// open a delta iteration
-DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
-        verticesWithInitialId.iterateDelta(verticesWithInitialId, maxIterations, 0);
-
-// apply the step logic:
-DataSet<Tuple2<Long, Long>> changes = iteration.getWorkset()
-        // join with the edges
-        .join(edges).where(0).equalTo(0).with(new NeighborWithComponentIDJoin())
-        // select the minimum neighbor component ID
-        .groupBy(0).aggregate(Aggregations.MIN, 1)
-        // update if the component ID of the candidate is smaller
-        .join(iteration.getSolutionSet()).where(0).equalTo(0)
-        .flatMap(new ComponentIdFilter());
-
-// close the delta iteration (delta and new workset are identical)
-DataSet<Tuple2<Long, Long>> result = iteration.closeWith(changes, changes);
-
-// emit result
-result.writeAsCsv(outputPath, "\n", " ");
-
-// User-defined functions
-
-public static final class DuplicateValue<T> implements MapFunction<T, Tuple2<T, T>> {
-
-    @Override
-    public Tuple2<T, T> map(T vertex) {
-        return new Tuple2<T, T>(vertex, vertex);
-    }
-}
-
-public static final class UndirectEdge
-                    implements FlatMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>> {
-    Tuple2<Long, Long> invertedEdge = new Tuple2<Long, Long>();
-
-    @Override
-    public void flatMap(Tuple2<Long, Long> edge, Collector<Tuple2<Long, Long>> out) {
-        invertedEdge.f0 = edge.f1;
-        invertedEdge.f1 = edge.f0;
-        out.collect(edge);
-        out.collect(invertedEdge);
-    }
-}
-
-public static final class NeighborWithComponentIDJoin
-                implements JoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>> {
-
-    @Override
-    public Tuple2<Long, Long> join(Tuple2<Long, Long> vertexWithComponent, Tuple2<Long, Long> edge) {
-        return new Tuple2<Long, Long>(edge.f1, vertexWithComponent.f1);
-    }
-}
-
-public static final class ComponentIdFilter
-                    implements FlatMapFunction<Tuple2<Tuple2<Long, Long>, Tuple2<Long, Long>>,
-                                            Tuple2<Long, Long>> {
-
-    @Override
-    public void flatMap(Tuple2<Tuple2<Long, Long>, Tuple2<Long, Long>> value,
-                        Collector<Tuple2<Long, Long>> out) {
-        if (value.f0.f1 < value.f1.f1) {
-            out.collect(value.f0);
-        }
-    }
-}
-~~~
-
-The {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/graph/ConnectedComponents.java "ConnectedComponents program" %} implements the above example. It requires the following parameters to run: `--vertices <path> --edges <path> --output <path> --iterations <n>`.
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-// set up execution environment
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// read vertex and edge data
-// assign the initial components (equal to the vertex id)
-val vertices = getVerticesDataSet(env).map { id => (id, id) }
-
-// make edges undirected by emitting, for each input edge, the edge itself and an
-// inverted version
-val edges = getEdgesDataSet(env).flatMap { edge => Seq(edge, (edge._2, edge._1)) }
-
-// open a delta iteration
-val verticesWithComponents = vertices.iterateDelta(vertices, maxIterations, Array(0)) {
-  (s, ws) =>
-
-    // apply the step logic: join with the edges
-    val allNeighbors = ws.join(edges).where(0).equalTo(0) { (vertex, edge) =>
-      (edge._2, vertex._2)
-    }
-
-    // select the minimum neighbor
-    val minNeighbors = allNeighbors.groupBy(0).min(1)
-
-    // update if the component of the candidate is smaller
-    val updatedComponents = minNeighbors.join(s).where(0).equalTo(0) {
-      (newVertex, oldVertex, out: Collector[(Long, Long)]) =>
-        if (newVertex._2 < oldVertex._2) out.collect(newVertex)
-    }
-
-    // delta and new workset are identical
-    (updatedComponents, updatedComponents)
-}
-
-verticesWithComponents.writeAsCsv(outputPath, "\n", " ")
-
-~~~
-
-The {% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/graph/ConnectedComponents.scala "ConnectedComponents program" %} implements the above example. It requires the following parameters to run: `--vertices <path> --edges <path> --output <path> --iterations <n>`.
-</div>
-</div>
-
-Input files are plain text files and must be formatted as follows:
-- Vertices are represented by IDs, separated by new-line characters.
-    * For example `"1\n2\n12\n42\n63\n"` gives five vertices (1), (2), (12), (42), and (63).
-- Edges are represented as pairs of vertex IDs, separated by space characters. Edges are separated by new-line characters:
-    * For example `"1 2\n2 12\n1 12\n42 63\n"` gives four (undirected) links (1)-(2), (2)-(12), (1)-(12), and (42)-(63).
-
-## Relational Query
-
-The Relational Query example assumes two tables, one with `orders` and the other with `lineitems`, as specified by the [TPC-H decision support benchmark](http://www.tpc.org/tpch/). TPC-H is a standard benchmark in the database industry. See below for instructions on how to generate the input data.
-
-The example implements the following SQL query.
-
-~~~sql
-SELECT l_orderkey, o_shippriority, sum(l_extendedprice) as revenue
-    FROM orders, lineitem
-WHERE l_orderkey = o_orderkey
-    AND o_orderstatus = 'F'
-    AND YEAR(o_orderdate) > 1993
-    AND o_orderpriority LIKE '5%'
-GROUP BY l_orderkey, o_shippriority;
-~~~
-
-The Flink program that implements the above query looks as follows.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// get orders data set: (orderkey, orderstatus, orderdate, orderpriority, shippriority)
-DataSet<Tuple5<Integer, String, String, String, Integer>> orders = getOrdersDataSet(env);
-// get lineitem data set: (orderkey, extendedprice)
-DataSet<Tuple2<Integer, Double>> lineitems = getLineitemDataSet(env);
-
-// orders filtered by year: (orderkey, custkey)
-DataSet<Tuple2<Integer, Integer>> ordersFilteredByYear =
-        // filter orders
-        orders.filter(
-            new FilterFunction<Tuple5<Integer, String, String, String, Integer>>() {
-                @Override
-                public boolean filter(Tuple5<Integer, String, String, String, Integer> t) {
-                    // status filter
-                    if(!t.f1.equals(STATUS_FILTER)) {
-                        return false;
-                    // year filter
-                    } else if(Integer.parseInt(t.f2.substring(0, 4)) <= YEAR_FILTER) {
-                        return false;
-                    // order priority filter
-                    } else if(!t.f3.startsWith(OPRIO_FILTER)) {
-                        return false;
-                    }
-                    return true;
-                }
-            })
-        // project fields out that are no longer required
-        .project(0,4).types(Integer.class, Integer.class);
-
-// join orders with lineitems: (orderkey, shippriority, extendedprice)
-DataSet<Tuple3<Integer, Integer, Double>> lineitemsOfOrders =
-        ordersFilteredByYear.joinWithHuge(lineitems)
-                            .where(0).equalTo(0)
-                            .projectFirst(0,1).projectSecond(1)
-                            .types(Integer.class, Integer.class, Double.class);
-
-// extendedprice sums: (orderkey, shippriority, sum(extendedprice))
-DataSet<Tuple3<Integer, Integer, Double>> priceSums =
-        // group by order and sum extendedprice
-        lineitemsOfOrders.groupBy(0,1).aggregate(Aggregations.SUM, 2);
-
-// emit result
-priceSums.writeAsCsv(outputPath);
-~~~
-
-The {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/TPCHQuery10.java "Relational Query program" %} implements the above query. It requires the following parameters to run: `--orders <path> --lineitem <path> --output <path>`.
-
-</div>
-<div data-lang="scala" markdown="1">
-Coming soon...
-
-The {% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/relational/TPCHQuery3.scala "Relational Query program" %} implements the above query. It requires the following parameters to run: `--orders <path> --lineitem <path> --output <path>`.
-
-</div>
-</div>
-
-The orders and lineitem files can be generated using the [TPC-H benchmark](http://www.tpc.org/tpch/) suite's data generator tool (DBGEN).
-Take the following steps to generate arbitrary large input files for the provided Flink programs:
-
-1.  Download and unpack DBGEN
-2.  Make a copy of *makefile.suite* called *Makefile* and perform the following changes:
-
-~~~bash
-DATABASE = DB2
-MACHINE  = LINUX
-WORKLOAD = TPCH
-CC       = gcc
-~~~
-
-3.  Build DBGEN using *make*.
-4.  Generate the lineitem and orders relations using dbgen. A scale factor
-    (-s) of 1 results in a generated data set of about 1 GB in size.
-
-~~~bash
-./dbgen -T o -s 1
-~~~

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/fault_tolerance.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/fault_tolerance.md b/docs/apis/batch/fault_tolerance.md
deleted file mode 100644
index 51a6b41..0000000
--- a/docs/apis/batch/fault_tolerance.md
+++ /dev/null
@@ -1,100 +0,0 @@
----
-title: "Fault Tolerance"
-
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-pos: 2
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Flink's fault tolerance mechanism recovers programs in the presence of failures and
-continues to execute them. Such failures include machine hardware failures, network failures,
-transient program failures, etc.
-
-* This will be replaced by the TOC
-{:toc}
-
-Batch Processing Fault Tolerance (DataSet API)
-----------------------------------------------
-
-Fault tolerance for programs in the *DataSet API* works by retrying failed executions.
-The number of times that Flink retries the execution before the job is declared failed is configurable
-via the *execution retries* parameter. A value of *0* effectively means that fault tolerance is deactivated.
-
-To activate fault tolerance, set the *execution retries* parameter to a value larger than zero. A common choice is a value
-of three.
-
-This example shows how to configure the execution retries for a Flink DataSet program.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-env.setNumberOfExecutionRetries(3);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment()
-env.setNumberOfExecutionRetries(3)
-{% endhighlight %}
-</div>
-</div>
-
-
-You can also define default values for the number of execution retries and the retry delay in the `flink-conf.yaml`:
-
-~~~
-execution-retries.default: 3
-~~~
-
-
-Retry Delays
-------------
-
-Execution retries can be configured to be delayed. Delaying the retry means that after a failed execution, the re-execution does not start
-immediately, but only after a certain delay.
-
-Delaying the retries can be helpful when the program interacts with external systems where, for example, connections or pending transactions should reach a timeout before re-execution is attempted.
-
-You can set the retry delay for each program as follows (the sample shows the DataStream API - the DataSet API works similarly):
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.getConfig().setExecutionRetryDelay(5000); // 5000 milliseconds delay
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment()
-env.getConfig.setExecutionRetryDelay(5000) // 5000 milliseconds delay
-{% endhighlight %}
-</div>
-</div>
-
-You can also define the default value for the retry delay in the `flink-conf.yaml`:
-
-~~~
-execution-retries.delay: 10 s
-~~~
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/fig/LICENSE.txt
----------------------------------------------------------------------
diff --git a/docs/apis/batch/fig/LICENSE.txt b/docs/apis/batch/fig/LICENSE.txt
deleted file mode 100644
index 35b8673..0000000
--- a/docs/apis/batch/fig/LICENSE.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-All image files in the folder and its subfolders are
-licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/fig/iterations_delta_iterate_operator.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/fig/iterations_delta_iterate_operator.png b/docs/apis/batch/fig/iterations_delta_iterate_operator.png
deleted file mode 100644
index 470485a..0000000
Binary files a/docs/apis/batch/fig/iterations_delta_iterate_operator.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/fig/iterations_delta_iterate_operator_example.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/fig/iterations_delta_iterate_operator_example.png b/docs/apis/batch/fig/iterations_delta_iterate_operator_example.png
deleted file mode 100644
index 15f2b54..0000000
Binary files a/docs/apis/batch/fig/iterations_delta_iterate_operator_example.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/fig/iterations_iterate_operator.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/fig/iterations_iterate_operator.png b/docs/apis/batch/fig/iterations_iterate_operator.png
deleted file mode 100644
index aaf4158..0000000
Binary files a/docs/apis/batch/fig/iterations_iterate_operator.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/fig/iterations_iterate_operator_example.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/fig/iterations_iterate_operator_example.png b/docs/apis/batch/fig/iterations_iterate_operator_example.png
deleted file mode 100644
index be4841c..0000000
Binary files a/docs/apis/batch/fig/iterations_iterate_operator_example.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/fig/iterations_supersteps.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/fig/iterations_supersteps.png b/docs/apis/batch/fig/iterations_supersteps.png
deleted file mode 100644
index 331dbc7..0000000
Binary files a/docs/apis/batch/fig/iterations_supersteps.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/hadoop_compatibility.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/hadoop_compatibility.md b/docs/apis/batch/hadoop_compatibility.md
deleted file mode 100644
index 2bea8db..0000000
--- a/docs/apis/batch/hadoop_compatibility.md
+++ /dev/null
@@ -1,249 +0,0 @@
----
-title: "Hadoop Compatibility"
-is_beta: true
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-pos: 7
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Flink is compatible with Apache Hadoop MapReduce interfaces and therefore allows
-reusing code that was implemented for Hadoop MapReduce.
-
-You can:
-
-- use Hadoop's `Writable` [data types](index.html#data-types) in Flink programs.
-- use any Hadoop `InputFormat` as a [DataSource](index.html#data-sources).
-- use any Hadoop `OutputFormat` as a [DataSink](index.html#data-sinks).
-- use a Hadoop `Mapper` as [FlatMapFunction](dataset_transformations.html#flatmap).
-- use a Hadoop `Reducer` as [GroupReduceFunction](dataset_transformations.html#groupreduce-on-grouped-dataset).
-
-This document shows how to use existing Hadoop MapReduce code with Flink. Please refer to the
-[Connecting to other systems]({{ site.baseurl }}/apis/connectors.html) guide for reading from Hadoop supported file systems.
-
-* This will be replaced by the TOC
-{:toc}
-
-### Project Configuration
-
-Support for Hadoop input/output formats is part of the `flink-java` and
-`flink-scala` Maven modules that are always required when writing Flink jobs.
-The code is located in `org.apache.flink.api.java.hadoop` and
-`org.apache.flink.api.scala.hadoop` in additional sub-packages for the
-`mapred` and `mapreduce` APIs.
-
-Support for Hadoop Mappers and Reducers is contained in the `flink-hadoop-compatibility`
-Maven module.
-This code resides in the `org.apache.flink.hadoopcompatibility`
-package.
-
-Add the following dependency to your `pom.xml` if you want to reuse Mappers
-and Reducers.
-
-~~~xml
-<dependency>
-	<groupId>org.apache.flink</groupId>
-	<artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
-	<version>{{site.version}}</version>
-</dependency>
-~~~
-
-### Using Hadoop Data Types
-
-Flink supports all Hadoop `Writable` and `WritableComparable` data types
-out-of-the-box. You do not need to include the Hadoop Compatibility dependency
-if you only want to use your Hadoop data types. See the
-[Programming Guide](index.html#data-types) for more details.
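-
-As a minimal sketch (the input data here is made up for illustration), a Flink program can group and reduce on `Writable` types directly:
-
-~~~java
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-DataSet<Tuple2<Text, IntWritable>> data = env.fromElements(
-    new Tuple2<Text, IntWritable>(new Text("flink"), new IntWritable(1)),
-    new Tuple2<Text, IntWritable>(new Text("flink"), new IntWritable(2)),
-    new Tuple2<Text, IntWritable>(new Text("hadoop"), new IntWritable(3)));
-
-// group on the WritableComparable key and sum the IntWritable values manually
-data.groupBy(0)
-    .reduce(new ReduceFunction<Tuple2<Text, IntWritable>>() {
-        @Override
-        public Tuple2<Text, IntWritable> reduce(
-                Tuple2<Text, IntWritable> a, Tuple2<Text, IntWritable> b) {
-            return new Tuple2<Text, IntWritable>(a.f0, new IntWritable(a.f1.get() + b.f1.get()));
-        }
-    })
-    .print();
-~~~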
-
-### Using Hadoop InputFormats
-
-Hadoop input formats can be used to create a data source by using
-one of the methods `readHadoopFile` or `createHadoopInput` of the
-`ExecutionEnvironment`. The former is used for input formats derived
-from `FileInputFormat` while the latter has to be used for general purpose
-input formats.
-
-The resulting `DataSet` contains 2-tuples where the first field
-is the key and the second field is the value retrieved from the Hadoop
-InputFormat.
-
-The following example shows how to use Hadoop's `TextInputFormat`.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-DataSet<Tuple2<LongWritable, Text>> input =
-    env.readHadoopFile(new TextInputFormat(), LongWritable.class, Text.class, textPath);
-
-// Do something with the data.
-[...]
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val input: DataSet[(LongWritable, Text)] =
-  env.readHadoopFile(new TextInputFormat, classOf[LongWritable], classOf[Text], textPath)
-
-// Do something with the data.
-[...]
-~~~
-
-</div>
-
-</div>
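-
-For a general purpose input format, `createHadoopInput` is used instead. The following sketch uses Hadoop's `TextInputFormat` again only for brevity; any `org.apache.hadoop.mapred.InputFormat` configured via the `JobConf` should work the same way:
-
-~~~java
-JobConf jobConf = new JobConf();
-// configure the input through the JobConf, not through a path argument
-org.apache.hadoop.mapred.FileInputFormat.addInputPath(jobConf, new Path(textPath));
-
-DataSet<Tuple2<LongWritable, Text>> input =
-    env.createHadoopInput(new TextInputFormat(), LongWritable.class, Text.class, jobConf);
-~~~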
-
-### Using Hadoop OutputFormats
-
-Flink provides a compatibility wrapper for Hadoop `OutputFormats`. Any class
-that implements `org.apache.hadoop.mapred.OutputFormat` or extends
-`org.apache.hadoop.mapreduce.OutputFormat` is supported.
-The OutputFormat wrapper expects its input data to be a DataSet containing
-2-tuples of key and value. These are to be processed by the Hadoop OutputFormat.
-
-The following example shows how to use Hadoop's `TextOutputFormat`.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-~~~java
-// Obtain the result we want to emit
-DataSet<Tuple2<Text, IntWritable>> hadoopResult = [...]
-
-// Set up the Hadoop TextOutputFormat.
-HadoopOutputFormat<Text, IntWritable> hadoopOF =
-  // create the Flink wrapper.
-  new HadoopOutputFormat<Text, IntWritable>(
-    // set the Hadoop OutputFormat and specify the job.
-    new TextOutputFormat<Text, IntWritable>(), job
-  );
-hadoopOF.getConfiguration().set("mapreduce.output.textoutputformat.separator", " ");
-TextOutputFormat.setOutputPath(job, new Path(outputPath));
-
-// Emit data using the Hadoop TextOutputFormat.
-hadoopResult.output(hadoopOF);
-~~~
-
-</div>
-<div data-lang="scala" markdown="1">
-
-~~~scala
-// Obtain your result to emit.
-val hadoopResult: DataSet[(Text, IntWritable)] = [...]
-
-val hadoopOF = new HadoopOutputFormat[Text,IntWritable](
-  new TextOutputFormat[Text, IntWritable],
-  new JobConf)
-
-hadoopOF.getJobConf.set("mapred.textoutputformat.separator", " ")
-FileOutputFormat.setOutputPath(hadoopOF.getJobConf, new Path(resultPath))
-
-hadoopResult.output(hadoopOF)
-
-
-~~~
-
-</div>
-
-</div>
-
-### Using Hadoop Mappers and Reducers
-
-Hadoop Mappers are semantically equivalent to Flink's [FlatMapFunctions](dataset_transformations.html#flatmap) and Hadoop Reducers are equivalent to Flink's [GroupReduceFunctions](dataset_transformations.html#groupreduce-on-grouped-dataset). Flink provides wrappers for implementations of Hadoop MapReduce's `Mapper` and `Reducer` interfaces, i.e., you can reuse your Hadoop Mappers and Reducers in regular Flink programs. At the moment, only the Mapper and Reducer interfaces of Hadoop's mapred API (`org.apache.hadoop.mapred`) are supported.
-
-The wrappers take a `DataSet<Tuple2<KEYIN,VALUEIN>>` as input and produce a `DataSet<Tuple2<KEYOUT,VALUEOUT>>` as output, where `KEYIN` and `KEYOUT` are the keys and `VALUEIN` and `VALUEOUT` are the values of the Hadoop key-value pairs that are processed by the Hadoop functions. For Reducers, Flink offers a GroupReduceFunction wrapper with a Combiner (`HadoopReduceCombineFunction`) and one without a Combiner (`HadoopReduceFunction`). The wrappers accept an optional `JobConf` object to configure the Hadoop Mapper or Reducer.
-
-Flink's function wrappers are
-
-- `org.apache.flink.hadoopcompatibility.mapred.HadoopMapFunction`,
-- `org.apache.flink.hadoopcompatibility.mapred.HadoopReduceFunction`, and
-- `org.apache.flink.hadoopcompatibility.mapred.HadoopReduceCombineFunction`.
-
-These wrappers can be used as regular Flink [FlatMapFunctions](dataset_transformations.html#flatmap) or [GroupReduceFunctions](dataset_transformations.html#groupreduce-on-grouped-dataset).
-
-The following example shows how to use Hadoop `Mapper` and `Reducer` functions.
-
-~~~java
-// Obtain data to process somehow.
-DataSet<Tuple2<Text, LongWritable>> text = [...]
-
-DataSet<Tuple2<Text, LongWritable>> result = text
-  // use Hadoop Mapper (Tokenizer) as MapFunction
-  .flatMap(new HadoopMapFunction<LongWritable, Text, Text, LongWritable>(
-    new Tokenizer()
-  ))
-  .groupBy(0)
-  // use Hadoop Reducer (Counter) as Reduce- and CombineFunction
-  .reduceGroup(new HadoopReduceCombineFunction<Text, LongWritable, Text, LongWritable>(
-    new Counter(), new Counter()
-  ));
-~~~
-
-**Please note:** The Reducer wrapper works on groups as defined by Flink's [groupBy()](dataset_transformations.html#transformations-on-grouped-dataset) operation. It does not consider any custom partitioners, sort or grouping comparators you might have set in the `JobConf`.
-
-### Complete Hadoop WordCount Example
-
-The following example shows a complete WordCount implementation using Hadoop data types, Input- and OutputFormats, and Mapper and Reducer implementations.
-
-~~~java
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// Set up the Hadoop TextInputFormat.
-Job job = Job.getInstance();
-HadoopInputFormat<LongWritable, Text> hadoopIF =
-  new HadoopInputFormat<LongWritable, Text>(
-    new TextInputFormat(), LongWritable.class, Text.class, job
-  );
-TextInputFormat.addInputPath(job, new Path(inputPath));
-
-// Read data using the Hadoop TextInputFormat.
-DataSet<Tuple2<LongWritable, Text>> text = env.createInput(hadoopIF);
-
-DataSet<Tuple2<Text, LongWritable>> result = text
-  // use Hadoop Mapper (Tokenizer) as MapFunction
-  .flatMap(new HadoopMapFunction<LongWritable, Text, Text, LongWritable>(
-    new Tokenizer()
-  ))
-  .groupBy(0)
-  // use Hadoop Reducer (Counter) as Reduce- and CombineFunction
-  .reduceGroup(new HadoopReduceCombineFunction<Text, LongWritable, Text, LongWritable>(
-    new Counter(), new Counter()
-  ));
-
-// Set up the Hadoop TextOutputFormat.
-HadoopOutputFormat<Text, LongWritable> hadoopOF =
-  new HadoopOutputFormat<Text, LongWritable>(
-    new TextOutputFormat<Text, LongWritable>(), job
-  );
-hadoopOF.getConfiguration().set("mapreduce.output.textoutputformat.separator", " ");
-TextOutputFormat.setOutputPath(job, new Path(outputPath));
-
-// Emit data using the Hadoop TextOutputFormat.
-result.output(hadoopOF);
-
-// Execute Program
-env.execute("Hadoop WordCount");
-~~~


[64/89] [abbrv] flink git commit: [hotfix] [tests] Fix mini cluster usage and logging/printing in CustomDistributionITCase

Posted by se...@apache.org.
[hotfix] [tests] Fix mini cluster usage and logging/printing in CustomDistributionITCase


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/addad1af
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/addad1af
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/addad1af

Branch: refs/heads/flip-6
Commit: addad1af453a088c559db234370db565a35fbc11
Parents: 635c869
Author: Stephan Ewen <se...@apache.org>
Authored: Wed Aug 24 21:02:09 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Wed Aug 24 21:19:04 2016 +0200

----------------------------------------------------------------------
 .../CustomDistributionITCase.java               | 110 +++++++++++--------
 1 file changed, 64 insertions(+), 46 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/addad1af/flink-tests/src/test/java/org/apache/flink/test/javaApiOperators/CustomDistributionITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/javaApiOperators/CustomDistributionITCase.java b/flink-tests/src/test/java/org/apache/flink/test/javaApiOperators/CustomDistributionITCase.java
index c6bc08e..ca2c156 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/javaApiOperators/CustomDistributionITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/javaApiOperators/CustomDistributionITCase.java
@@ -30,30 +30,60 @@ import org.apache.flink.api.java.utils.DataSetUtils;
 import org.apache.flink.core.memory.DataInputView;
 import org.apache.flink.core.memory.DataOutputView;
 import org.apache.flink.test.javaApiOperators.util.CollectionDataSets;
+import org.apache.flink.test.util.ForkableFlinkMiniCluster;
+import org.apache.flink.test.util.TestBaseUtils;
+import org.apache.flink.test.util.TestEnvironment;
 import org.apache.flink.util.Collector;
-import org.junit.Test;
+import org.apache.flink.util.TestLogger;
 
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
 
 import java.io.IOException;
 
 import static org.junit.Assert.fail;
 
+@SuppressWarnings("serial")
+public class CustomDistributionITCase extends TestLogger {
 
-public class CustomDistributionITCase {
+	// ------------------------------------------------------------------------
+	//  The mini cluster that is shared across tests
+	// ------------------------------------------------------------------------
 
-	@Test
-	public void testPartitionWithDistribution1() throws Exception{
-		/*
-		 * Test the record partitioned rightly with one field according to the customized data distribution
-		 */
+	private static ForkableFlinkMiniCluster cluster;
 
-		ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
+	@BeforeClass
+	public static void setup() throws Exception {
+		cluster = TestBaseUtils.startCluster(1, 8, false, false, true);
+	}
 
-		DataSet<Tuple3<Integer, Long, String>> input = CollectionDataSets.get3TupleDataSet(env);
+	@AfterClass
+	public static void teardown() throws Exception {
+		TestBaseUtils.stopCluster(cluster, TestBaseUtils.DEFAULT_TIMEOUT);
+	}
+
+	@Before
+	public void prepare() {
+		TestEnvironment clusterEnv = new TestEnvironment(cluster, 1);
+		clusterEnv.setAsContext();
+	}
+
+	// ------------------------------------------------------------------------
+
+	/**
+	 * Tests that records are partitioned correctly on one field according to the custom data distribution.
+	 */
+	@Test
+	public void testPartitionWithDistribution1() throws Exception {
 		final TestDataDist1 dist = new TestDataDist1();
 
+		final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
 		env.setParallelism(dist.getParallelism());
 
+		DataSet<Tuple3<Integer, Long, String>> input = CollectionDataSets.get3TupleDataSet(env);
+
 		DataSet<Boolean> result = DataSetUtils
 			.partitionByRange(input, dist, 0)
 			.mapPartition(new RichMapPartitionFunction<Tuple3<Integer, Long, String>, Boolean>() {
@@ -96,13 +126,15 @@ public class CustomDistributionITCase {
 		env.execute();
 	}
 
+	/**
+	 * Tests that records are partitioned correctly on two fields according to the custom data distribution.
+	 */
 	@Test
-	public void testRangeWithDistribution2() throws Exception{
-		/*
-		 * Test the record partitioned rightly with two fields according to the customized data distribution
-		 */
+	public void testRangeWithDistribution2() throws Exception {
+		final TestDataDist2 dist = new TestDataDist2();
 
-		ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
+		final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+		env.setParallelism(dist.getParallelism());
 
 		DataSet<Tuple3<Integer, Integer, String>> input = env.fromElements(
 						new Tuple3<>(1, 5, "Hi"),
@@ -122,10 +154,6 @@ public class CustomDistributionITCase {
 						new Tuple3<>(5, 3, "Hi Java again")
 			);
 
-		final TestDataDist2 dist = new TestDataDist2();
-
-		env.setParallelism(dist.getParallelism());
-
 		DataSet<Boolean> result = DataSetUtils
 			.partitionByRange(input, dist, 0, 1)
 			.mapPartition(new RichMapPartitionFunction<Tuple3<Integer, Integer, String>, Boolean>() {
@@ -175,18 +203,18 @@ public class CustomDistributionITCase {
 		env.execute();
 	}
 
+	/*
+	 * Tests the case where the number of partition keys is less than the number of distribution fields.
+	 */
 	@Test
-	public void testPartitionKeyLessDistribution() throws Exception{
-		/*
-		 * Test the number of partition keys less than the number of distribution fields
-		 */
-		ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
-
-		DataSet<Tuple3<Integer, Long, String>> input = CollectionDataSets.get3TupleDataSet(env);
+	public void testPartitionKeyLessDistribution() throws Exception {
 		final TestDataDist2 dist = new TestDataDist2();
 
+		final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
 		env.setParallelism(dist.getParallelism());
 
+		DataSet<Tuple3<Integer, Long, String>> input = CollectionDataSets.get3TupleDataSet(env);
+
 		DataSet<Boolean> result = DataSetUtils
 			.partitionByRange(input, dist, 0)
 			.mapPartition(new RichMapPartitionFunction<Tuple3<Integer, Long, String>, Boolean>() {
@@ -229,19 +257,17 @@ public class CustomDistributionITCase {
 		env.execute();
 	}
 
+	/*
+	 * Tests the case where the number of partition keys is larger than the number of distribution fields.
+	 */
 	@Test(expected = IllegalArgumentException.class)
-	public void testPartitionMoreThanDistribution() throws Exception{
-		/*
-		 * Test the number of partition keys larger than the number of distribution fields
-		 */
+	public void testPartitionMoreThanDistribution() throws Exception {
+		final TestDataDist2 dist = new TestDataDist2();
 
-		ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
+		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
 
 		DataSet<Tuple3<Integer, Long, String>> input = CollectionDataSets.get3TupleDataSet(env);
-		final TestDataDist2 dist = new TestDataDist2();
-
-		DataSet<Tuple3<Integer, Long, String>> result = DataSetUtils
-				.partitionByRange(input, dist, 0, 1, 2);
+		DataSetUtils.partitionByRange(input, dist, 0, 1, 2);
 	}
 	
 	/**
@@ -278,14 +304,10 @@ public class CustomDistributionITCase {
 		}
 
 		@Override
-		public void write(DataOutputView out) throws IOException {
-			
-		}
+		public void write(DataOutputView out) throws IOException {}
 
 		@Override
-		public void read(DataInputView in) throws IOException {
-			
-		}
+		public void read(DataInputView in) throws IOException {}
 	}
 
 	/**
@@ -323,13 +345,9 @@ public class CustomDistributionITCase {
 		}
 
 		@Override
-		public void write(DataOutputView out) throws IOException {
-			
-		}
+		public void write(DataOutputView out) throws IOException {}
 
 		@Override
-		public void read(DataInputView in) throws IOException {
-			
-		}
+		public void read(DataInputView in) throws IOException {}
 	}
 }


[47/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/iterations.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/iterations.md b/docs/apis/batch/iterations.md
deleted file mode 100644
index a1bd0e9..0000000
--- a/docs/apis/batch/iterations.md
+++ /dev/null
@@ -1,213 +0,0 @@
----
-title:  "Iterations"
-
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-pos: 3
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Iterative algorithms occur in many domains of data analysis, such as *machine learning* or *graph analysis*. Such algorithms are crucial in order to realize the promise of Big Data to extract meaningful information out of your data. With increasing interest in running these kinds of algorithms on very large data sets, there is a need to execute iterations in a massively parallel fashion.
-
-Flink programs implement iterative algorithms by defining a **step function** and embedding it into a special iteration operator. There are two  variants of this operator: **Iterate** and **Delta Iterate**. Both operators repeatedly invoke the step function on the current iteration state until a certain termination condition is reached.
-
-Here, we provide background on both operator variants and outline their usage. The [programming guide](index.html) explains how to implement the operators in both Scala and Java. We also support **vertex-centric, scatter-gather, and gather-sum-apply iterations** through Flink's graph processing API, [Gelly]({{site.baseurl}}/libs/gelly_guide.html).
-
-The following table provides an overview of both operators:
-
-
-<table class="table table-striped table-hover table-bordered">
-	<thead>
-		<th></th>
-		<th class="text-center">Iterate</th>
-		<th class="text-center">Delta Iterate</th>
-	</thead>
-	<tr>
-		<td class="text-center" width="20%"><strong>Iteration Input</strong></td>
-		<td class="text-center" width="40%"><strong>Partial Solution</strong></td>
-		<td class="text-center" width="40%"><strong>Workset</strong> and <strong>Solution Set</strong></td>
-	</tr>
-	<tr>
-		<td class="text-center"><strong>Step Function</strong></td>
-		<td colspan="2" class="text-center">Arbitrary Data Flows</td>
-	</tr>
-	<tr>
-		<td class="text-center"><strong>State Update</strong></td>
-		<td class="text-center">Next <strong>partial solution</strong></td>
-		<td>
-			<ul>
-				<li>Next workset</li>
-				<li><strong>Changes to solution set</strong></li>
-			</ul>
-		</td>
-	</tr>
-	<tr>
-		<td class="text-center"><strong>Iteration Result</strong></td>
-		<td class="text-center">Last partial solution</td>
-		<td class="text-center">Solution set state after last iteration</td>
-	</tr>
-	<tr>
-		<td class="text-center"><strong>Termination</strong></td>
-		<td>
-			<ul>
-				<li><strong>Maximum number of iterations</strong> (default)</li>
-				<li>Custom aggregator convergence</li>
-			</ul>
-		</td>
-		<td>
-			<ul>
-				<li><strong>Maximum number of iterations or empty workset</strong> (default)</li>
-				<li>Custom aggregator convergence</li>
-			</ul>
-		</td>
-	</tr>
-</table>
-
-
-* This will be replaced by the TOC
-{:toc}
-
-Iterate Operator
-----------------
-
-The **iterate operator** covers the *simple form of iterations*: in each iteration, the **step function** consumes the **entire input** (the *result of the previous iteration*, or the *initial data set*), and computes the **next version of the partial solution** (e.g. `map`, `reduce`, `join`, etc.).
-
-<p class="text-center">
-    <img alt="Iterate Operator" width="60%" src="fig/iterations_iterate_operator.png" />
-</p>
-
-  1. **Iteration Input**: Initial input for the *first iteration* from a *data source* or *previous operators*.
-  2. **Step Function**: The step function will be executed in each iteration. It is an arbitrary data flow consisting of operators like `map`, `reduce`, `join`, etc. and depends on your specific task at hand.
-  3. **Next Partial Solution**: In each iteration, the output of the step function will be fed back into the *next iteration*.
-  4. **Iteration Result**: Output of the *last iteration* is written to a *data sink* or used as input to the *following operators*.
-
-There are multiple options to specify **termination conditions** for an iteration:
-
-  - **Maximum number of iterations**: Without any further conditions, the iteration will be executed this many times.
-  - **Custom aggregator convergence**: Iterations allow you to specify *custom aggregators* and *convergence criteria*, e.g., sum-aggregate the number of emitted records (aggregator) and terminate if this number is zero (convergence criterion); see the sketch after the pseudo-code below.
-
-You can also think about the iterate operator in pseudo-code:
-
-~~~java
-IterationState state = getInitialState();
-
-while (!terminationCriterion()) {
-	state = step(state);
-}
-
-setFinalState(state);
-~~~
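-
-As a rough sketch (not the complete API; the aggregator classes live in `org.apache.flink.api.common.aggregators`), a custom aggregator convergence criterion could count the elements that still change per superstep and terminate once none did:
-
-~~~java
-IterativeDataSet<Long> iteration = env.generateSequence(1, 20).iterate(1000);
-
-// converge as soon as the "updates" aggregator sums to zero in a superstep
-iteration.registerAggregationConvergenceCriterion(
-    "updates", new LongSumAggregator(), new ConvergenceCriterion<LongValue>() {
-        @Override
-        public boolean isConverged(int iterationNum, LongValue updates) {
-            return updates.getValue() == 0;
-        }
-    });
-
-DataSet<Long> step = iteration.map(new RichMapFunction<Long, Long>() {
-    private LongSumAggregator updates;
-
-    @Override
-    public void open(Configuration parameters) {
-        updates = getIterationRuntimeContext().getIterationAggregator("updates");
-    }
-
-    @Override
-    public Long map(Long value) {
-        if (value < 10) {
-            updates.aggregate(1);  // this element was modified in this superstep
-            return value + 1;
-        }
-        return value;
-    }
-});
-
-DataSet<Long> result = iteration.closeWith(step);
-~~~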
-
-<div class="panel panel-default">
-	<div class="panel-body">
-	See the <strong><a href="index.html">Programming Guide</a> </strong> for details and code examples.</div>
-</div>
-
-### Example: Incrementing Numbers
-
-In the following example, we **iteratively increment a set of numbers**:
-
-<p class="text-center">
-    <img alt="Iterate Operator Example" width="60%" src="fig/iterations_iterate_operator_example.png" />
-</p>
-
-  1. **Iteration Input**: The initial input is read from a data source and consists of five single-field records (integers `1` to `5`).
-  2. **Step function**: The step function is a single `map` operator, which increments the integer field from `i` to `i+1`. It will be applied to every record of the input.
-  3. **Next Partial Solution**: The output of the step function will be the output of the map operator, i.e. records with incremented integers.
-  4. **Iteration Result**: After ten iterations, the initial numbers will have been incremented ten times, resulting in integers `11` to `15`.
-
-~~~
-// 1st           2nd                       10th
-map(1) -> 2      map(2) -> 3      ...      map(10) -> 11
-map(2) -> 3      map(3) -> 4      ...      map(11) -> 12
-map(3) -> 4      map(4) -> 5      ...      map(12) -> 13
-map(4) -> 5      map(5) -> 6      ...      map(13) -> 14
-map(5) -> 6      map(6) -> 7      ...      map(14) -> 15
-~~~
-
-Note that **1**, **2**, and **4** can be arbitrary data flows.
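-
-A minimal sketch of this example with the DataSet API's `iterate()` could look as follows:
-
-~~~java
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// integers 1 to 5 as the initial partial solution
-IterativeDataSet<Long> iteration = env.generateSequence(1, 5).iterate(10);
-
-// step function: increment every element by one
-DataSet<Long> incremented = iteration.map(new MapFunction<Long, Long>() {
-    @Override
-    public Long map(Long value) {
-        return value + 1;
-    }
-});
-
-// close the iteration; after ten supersteps this prints 11 to 15
-iteration.closeWith(incremented).print();
-~~~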
-
-
-Delta Iterate Operator
-----------------------
-
-The **delta iterate operator** covers the case of **incremental iterations**. Incremental iterations **selectively modify elements** of their **solution** and evolve the solution rather than fully recompute it.
-
-Where applicable, this leads to **more efficient algorithms**, because not every element in the solution set changes in each iteration. This allows the iteration to **focus on the hot parts** of the solution and leave the **cold parts untouched**. Frequently, the majority of the solution cools down comparatively fast and the later iterations operate only on a small subset of the data.
-
-<p class="text-center">
-    <img alt="Delta Iterate Operator" width="60%" src="fig/iterations_delta_iterate_operator.png" />
-</p>
-
-  1. **Iteration Input**: The initial workset and solution set are read from *data sources* or *previous operators* as input to the first iteration.
-  2. **Step Function**: The step function will be executed in each iteration. It is an arbitrary data flow consisting of operators like `map`, `reduce`, `join`, etc. and depends on your specific task at hand.
-  3. **Next Workset/Update Solution Set**: The *next workset* drives the iterative computation and will be fed back into the *next iteration*. Furthermore, the solution set will be updated and implicitly forwarded (it does not need to be rebuilt). Both data sets can be updated by different operators of the step function.
-  4. **Iteration Result**: After the *last iteration*, the *solution set* is written to a *data sink* or used as input to the *following operators*.
-
-The default **termination condition** for delta iterations is specified by the **empty workset convergence criterion** and a **maximum number of iterations**. The iteration will terminate when a produced *next workset* is empty or when the maximum number of iterations is reached. It is also possible to specify a **custom aggregator** and **convergence criterion**.
-
-You can also think about the delta iterate operator in pseudo-code:
-
-~~~java
-IterationState workset = getInitialState();
-IterationState solution = getInitialSolution();
-
-while (!terminationCriterion()) {
-	(delta, workset) = step(workset, solution);
-
-	solution.update(delta)
-}
-
-setFinalState(solution);
-~~~
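-
-A rough DataSet API sketch of this pattern (the decrement logic is made up for illustration; note how the iteration ends once the workset becomes empty):
-
-~~~java
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// (id, value) pairs; every value is decreased until it reaches zero
-DataSet<Tuple2<Long, Long>> initial = env.fromElements(
-    new Tuple2<Long, Long>(1L, 3L), new Tuple2<Long, Long>(2L, 1L));
-
-// solution set and initial workset are both the input; the key is field 0
-DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
-    initial.iterateDelta(initial, 10, 0);
-
-// step function: decrement each workset element
-DataSet<Tuple2<Long, Long>> delta = iteration.getWorkset()
-    .map(new MapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>>() {
-        @Override
-        public Tuple2<Long, Long> map(Tuple2<Long, Long> v) {
-            return new Tuple2<Long, Long>(v.f0, v.f1 - 1);
-        }
-    });
-
-// only elements that are still positive stay in the next workset
-DataSet<Tuple2<Long, Long>> nextWorkset = delta
-    .filter(new FilterFunction<Tuple2<Long, Long>>() {
-        @Override
-        public boolean filter(Tuple2<Long, Long> v) {
-            return v.f1 > 0;
-        }
-    });
-
-// the delta updates the solution set; terminates on an empty workset
-iteration.closeWith(delta, nextWorkset).print();
-~~~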
-
-<div class="panel panel-default">
-	<div class="panel-body">
-	See the <strong><a href="index.html">programming guide</a></strong> for details and code examples.</div>
-</div>
-
-### Example: Propagate Minimum in Graph
-
-In the following example, every vertex has an **ID** and a **coloring**. Each vertex will propagate its vertex ID to neighboring vertices. The **goal** is to *assign the minimum ID to every vertex in a subgraph*. If a received ID is smaller than the current one, it changes to the color of the vertex with the received ID. One application of this can be found in *community analysis* or *connected components* computation.
-
-<p class="text-center">
-    <img alt="Delta Iterate Operator Example" width="100%" src="fig/iterations_delta_iterate_operator_example.png" />
-</p>
-
-The **initial input** is set as **both workset and solution set.** In the above figure, the colors visualize the **evolution of the solution set**. With each iteration, the color of the minimum ID is spreading in the respective subgraph. At the same time, the amount of work (exchanged and compared vertex IDs) decreases with each iteration. This corresponds to the **decreasing size of the workset**, which goes from all seven vertices to zero after three iterations, at which time the iteration terminates. The **important observation** is that *the lower subgraph converges before the upper half* does and the delta iteration is able to capture this with the workset abstraction.
-
-In the upper subgraph **ID 1** (*orange*) is the **minimum ID**. In the **first iteration**, it will get propagated to vertex 2, which will subsequently change its color to orange. Vertices 3 and 4 will receive **ID 2** (in *yellow*) as their current minimum ID and change to yellow. Because the color of *vertex 1* didn't change in the first iteration, it can be skipped in the next workset.
-
-In the lower subgraph **ID 5** (*cyan*) is the **minimum ID**. All vertices of the lower subgraph will receive it in the first iteration. Again, we can skip the unchanged vertices (*vertex 5*) for the next workset.
-
-In the **2nd iteration**, the workset size has already decreased from seven to five elements (vertices 2, 3, 4, 6, and 7). These are part of the iteration and further propagate their current minimum IDs. After this iteration, the lower subgraph has already converged (**cold part** of the graph), as it has no elements in the workset, whereas the upper half needs a further iteration (**hot part** of the graph) for the two remaining workset elements (vertices 3 and 4).
-
-The iteration **terminates** when the workset is empty after the **3rd iteration**.
-
-<a href="#supersteps"></a>
-
-Superstep Synchronization
--------------------------
-
-We referred to each execution of the step function of an iteration operator as *a single iteration*. In parallel setups, **multiple instances of the step function are evaluated in parallel** on different partitions of the iteration state. In many settings, one evaluation of the step function on all parallel instances forms a so-called **superstep**, which is also the granularity of synchronization. Therefore, *all* parallel tasks of an iteration need to complete the superstep before the next superstep is initialized. **Termination criteria** are also evaluated at superstep barriers.
-
-<p class="text-center">
-    <img alt="Supersteps" width="50%" src="fig/iterations_supersteps.png" />
-</p>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/LICENSE.txt
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/LICENSE.txt b/docs/apis/batch/libs/fig/LICENSE.txt
deleted file mode 100644
index 5d0d22b..0000000
--- a/docs/apis/batch/libs/fig/LICENSE.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-All image files in the folder and its subfolders are
-licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/gelly-example-graph.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/gelly-example-graph.png b/docs/apis/batch/libs/fig/gelly-example-graph.png
deleted file mode 100644
index abef960..0000000
Binary files a/docs/apis/batch/libs/fig/gelly-example-graph.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/gelly-filter.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/gelly-filter.png b/docs/apis/batch/libs/fig/gelly-filter.png
deleted file mode 100644
index cb09744..0000000
Binary files a/docs/apis/batch/libs/fig/gelly-filter.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/gelly-gsa-sssp1.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/gelly-gsa-sssp1.png b/docs/apis/batch/libs/fig/gelly-gsa-sssp1.png
deleted file mode 100644
index 1eeb1e6..0000000
Binary files a/docs/apis/batch/libs/fig/gelly-gsa-sssp1.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/gelly-reduceOnEdges.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/gelly-reduceOnEdges.png b/docs/apis/batch/libs/fig/gelly-reduceOnEdges.png
deleted file mode 100644
index ffb674d..0000000
Binary files a/docs/apis/batch/libs/fig/gelly-reduceOnEdges.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/gelly-reduceOnNeighbors.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/gelly-reduceOnNeighbors.png b/docs/apis/batch/libs/fig/gelly-reduceOnNeighbors.png
deleted file mode 100644
index 63137b8..0000000
Binary files a/docs/apis/batch/libs/fig/gelly-reduceOnNeighbors.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/gelly-union.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/gelly-union.png b/docs/apis/batch/libs/fig/gelly-union.png
deleted file mode 100644
index b00f831..0000000
Binary files a/docs/apis/batch/libs/fig/gelly-union.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/gelly-vc-sssp1.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/gelly-vc-sssp1.png b/docs/apis/batch/libs/fig/gelly-vc-sssp1.png
deleted file mode 100644
index 9497d98..0000000
Binary files a/docs/apis/batch/libs/fig/gelly-vc-sssp1.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/fig/vertex-centric supersteps.png
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/fig/vertex-centric supersteps.png b/docs/apis/batch/libs/fig/vertex-centric supersteps.png
deleted file mode 100644
index 6498a25..0000000
Binary files a/docs/apis/batch/libs/fig/vertex-centric supersteps.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/gelly.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/gelly.md b/docs/apis/batch/libs/gelly.md
deleted file mode 100644
index 773642d..0000000
--- a/docs/apis/batch/libs/gelly.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: Gelly
----
-
-<meta http-equiv="refresh" content="1; url={{ site.baseurl }}/apis/batch/libs/gelly/index.html" />
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The *Gelly guide* has been moved. Redirecting to [{{ site.baseurl }}/apis/batch/libs/gelly/index.html]({{ site.baseurl }}/apis/batch/libs/gelly/index.html) in 1 second.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/gelly/graph_algorithms.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/gelly/graph_algorithms.md b/docs/apis/batch/libs/gelly/graph_algorithms.md
deleted file mode 100644
index b443a29..0000000
--- a/docs/apis/batch/libs/gelly/graph_algorithms.md
+++ /dev/null
@@ -1,311 +0,0 @@
----
-title: Graph Algorithms
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: gelly
-sub-nav-title: Graph Algorithms
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The logic blocks with which the `Graph` API and top-level algorithms are assembled are accessible in Gelly as graph
-algorithms in the `org.apache.flink.graph.asm` package. These algorithms provide optimization and tuning through
-configuration parameters and may provide implicit runtime reuse when processing the same input with a similar
-configuration.
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Algorithm</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td>degree.annotate.directed.<br/><strong>VertexInDegree</strong></td>
-      <td>
-        <p>Annotate vertices of a <a href="#graph-representation">directed graph</a> with the in-degree.</p>
-{% highlight java %}
-DataSet<Vertex<K, LongValue>> inDegree = graph
-  .run(new VertexInDegree()
-    .setIncludeZeroDegreeVertices(true));
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setIncludeZeroDegreeVertices</strong>: by default only the edge set is processed for the computation of degree; when this flag is set an additional join is performed against the vertex set in order to output vertices with an in-degree of zero</p></li>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.directed.<br/><strong>VertexOutDegree</strong></td>
-      <td>
-        <p>Annotate vertices of a <a href="#graph-representation">directed graph</a> with the out-degree.</p>
-{% highlight java %}
-DataSet<Vertex<K, LongValue>> outDegree = graph
-  .run(new VertexOutDegree()
-    .setIncludeZeroDegreeVertices(true));
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setIncludeZeroDegreeVertices</strong>: by default only the edge set is processed for the computation of degree; when this flag is set an additional join is performed against the vertex set in order to output vertices with an out-degree of zero</p></li>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.directed.<br/><strong>VertexDegrees</strong></td>
-      <td>
-        <p>Annotate vertices of a <a href="#graph-representation">directed graph</a> with the degree, out-degree, and in-degree.</p>
-{% highlight java %}
-DataSet<Vertex<K, Tuple2<LongValue, LongValue>>> degrees = graph
-  .run(new VertexDegrees()
-    .setIncludeZeroDegreeVertices(true));
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setIncludeZeroDegreeVertices</strong>: by default only the edge set is processed for the computation of degree; when this flag is set an additional join is performed against the vertex set in order to output vertices with out- and in-degree of zero</p></li>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.directed.<br/><strong>EdgeSourceDegrees</strong></td>
-      <td>
-        <p>Annotate edges of a <a href="#graph-representation">directed graph</a> with the degree, out-degree, and in-degree of the source ID.</p>
-{% highlight java %}
-DataSet<Edge<K, Tuple2<EV, Degrees>>> sourceDegrees = graph
-  .run(new EdgeSourceDegrees());
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.directed.<br/><strong>EdgeTargetDegrees</strong></td>
-      <td>
-        <p>Annotate edges of a <a href="#graph-representation">directed graph</a> with the degree, out-degree, and in-degree of the target ID.</p>
-{% highlight java %}
-DataSet<Edge<K, Tuple2<EV, Degrees>>> targetDegrees = graph
-  .run(new EdgeTargetDegrees());
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.directed.<br/><strong>EdgeDegreesPair</strong></td>
-      <td>
-        <p>Annotate edges of a <a href="#graph-representation">directed graph</a> with the degree, out-degree, and in-degree of both the source and target vertices.</p>
-{% highlight java %}
-DataSet<Edge<K, Tuple2<EV, Degrees>>> degrees = graph
-  .run(new EdgeDegreesPair());
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.undirected.<br/><strong>VertexDegree</strong></td>
-      <td>
-        <p>Annotate vertices of an <a href="#graph-representation">undirected graph</a> with the degree.</p>
-{% highlight java %}
-DataSet<Vertex<K, LongValue>> degree = graph
-  .run(new VertexDegree()
-    .setIncludeZeroDegreeVertices(true)
-    .setReduceOnTargetId(true));
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setIncludeZeroDegreeVertices</strong>: by default only the edge set is processed for the computation of degree; when this flag is set an additional join is performed against the vertex set in order to output vertices with a degree of zero</p></li>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-          <li><p><strong>setReduceOnTargetId</strong>: the degree can be counted from either the edge source or target IDs. By default the source IDs are counted. Reducing on target IDs may optimize the algorithm if the input edge list is sorted by target ID.</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.undirected.<br/><strong>EdgeSourceDegree</strong></td>
-      <td>
-        <p>Annotate edges of an <a href="#graph-representation">undirected graph</a> with degree of the source ID.</p>
-{% highlight java %}
-DataSet<Edge<K, Tuple2<EV, LongValue>>> sourceDegree = graph
-  .run(new EdgeSourceDegree()
-    .setReduceOnTargetId(true));
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-          <li><p><strong>setReduceOnTargetId</strong>: the degree can be counted from either the edge source or target IDs. By default the source IDs are counted. Reducing on target IDs may optimize the algorithm if the input edge list is sorted by target ID.</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.undirected.<br/><strong>EdgeTargetDegree</strong></td>
-      <td>
-        <p>Annotate edges of an <a href="#graph-representation">undirected graph</a> with degree of the target ID.</p>
-{% highlight java %}
-DataSet<Edge<K, Tuple2<EV, LongValue>>> targetDegree = graph
-  .run(new EdgeTargetDegree()
-    .setReduceOnSourceId(true));
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-          <li><p><strong>setReduceOnSourceId</strong>: the degree can be counted from either the edge source or target IDs. By default the target IDs are counted. Reducing on source IDs may optimize the algorithm if the input edge list is sorted by source ID.</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.annotate.undirected.<br/><strong>EdgeDegreePair</strong></td>
-      <td>
-        <p>Annotate edges of an <a href="#graph-representation">undirected graph</a> with the degree of both the source and target vertices.</p>
-{% highlight java %}
-DataSet<Edge<K, Tuple3<EV, LongValue, LongValue>>> pairDegree = graph
-  .run(new EdgeDegreePair()
-    .setReduceOnTargetId(true));
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-          <li><p><strong>setReduceOnTargetId</strong>: the degree can be counted from either the edge source or target IDs. By default the source IDs are counted. Reducing on target IDs may optimize the algorithm if the input edge list is sorted by target ID.</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>degree.filter.undirected.<br/><strong>MaximumDegree</strong></td>
-      <td>
-        <p>Filter an <a href="#graph-representation">undirected graph</a> by maximum degree.</p>
-{% highlight java %}
-Graph<K, VV, EV> filteredGraph = graph
-  .run(new MaximumDegree(5000)
-    .setBroadcastHighDegreeVertices(true)
-    .setReduceOnTargetId(true));
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setBroadcastHighDegreeVertices</strong>: join high-degree vertices using a broadcast-hash to reduce data shuffling when removing a relatively small number of high-degree vertices.</p></li>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-          <li><p><strong>setReduceOnTargetId</strong>: the degree can be counted from either the edge source or target IDs. By default the source IDs are counted. Reducing on target IDs may optimize the algorithm if the input edge list is sorted by target ID.</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>simple.directed.<br/><strong>Simplify</strong></td>
-      <td>
-        <p>Remove self-loops and duplicate edges from a <a href="#graph-representation">directed graph</a>.</p>
-{% highlight java %}
-graph.run(new Simplify());
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>simple.undirected.<br/><strong>Simplify</strong></td>
-      <td>
-        <p>Add symmetric edges and remove self-loops and duplicate edges from an <a href="#graph-representation">undirected graph</a>.</p>
-{% highlight java %}
-graph.run(new Simplify());
-{% endhighlight %}
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>translate.<br/><strong>TranslateGraphIds</strong></td>
-      <td>
-        <p>Translate vertex and edge IDs using the given <code>TranslateFunction</code>.</p>
-{% highlight java %}
-graph.run(new TranslateGraphIds(new LongValueToStringValue()));
-{% endhighlight %}
-        <p>Required configuration:</p>
-        <ul>
-          <li><p><strong>translator</strong>: implements type or value conversion</p></li>
-        </ul>
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>translate.<br/><strong>TranslateVertexValues</strong></td>
-      <td>
-        <p>Translate vertex values using the given <code>TranslateFunction</code>.</p>
-{% highlight java %}
-graph.run(new TranslateVertexValues(new LongValueAddOffset(vertexCount)));
-{% endhighlight %}
-        <p>Required configuration:</p>
-        <ul>
-          <li><p><strong>translator</strong>: implements type or value conversion</p></li>
-        </ul>
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-
-    <tr>
-      <td>translate.<br/><strong>TranslateEdgeValues</strong></td>
-      <td>
-        <p>Translate edge values using the given <code>TranslateFunction</code>.</p>
-{% highlight java %}
-graph.run(new TranslateEdgeValues(new Nullify()));
-{% endhighlight %}
-        <p>Required configuration:</p>
-        <ul>
-          <li><p><strong>translator</strong>: implements type or value conversion</p></li>
-        </ul>
-        <p>Optional configuration:</p>
-        <ul>
-          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
-        </ul>
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/gelly/graph_api.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/gelly/graph_api.md b/docs/apis/batch/libs/gelly/graph_api.md
deleted file mode 100644
index 6f30911..0000000
--- a/docs/apis/batch/libs/gelly/graph_api.md
+++ /dev/null
@@ -1,836 +0,0 @@
----
-title: Graph API
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: gelly
-sub-nav-title: Graph API
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-Graph Representation
------------
-
-In Gelly, a `Graph` is represented by a `DataSet` of vertices and a `DataSet` of edges.
-
-The `Graph` nodes are represented by the `Vertex` type. A `Vertex` is defined by a unique ID and a value. `Vertex` IDs should implement the `Comparable` interface. Vertices without a value can be represented by setting the value type to `NullValue`.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// create a new vertex with a Long ID and a String value
-Vertex<Long, String> v = new Vertex<Long, String>(1L, "foo");
-
-// create a new vertex with a Long ID and no value
-Vertex<Long, NullValue> v = new Vertex<Long, NullValue>(1L, NullValue.getInstance());
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// create a new vertex with a Long ID and a String value
-val v = new Vertex(1L, "foo")
-
-// create a new vertex with a Long ID and no value
-val v = new Vertex(1L, NullValue.getInstance())
-{% endhighlight %}
-</div>
-</div>
-
-The graph edges are represented by the `Edge` type. An `Edge` is defined by a source ID (the ID of the source `Vertex`), a target ID (the ID of the target `Vertex`) and an optional value. The source and target IDs should be of the same type as the `Vertex` IDs. Edges with no value have a `NullValue` value type.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Edge<Long, Double> e = new Edge<Long, Double>(1L, 2L, 0.5);
-
-// reverse the source and target of this edge
-Edge<Long, Double> reversed = e.reverse();
-
-Double weight = e.getValue(); // weight = 0.5
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val e = new Edge(1L, 2L, 0.5)
-
-// reverse the source and target of this edge
-val reversed = e.reverse
-
-val weight = e.getValue // weight = 0.5
-{% endhighlight %}
-</div>
-</div>
-
-In Gelly an `Edge` is always directed from the source vertex to the target vertex. A `Graph` may be undirected if for
-every `Edge` it contains a matching `Edge` from the target vertex to the source vertex.
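-
-For example, a symmetric (undirected) representation of a directed input graph can be obtained with the `getUndirected()` method, which adds an opposite-direction edge for every existing edge. A minimal Java sketch (the Scala call is identical):
-
-{% highlight java %}
-Graph<Long, String, Double> directed = ...
-
-// add a reverse edge for every edge, yielding an undirected representation
-Graph<Long, String, Double> undirected = directed.getUndirected();
-{% endhighlight %}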
-
-{% top %}
-
-Graph Creation
------------
-
-You can create a `Graph` in the following ways:
-
-* from a `DataSet` of edges and an optional `DataSet` of vertices:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-DataSet<Vertex<String, Long>> vertices = ...
-
-DataSet<Edge<String, Double>> edges = ...
-
-Graph<String, Long, Double> graph = Graph.fromDataSet(vertices, edges, env);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val vertices: DataSet[Vertex[String, Long]] = ...
-
-val edges: DataSet[Edge[String, Double]] = ...
-
-val graph = Graph.fromDataSet(vertices, edges, env)
-{% endhighlight %}
-</div>
-</div>
-
-* from a `DataSet` of `Tuple2` representing the edges. Gelly will convert each `Tuple2` to an `Edge`, where the first field will be the source ID and the second field will be the target ID. Both vertex and edge values will be set to `NullValue`.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-DataSet<Tuple2<String, String>> edges = ...
-
-Graph<String, NullValue, NullValue> graph = Graph.fromTuple2DataSet(edges, env);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val edges: DataSet[(String, String)] = ...
-
-val graph = Graph.fromTuple2DataSet(edges, env)
-{% endhighlight %}
-</div>
-</div>
-
-* from a `DataSet` of `Tuple3` and an optional `DataSet` of `Tuple2`. In this case, Gelly will convert each `Tuple3` to an `Edge`, where the first field will be the source ID, the second field will be the target ID and the third field will be the edge value. Equivalently, each `Tuple2` will be converted to a `Vertex`, where the first field will be the vertex ID and the second field will be the vertex value:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-DataSet<Tuple2<String, Long>> vertexTuples = env.readCsvFile("path/to/vertex/input").types(String.class, Long.class);
-
-DataSet<Tuple3<String, String, Double>> edgeTuples = env.readCsvFile("path/to/edge/input").types(String.class, String.class, Double.class);
-
-Graph<String, Long, Double> graph = Graph.fromTupleDataSet(vertexTuples, edgeTuples, env);
-{% endhighlight %}
-
-* from a CSV file of Edge data and an optional CSV file of Vertex data. In this case, Gelly will convert each row from the Edge CSV file to an `Edge`, where the first field will be the source ID, the second field will be the target ID and the third field (if present) will be the edge value. Equivalently, each row from the optional Vertex CSV file will be converted to a `Vertex`, where the first field will be the vertex ID and the second field (if present) will be the vertex value. In order to get a `Graph` from a `GraphCsvReader` one has to specify the types, using one of the following methods:
-
-- `types(Class<K> vertexKey, Class<VV> vertexValue, Class<EV> edgeValue)`: both vertex and edge values are present.
-- `edgeTypes(Class<K> vertexKey, Class<EV> edgeValue)`: the Graph has edge values, but no vertex values.
-- `vertexTypes(Class<K> vertexKey, Class<VV> vertexValue)`: the Graph has vertex values, but no edge values.
-- `keyType(Class<K> vertexKey)`: the Graph has no vertex values and no edge values.
-
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// create a Graph with String Vertex IDs, Long Vertex values and Double Edge values
-Graph<String, Long, Double> graph = Graph.fromCsvReader("path/to/vertex/input", "path/to/edge/input", env)
-					.types(String.class, Long.class, Double.class);
-
-
-// create a Graph with neither Vertex nor Edge values
-Graph<Long, NullValue, NullValue> simpleGraph = Graph.fromCsvReader("path/to/edge/input", env).keyType(Long.class);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val vertexTuples = env.readCsvFile[String, Long]("path/to/vertex/input")
-
-val edgeTuples = env.readCsvFile[String, String, Double]("path/to/edge/input")
-
-val graph = Graph.fromTupleDataSet(vertexTuples, edgeTuples, env)
-{% endhighlight %}
-
-* from a CSV file of Edge data and an optional CSV file of Vertex data.
-In this case, Gelly will convert each row from the Edge CSV file to an `Edge`.
-The first field of each row will be the source ID, the second field will be the target ID and the third field (if present) will be the edge value.
-If the edges have no associated value, set the edge value type parameter (3rd type argument) to `NullValue`.
-You can also specify that the vertices are initialized with a vertex value.
-If you provide a path to a CSV file via `pathVertices`, each row of this file will be converted to a `Vertex`.
-The first field of each row will be the vertex ID and the second field will be the vertex value.
-If you provide a vertex value initializer `MapFunction` via the `vertexValueInitializer` parameter, then this function is used to generate the vertex values.
-The set of vertices will be created automatically from the edges input.
-If the vertices have no associated value, set the vertex value type parameter (2nd type argument) to `NullValue`.
-The vertices will then be automatically created from the edges input with vertex value of type `NullValue`.
-
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// create a Graph with String Vertex IDs, Long Vertex values and Double Edge values
-val graph = Graph.fromCsvReader[String, Long, Double](
-		pathVertices = "path/to/vertex/input",
-		pathEdges = "path/to/edge/input",
-		env = env)
-
-
-// create a Graph with neither Vertex nor Edge values
-val simpleGraph = Graph.fromCsvReader[Long, NullValue, NullValue](
-		pathEdges = "path/to/edge/input",
-		env = env)
-
-// create a Graph with Double Vertex values generated by a vertex value initializer and no Edge values
-val simpleGraph = Graph.fromCsvReader[Long, Double, NullValue](
-        pathEdges = "path/to/edge/input",
-        vertexValueInitializer = new MapFunction[Long, Double]() {
-            def map(id: Long): Double = {
-                id.toDouble
-            }
-        },
-        env = env)
-{% endhighlight %}
-</div>
-</div>
-
-
-* from a `Collection` of edges and an optional `Collection` of vertices:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-List<Vertex<Long, Long>> vertexList = new ArrayList...
-
-List<Edge<Long, String>> edgeList = new ArrayList...
-
-Graph<Long, Long, String> graph = Graph.fromCollection(vertexList, edgeList, env);
-{% endhighlight %}
-
-If no vertex input is provided during Graph creation, Gelly will automatically produce the `Vertex` `DataSet` from the edge input. In this case, the created vertices will have no values. Alternatively, you can provide a `MapFunction` as an argument to the creation method, in order to initialize the `Vertex` values:
-
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// initialize the vertex value to be equal to the vertex ID
-Graph<Long, Long, String> graph = Graph.fromCollection(edgeList,
-				new MapFunction<Long, Long>() {
-					public Long map(Long value) {
-						return value;
-					}
-				}, env);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val vertexList = List(...)
-
-val edgeList = List(...)
-
-val graph = Graph.fromCollection(vertexList, edgeList, env)
-{% endhighlight %}
-
-If no vertex input is provided during Graph creation, Gelly will automatically produce the `Vertex` `DataSet` from the edge input. In this case, the created vertices will have no values. Alternatively, you can provide a `MapFunction` as an argument to the creation method, in order to initialize the `Vertex` values:
-
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// initialize the vertex value to be equal to the vertex ID
-val graph = Graph.fromCollection(edgeList,
-    new MapFunction[Long, Long] {
-       def map(id: Long): Long = id
-    }, env)
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-Graph Properties
-------------
-
-Gelly includes the following methods for retrieving various Graph properties and metrics:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// get the Vertex DataSet
-DataSet<Vertex<K, VV>> getVertices()
-
-// get the Edge DataSet
-DataSet<Edge<K, EV>> getEdges()
-
-// get the IDs of the vertices as a DataSet
-DataSet<K> getVertexIds()
-
-// get the source-target pairs of the edge IDs as a DataSet
-DataSet<Tuple2<K, K>> getEdgeIds()
-
-// get a DataSet of <vertex ID, in-degree> pairs for all vertices
-DataSet<Tuple2<K, LongValue>> inDegrees()
-
-// get a DataSet of <vertex ID, out-degree> pairs for all vertices
-DataSet<Tuple2<K, LongValue>> outDegrees()
-
-// get a DataSet of <vertex ID, degree> pairs for all vertices, where degree is the sum of in- and out- degrees
-DataSet<Tuple2<K, LongValue>> getDegrees()
-
-// get the number of vertices
-long numberOfVertices()
-
-// get the number of edges
-long numberOfEdges()
-
-// get a DataSet of Triplets<srcVertex, trgVertex, edge>
-DataSet<Triplet<K, VV, EV>> getTriplets()
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// get the Vertex DataSet
-getVertices: DataSet[Vertex[K, VV]]
-
-// get the Edge DataSet
-getEdges: DataSet[Edge[K, EV]]
-
-// get the IDs of the vertices as a DataSet
-getVertexIds: DataSet[K]
-
-// get the source-target pairs of the edge IDs as a DataSet
-getEdgeIds: DataSet[(K, K)]
-
-// get a DataSet of <vertex ID, in-degree> pairs for all vertices
-inDegrees: DataSet[(K, LongValue)]
-
-// get a DataSet of <vertex ID, out-degree> pairs for all vertices
-outDegrees: DataSet[(K, LongValue)]
-
-// get a DataSet of <vertex ID, degree> pairs for all vertices, where degree is the sum of in- and out- degrees
-getDegrees: DataSet[(K, LongValue)]
-
-// get the number of vertices
-numberOfVertices: Long
-
-// get the number of edges
-numberOfEdges: Long
-
-// get a DataSet of Triplets<srcVertex, trgVertex, edge>
-getTriplets: DataSet[Triplet[K, VV, EV]]
-
-{% endhighlight %}
-</div>
-</div>
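-
-As a brief usage sketch (assuming a previously created graph): the degree methods return `DataSet`s that are evaluated lazily like any other, while the count methods trigger execution and return their result directly.
-
-{% highlight java %}
-Graph<Long, String, Double> graph = ...
-
-// lazily defined: a DataSet of <vertex ID, degree> pairs
-DataSet<Tuple2<Long, LongValue>> degrees = graph.getDegrees();
-
-// eagerly evaluated: triggers a job and returns the count
-long vertexCount = graph.numberOfVertices();
-{% endhighlight %}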
-
-{% top %}
-
-Graph Transformations
------------------
-
-* <strong>Map</strong>: Gelly provides specialized methods for applying a map transformation on the vertex values or edge values. `mapVertices` and `mapEdges` return a new `Graph`, where the IDs of the vertices (or edges) remain unchanged, while the values are transformed according to the provided user-defined map function. The map functions also allow changing the type of the vertex or edge values.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-Graph<Long, Long, Long> graph = Graph.fromDataSet(vertices, edges, env);
-
-// increment each vertex value by one
-Graph<Long, Long, Long> updatedGraph = graph.mapVertices(
-				new MapFunction<Vertex<Long, Long>, Long>() {
-					public Long map(Vertex<Long, Long> value) {
-						return value.getValue() + 1;
-					}
-				});
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-val graph = Graph.fromDataSet(vertices, edges, env)
-
-// increment each vertex value by one
-val updatedGraph = graph.mapVertices(v => v.getValue + 1)
-{% endhighlight %}
-</div>
-</div>
-
-* <strong>Translate</strong>: Gelly provides specialized methods for translating the value and/or type of vertex and edge IDs (`translateGraphIDs`), vertex values (`translateVertexValues`), or edge values (`translateEdgeValues`). Translation is performed by a user-defined map function, several of which are provided in the `org.apache.flink.graph.asm.translate` package. The same `MapFunction` can be used for all three translate methods.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-Graph<Long, Long, Long> graph = Graph.fromDataSet(vertices, edges, env);
-
-// translate each vertex and edge ID to a String
-Graph<String, Long, Long> updatedGraph = graph.translateGraphIds(
-				new MapFunction<Long, String>() {
-					public String map(Long id) {
-						return id.toString();
-					}
-				});
-
-// translate vertex IDs, edge IDs, vertex values, and edge values to LongValue
-Graph<LongValue, LongValue, LongValue> updatedGraph = graph
-                .translateGraphIds(new LongToLongValue())
-                .translateVertexValues(new LongToLongValue())
-                .translateEdgeValues(new LongToLongValue());
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-val graph = Graph.fromDataSet(vertices, edges, env)
-
-// translate each vertex and edge ID to a String
-val updatedGraph = graph.translateGraphIds(id => id.toString)
-{% endhighlight %}
-</div>
-</div>
-
-
-* <strong>Filter</strong>: A filter transformation applies a user-defined filter function on the vertices or edges of the `Graph`. `filterOnEdges` will create a sub-graph of the original graph, keeping only the edges that satisfy the provided predicate. Note that the vertex dataset will not be modified. Similarly, `filterOnVertices` applies a filter on the vertices of the graph. Edges whose source and/or target do not satisfy the vertex predicate are removed from the resulting edge dataset. The `subgraph` method can be used to apply a filter function to the vertices and the edges at the same time.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Graph<Long, Long, Long> graph = ...
-
-graph.subgraph(
-		new FilterFunction<Vertex<Long, Long>>() {
-			public boolean filter(Vertex<Long, Long> vertex) {
-				// keep only vertices with positive values
-				return (vertex.getValue() > 0);
-			}
-		},
-		new FilterFunction<Edge<Long, Long>>() {
-			public boolean filter(Edge<Long, Long> edge) {
-				// keep only edges with negative values
-				return (edge.getValue() < 0);
-			}
-		});
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val graph: Graph[Long, Long, Long] = ...
-
-// keep only vertices with positive values
-// and only edges with negative values
-graph.subgraph((vertex => vertex.getValue > 0), (edge => edge.getValue < 0))
-{% endhighlight %}
-</div>
-</div>
-
-<p class="text-center">
-    <img alt="Filter Transformations" width="80%" src="fig/gelly-filter.png"/>
-</p>
-
-* <strong>Join</strong>: Gelly provides specialized methods for joining the vertex and edge datasets with other input datasets. `joinWithVertices` joins the vertices with a `Tuple2` input data set. The join is performed using the vertex ID and the first field of the `Tuple2` input as the join keys. The method returns a new `Graph` where the vertex values have been updated according to a provided user-defined transformation function.
-Similarly, an input dataset can be joined with the edges, using one of three methods. `joinWithEdges` expects an input `DataSet` of `Tuple3` and joins on the composite key of both source and target vertex IDs. `joinWithEdgesOnSource` expects a `DataSet` of `Tuple2` and joins on the source key of the edges and the first attribute of the input dataset, while `joinWithEdgesOnTarget` expects a `DataSet` of `Tuple2` and joins on the target key of the edges and the first attribute of the input dataset. All three methods apply a transformation function on the edge and the input data set values.
-Note that if the input dataset contains a key multiple times, all Gelly join methods will only consider the first value encountered.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Graph<Long, Double, Double> network = ...
-
-DataSet<Tuple2<Long, LongValue>> vertexOutDegrees = network.outDegrees();
-
-// assign the transition probabilities as the edge weights
-Graph<Long, Double, Double> networkWithWeights = network.joinWithEdgesOnSource(vertexOutDegrees,
-				new EdgeJoinFunction<Double, LongValue>() {
-					public Double edgeJoin(Double edgeValue, LongValue inputValue) {
-						return edgeValue / inputValue.getValue();
-					}
-				});
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val network: Graph[Long, Double, Double] = ...
-
-val vertexOutDegrees: DataSet[(Long, LongValue)] = network.outDegrees
-
-// assign the transition probabilities as the edge weights
-val networkWithWeights = network.joinWithEdgesOnSource(vertexOutDegrees, (v1: Double, v2: LongValue) => v1 / v2.getValue)
-{% endhighlight %}
-</div>
-</div>
-
-* <strong>Reverse</strong>: the `reverse()` method returns a new `Graph` where the direction of all edges has been reversed.
-
-* <strong>Undirected</strong>: In Gelly, a `Graph` is always directed. Undirected graphs can be represented by adding all opposite-direction edges to a graph. For this purpose, Gelly provides the `getUndirected()` method.
-
-* <strong>Union</strong>: Gelly's `union()` method performs a union operation on the vertex and edge sets of the specified graph and the current graph. Duplicate vertices are removed from the resulting `Graph`, while if duplicate edges exist, these will be preserved.
-
-<p class="text-center">
-    <img alt="Union Transformation" width="50%" src="fig/gelly-union.png"/>
-</p>
-
-* <strong>Difference</strong>: Gelly's `difference()` method performs a difference on the vertex and edge sets of the current graph and the specified graph (see the combined sketch after this list).
-
-* <strong>Intersect</strong>: Gelly's `intersect()` method performs an intersect on the edge
- sets of the current graph and the specified graph. The result is a new `Graph` that contains all
- edges that exist in both input graphs. Two edges are considered equal if they have the same source
- identifier, target identifier and edge value. Vertices in the resulting graph have no
- value. If vertex values are required, one can, for example, retrieve them from one of the input graphs using
- the `joinWithVertices()` method.
- Depending on the parameter `distinct`, equal edges are either contained once in the resulting
- `Graph` or as often as there are pairs of equal edges in the input graphs.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// create first graph from edges {(1, 3, 12), (1, 3, 13), (1, 3, 13)}
-List<Edge<Long, Long>> edges1 = ...
-Graph<Long, NullValue, Long> graph1 = Graph.fromCollection(edges1, env);
-
-// create second graph from edges {(1, 3, 13)}
-List<Edge<Long, Long>> edges2 = ...
-Graph<Long, NullValue, Long> graph2 = Graph.fromCollection(edges2, env);
-
-// Using distinct = true results in {(1,3,13)}
-Graph<Long, NullValue, Long> intersect1 = graph1.intersect(graph2, true);
-
-// Using distinct = false results in {(1,3,13),(1,3,13)} as there is one edge pair
-Graph<Long, NullValue, Long> intersect2 = graph1.intersect(graph2, false);
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// create first graph from edges {(1, 3, 12), (1, 3, 13), (1, 3, 13)}
-val edges1: List[Edge[Long, Long]] = ...
-val graph1 = Graph.fromCollection(edges1, env)
-
-// create second graph from edges {(1, 3, 13)}
-val edges2: List[Edge[Long, Long]] = ...
-val graph2 = Graph.fromCollection(edges2, env)
-
-
-// Using distinct = true results in {(1,3,13)}
-val intersect1 = graph1.intersect(graph2, true)
-
-// Using distinct = false results in {(1,3,13),(1,3,13)} as there is one edge pair
-val intersect2 = graph1.intersect(graph2, false)
-{% endhighlight %}
-</div>
-</div>
-
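-A minimal combined sketch of the transformations described above without code, assuming two graphs over the same key and value types:
-
-{% highlight java %}
-Graph<Long, Long, Long> graph1 = ...
-Graph<Long, Long, Long> graph2 = ...
-
-// reverse the direction of every edge
-Graph<Long, Long, Long> reversed = graph1.reverse();
-
-// union of the vertex and edge sets; duplicate vertices are removed,
-// duplicate edges are preserved
-Graph<Long, Long, Long> unioned = graph1.union(graph2);
-
-// remove graph2's vertices (with their incident edges) and edges from graph1
-Graph<Long, Long, Long> diffed = graph1.difference(graph2);
-{% endhighlight %}
-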
-{% top %}
-
-Graph Mutations
------------
-
-Gelly includes the following methods for adding and removing vertices and edges from an input `Graph`:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// adds a Vertex to the Graph. If the Vertex already exists, it will not be added again.
-Graph<K, VV, EV> addVertex(final Vertex<K, VV> vertex)
-
-// adds a list of vertices to the Graph. If the vertices already exist in the graph, they will not be added again.
-Graph<K, VV, EV> addVertices(List<Vertex<K, VV>> verticesToAdd)
-
-// adds an Edge to the Graph. If the source and target vertices do not exist in the graph, they will also be added.
-Graph<K, VV, EV> addEdge(Vertex<K, VV> source, Vertex<K, VV> target, EV edgeValue)
-
-// adds a list of edges to the Graph. When adding an edge for a non-existing set of vertices, the edge is considered invalid and ignored.
-Graph<K, VV, EV> addEdges(List<Edge<K, EV>> newEdges)
-
-// removes the given Vertex and its edges from the Graph.
-Graph<K, VV, EV> removeVertex(Vertex<K, VV> vertex)
-
-// removes the given list of vertices and their edges from the Graph
-Graph<K, VV, EV> removeVertices(List<Vertex<K, VV>> verticesToBeRemoved)
-
-// removes *all* edges that match the given Edge from the Graph.
-Graph<K, VV, EV> removeEdge(Edge<K, EV> edge)
-
-// removes *all* edges that match the edges in the given list
-Graph<K, VV, EV> removeEdges(List<Edge<K, EV>> edgesToBeRemoved)
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// adds a Vertex to the Graph. If the Vertex already exists, it will not be added again.
-addVertex(vertex: Vertex[K, VV])
-
-// adds a list of vertices to the Graph. If the vertices already exist in the graph, they will not be added again.
-addVertices(verticesToAdd: List[Vertex[K, VV]])
-
-// adds an Edge to the Graph. If the source and target vertices do not exist in the graph, they will also be added.
-addEdge(source: Vertex[K, VV], target: Vertex[K, VV], edgeValue: EV)
-
-// adds a list of edges to the Graph. When adding an edge for a non-existing set of vertices, the edge is considered invalid and ignored.
-addEdges(edges: List[Edge[K, EV]])
-
-// removes the given Vertex and its edges from the Graph.
-removeVertex(vertex: Vertex[K, VV])
-
-// removes the given list of vertices and their edges from the Graph
-removeVertices(verticesToBeRemoved: List[Vertex[K, VV]])
-
-// removes *all* edges that match the given Edge from the Graph.
-removeEdge(edge: Edge[K, EV])
-
-// removes *all* edges that match the edges in the given list
-removeEdges(edgesToBeRemoved: List[Edge[K, EV]])
-{% endhighlight %}
-</div>
-</div>
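-
-A minimal usage sketch (the graph type and the concrete IDs and values are illustrative):
-
-{% highlight java %}
-Graph<Long, Long, String> graph = ...
-
-// add a vertex; a no-op if a vertex with ID 6 already exists
-Graph<Long, Long, String> withVertex = graph.addVertex(new Vertex<Long, Long>(6L, 6L));
-
-// remove all edges matching source 1, target 2, and value "foo"
-Graph<Long, Long, String> withoutEdge = withVertex.removeEdge(new Edge<Long, String>(1L, 2L, "foo"));
-{% endhighlight %}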
-
-Neighborhood Methods
------------
-
-Neighborhood methods allow vertices to perform an aggregation on their first-hop neighborhood.
-`reduceOnEdges()` can be used to compute an aggregation on the values of the neighboring edges of a vertex and `reduceOnNeighbors()` can be used to compute an aggregation on the values of the neighboring vertices. These methods assume associative and commutative aggregations and exploit combiners internally, significantly improving performance.
-The neighborhood scope is defined by the `EdgeDirection` parameter, which takes the values `IN`, `OUT` or `ALL`. `IN` will gather all in-coming edges (neighbors) of a vertex, `OUT` will gather all out-going edges (neighbors), while `ALL` will gather all edges (neighbors).
-
-For example, assume that you want to select the minimum weight of all out-edges for each vertex in the following graph:
-
-<p class="text-center">
-    <img alt="reduceOnEdges Example" width="50%" src="fig/gelly-example-graph.png"/>
-</p>
-
-The following code will collect the out-edges for each vertex and apply the `SelectMinWeight()` user-defined function on each of the resulting neighborhoods:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Graph<Long, Long, Double> graph = ...
-
-DataSet<Tuple2<Long, Double>> minWeights = graph.reduceOnEdges(new SelectMinWeight(), EdgeDirection.OUT);
-
-// user-defined function to select the minimum weight
-static final class SelectMinWeight implements ReduceEdgesFunction<Double> {
-
-		@Override
-		public Double reduceEdges(Double firstEdgeValue, Double secondEdgeValue) {
-			return Math.min(firstEdgeValue, secondEdgeValue);
-		}
-}
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val graph: Graph[Long, Long, Double] = ...
-
-val minWeights = graph.reduceOnEdges(new SelectMinWeight, EdgeDirection.OUT)
-
-// user-defined function to select the minimum weight
-final class SelectMinWeight extends ReduceEdgesFunction[Double] {
-	override def reduceEdges(firstEdgeValue: Double, secondEdgeValue: Double): Double = {
-		Math.min(firstEdgeValue, secondEdgeValue)
-	}
- }
-{% endhighlight %}
-</div>
-</div>
-
-<p class="text-center">
-    <img alt="reduceOnEdges Example" width="50%" src="fig/gelly-reduceOnEdges.png"/>
-</p>
-
-Similarly, assume that you would like to compute the sum of the values of all in-coming neighbors, for every vertex. The following code will collect the in-coming neighbors for each vertex and apply the `SumValues()` user-defined function on each neighborhood:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Graph<Long, Long, Double> graph = ...
-
-DataSet<Tuple2<Long, Long>> verticesWithSum = graph.reduceOnNeighbors(new SumValues(), EdgeDirection.IN);
-
-// user-defined function to sum the neighbor values
-static final class SumValues implements ReduceNeighborsFunction<Long> {
-
-		@Override
-		public Long reduceNeighbors(Long firstNeighbor, Long secondNeighbor) {
-			return firstNeighbor + secondNeighbor;
-		}
-}
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val graph: Graph[Long, Long, Double] = ...
-
-val verticesWithSum = graph.reduceOnNeighbors(new SumValues, EdgeDirection.IN)
-
-// user-defined function to sum the neighbor values
-final class SumValues extends ReduceNeighborsFunction[Long] {
-   	override def reduceNeighbors(firstNeighbor: Long, secondNeighbor: Long): Long = {
-    	firstNeighbor + secondNeighbor
-    }
-}
-{% endhighlight %}
-</div>
-</div>
-
-<p class="text-center">
-    <img alt="reduceOnNeighbors Example" width="70%" src="fig/gelly-reduceOnNeighbors.png"/>
-</p>
-
-When the aggregation function is not associative and commutative or when it is desirable to return more than one value per vertex, one can use the more general
-`groupReduceOnEdges()` and `groupReduceOnNeighbors()` methods.
-These methods return zero, one or more values per vertex and provide access to the whole neighborhood.
-
-For example, the following code will output all the vertex pairs which are connected with an edge having a weight of 0.5 or more:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Graph<Long, Long, Double> graph = ...
-
-DataSet<Tuple2<Vertex<Long, Long>, Vertex<Long, Long>>> vertexPairs = graph.groupReduceOnNeighbors(new SelectLargeWeightNeighbors(), EdgeDirection.OUT);
-
-// user-defined function to select the neighbors which have edges with weight > 0.5
-static final class SelectLargeWeightNeighbors implements NeighborsFunctionWithVertexValue<Long, Long, Double,
-		Tuple2<Vertex<Long, Long>, Vertex<Long, Long>>> {
-
-		@Override
-		public void iterateNeighbors(Vertex<Long, Long> vertex,
-				Iterable<Tuple2<Edge<Long, Double>, Vertex<Long, Long>>> neighbors,
-				Collector<Tuple2<Vertex<Long, Long>, Vertex<Long, Long>>> out) {
-
-			for (Tuple2<Edge<Long, Double>, Vertex<Long, Long>> neighbor : neighbors) {
-				if (neighbor.f0.f2 > 0.5) {
-					out.collect(new Tuple2<Vertex<Long, Long>, Vertex<Long, Long>>(vertex, neighbor.f1));
-				}
-			}
-		}
-}
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val graph: Graph[Long, Long, Double] = ...
-
-val vertexPairs = graph.groupReduceOnNeighbors(new SelectLargeWeightNeighbors, EdgeDirection.OUT)
-
-// user-defined function to select the neighbors which have edges with weight > 0.5
-final class SelectLargeWeightNeighbors extends NeighborsFunctionWithVertexValue[Long, Long, Double,
-  (Vertex[Long, Long], Vertex[Long, Long])] {
-
-	override def iterateNeighbors(vertex: Vertex[Long, Long],
-		neighbors: Iterable[(Edge[Long, Double], Vertex[Long, Long])],
-		out: Collector[(Vertex[Long, Long], Vertex[Long, Long])]) = {
-
-			for (neighbor <- neighbors) {
-				if (neighbor._1.getValue() > 0.5) {
-					out.collect((vertex, neighbor._2))
-				}
-			}
-		}
-   }
-{% endhighlight %}
-</div>
-</div>
-
-When the aggregation computation does not require access to the vertex value (for which the aggregation is performed), it is advised to use the more efficient `EdgesFunction` and `NeighborsFunction` for the user-defined functions. When access to the vertex value is required, one should use `EdgesFunctionWithVertexValue` and `NeighborsFunctionWithVertexValue` instead.
-
-{% top %}
-
-Graph Validation
------------
-
-Gelly provides a simple utility for performing validation checks on input graphs. Depending on the application context, a graph may or may not be valid according to certain criteria. For example, a user might need to validate whether their graph contains duplicate edges or whether its structure is bipartite. In order to validate a graph, one can define a custom `GraphValidator` and implement its `validate()` method. `InvalidVertexIdsValidator` is Gelly's pre-defined validator. It checks that the edge set contains valid vertex IDs, i.e. that all source and target vertex IDs
-of the edges also exist in the vertex ID set.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// create a list of vertices with IDs = {1, 2, 3, 4, 5}
-List<Vertex<Long, Long>> vertices = ...
-
-// create a list of edges with IDs = {(1, 2), (1, 3), (2, 4), (5, 6)}
-List<Edge<Long, Long>> edges = ...
-
-Graph<Long, Long, Long> graph = Graph.fromCollection(vertices, edges, env);
-
-// will return false: 6 is an invalid ID
-graph.validate(new InvalidVertexIdsValidator<Long, Long, Long>());
-
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// create a list of vertices with IDs = {1, 2, 3, 4, 5}
-val vertices: List[Vertex[Long, Long]] = ...
-
-// create a list of edges with IDs = {(1, 2), (1, 3), (2, 4), (5, 6)}
-val edges: List[Edge[Long, Long]] = ...
-
-val graph = Graph.fromCollection(vertices, edges, env)
-
-// will return false: 6 is an invalid ID
-graph.validate(new InvalidVertexIdsValidator[Long, Long, Long])
-
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/gelly/graph_generators.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/gelly/graph_generators.md b/docs/apis/batch/libs/gelly/graph_generators.md
deleted file mode 100644
index 029bede..0000000
--- a/docs/apis/batch/libs/gelly/graph_generators.md
+++ /dev/null
@@ -1,657 +0,0 @@
----
-title: Graph Generators
-
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: gelly
-sub-nav-title: Graph Generators
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-Gelly provides a collection of scalable graph generators. Each generator is
-
-* parallelizable, in order to create large datasets
-* scale-free, generating the same graph regardless of parallelism
-* thrifty, using as few operators as possible
-
-Graph generators are configured using the builder pattern. The parallelism of generator
-operators can be set explicitly by calling `setParallelism(parallelism)`. Lowering the
-parallelism will reduce the allocation of memory and network buffers.
-
-Graph-specific configuration must be called first, then configuration common to all
-generators, and lastly the call to `generate()`. The following example configures a
-grid graph with two dimensions, configures the parallelism, and generates the graph.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-boolean wrapEndpoints = false;
-
-int parallelism = 4;
-
-Graph<LongValue,NullValue,NullValue> graph = new GridGraph(env)
-    .addDimension(2, wrapEndpoints)
-    .addDimension(4, wrapEndpoints)
-    .setParallelism(parallelism)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.GridGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val wrapEndpoints = false
-
-val parallelism = 4
-
-val graph = new GridGraph(env.getJavaEnv).addDimension(2, wrapEndpoints).addDimension(4, wrapEndpoints).setParallelism(parallelism).generate()
-{% endhighlight %}
-</div>
-</div>
-
-## Complete Graph
-
-An undirected graph connecting every distinct pair of vertices.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-long vertexCount = 5;
-
-Graph<LongValue,NullValue,NullValue> graph = new CompleteGraph(env, vertexCount)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.CompleteGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val vertexCount = 5
-
-val graph = new CompleteGraph(env.getJavaEnv, vertexCount).generate()
-{% endhighlight %}
-</div>
-</div>
-
-<svg class="graph" width="540" height="540"
-    xmlns="http://www.w3.org/2000/svg"
-    xmlns:xlink="http://www.w3.org/1999/xlink">
-
-    <line x1="270" y1="40" x2="489" y2="199" />
-    <line x1="270" y1="40" x2="405" y2="456" />
-    <line x1="270" y1="40" x2="135" y2="456" />
-    <line x1="270" y1="40" x2="51" y2="199" />
-
-    <line x1="489" y1="199" x2="405" y2="456" />
-    <line x1="489" y1="199" x2="135" y2="456" />
-    <line x1="489" y1="199" x2="51" y2="199" />
-
-    <line x1="405" y1="456" x2="135" y2="456" />
-    <line x1="405" y1="456" x2="51" y2="199" />
-
-    <line x1="135" y1="456" x2="51" y2="199" />
-
-    <circle cx="270" cy="40" r="20" />
-    <text x="270" y="40">0</text>
-
-    <circle cx="489" cy="199" r="20" />
-    <text x="489" y="199">1</text>
-
-    <circle cx="405" cy="456" r="20" />
-    <text x="405" y="456">2</text>
-
-    <circle cx="135" cy="456" r="20" />
-    <text x="135" y="456">3</text>
-
-    <circle cx="51" cy="199" r="20" />
-    <text x="51" y="199">4</text>
-</svg>
-
-## Cycle Graph
-
-An undirected graph where all edges form a single cycle.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-long vertexCount = 5;
-
-Graph<LongValue,NullValue,NullValue> graph = new CycleGraph(env, vertexCount)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.CycleGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val vertexCount = 5
-
-val graph = new CycleGraph(env.getJavaEnv, vertexCount).generate()
-{% endhighlight %}
-</div>
-</div>
-
-<svg class="graph" width="540" height="540"
-    xmlns="http://www.w3.org/2000/svg"
-    xmlns:xlink="http://www.w3.org/1999/xlink">
-
-    <line x1="270" y1="40" x2="489" y2="199" />
-    <line x1="489" y1="199" x2="405" y2="456" />
-    <line x1="405" y1="456" x2="135" y2="456" />
-    <line x1="135" y1="456" x2="51" y2="199" />
-    <line x1="51" y1="199" x2="270" y2="40" />
-
-    <circle cx="270" cy="40" r="20" />
-    <text x="270" y="40">0</text>
-
-    <circle cx="489" cy="199" r="20" />
-    <text x="489" y="199">1</text>
-
-    <circle cx="405" cy="456" r="20" />
-    <text x="405" y="456">2</text>
-
-    <circle cx="135" cy="456" r="20" />
-    <text x="135" y="456">3</text>
-
-    <circle cx="51" cy="199" r="20" />
-    <text x="51" y="199">4</text>
-</svg>
-
-## Empty Graph
-
-The graph containing no edges.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-long vertexCount = 5;
-
-Graph<LongValue,NullValue,NullValue> graph = new EmptyGraph(env, vertexCount)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.EmptyGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val vertexCount = 5
-
-val graph = new EmptyGraph(env.getJavaEnv, vertexCount).generate()
-{% endhighlight %}
-</div>
-</div>
-
-<svg class="graph" width="540" height="80"
-    xmlns="http://www.w3.org/2000/svg"
-    xmlns:xlink="http://www.w3.org/1999/xlink">
-
-    <circle cx="30" cy="40" r="20" />
-    <text x="30" y="40">0</text>
-
-    <circle cx="150" cy="40" r="20" />
-    <text x="150" y="40">1</text>
-
-    <circle cx="270" cy="40" r="20" />
-    <text x="270" y="40">2</text>
-
-    <circle cx="390" cy="40" r="20" />
-    <text x="390" y="40">3</text>
-
-    <circle cx="510" cy="40" r="20" />
-    <text x="510" y="40">4</text>
-</svg>
-
-## Grid Graph
-
-An undirected graph connecting vertices in a regular tiling in one or more dimensions.
-Each dimension is configured separately. When the dimension size is at least three, the
-endpoints can optionally be connected by setting `wrapEndpoints`. Changing the following
-example to `addDimension(4, true)` would connect `0` to `3` and `4` to `7`.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-boolean wrapEndpoints = false;
-
-Graph<LongValue,NullValue,NullValue> graph = new GridGraph(env)
-    .addDimension(2, wrapEndpoints)
-    .addDimension(4, wrapEndpoints)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.GridGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val wrapEndpoints = false
-
-val graph = new GridGraph(env.getJavaEnv).addDimension(2, wrapEndpoints).addDimension(4, wrapEndpoints).generate()
-{% endhighlight %}
-</div>
-</div>
-
-<svg class="graph" width="540" height="200"
-    xmlns="http://www.w3.org/2000/svg"
-    xmlns:xlink="http://www.w3.org/1999/xlink">
-
-    <line x1="30" y1="40" x2="510" y2="40" />
-    <line x1="30" y1="160" x2="510" y2="160" />
-
-    <line x1="30" y1="40" x2="30" y2="160" />
-    <line x1="190" y1="40" x2="190" y2="160" />
-    <line x1="350" y1="40" x2="350" y2="160" />
-    <line x1="510" y1="40" x2="510" y2="160" />
-
-    <circle cx="30" cy="40" r="20" />
-    <text x="30" y="40">0</text>
-
-    <circle cx="190" cy="40" r="20" />
-    <text x="190" y="40">1</text>
-
-    <circle cx="350" cy="40" r="20" />
-    <text x="350" y="40">2</text>
-
-    <circle cx="510" cy="40" r="20" />
-    <text x="510" y="40">3</text>
-
-    <circle cx="30" cy="160" r="20" />
-    <text x="30" y="160">4</text>
-
-    <circle cx="190" cy="160" r="20" />
-    <text x="190" y="160">5</text>
-
-    <circle cx="350" cy="160" r="20" />
-    <text x="350" y="160">6</text>
-
-    <circle cx="510" cy="160" r="20" />
-    <text x="510" y="160">7</text>
-</svg>
-
-## Hypercube Graph
-
-An undirected graph where edges form an n-dimensional hypercube. Each vertex
-in a hypercube connects to one other vertex in each dimension.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-long dimensions = 3;
-
-Graph<LongValue,NullValue,NullValue> graph = new HypercubeGraph(env, dimensions)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.HypercubeGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val dimensions = 3
-
-val graph = new HypercubeGraph(env.getJavaEnv, dimensions).generate()
-{% endhighlight %}
-</div>
-</div>
-
-<svg class="graph" width="540" height="320"
-    xmlns="http://www.w3.org/2000/svg"
-    xmlns:xlink="http://www.w3.org/1999/xlink">
-
-    <line x1="190" y1="120" x2="350" y2="120" />
-    <line x1="190" y1="200" x2="350" y2="200" />
-    <line x1="190" y1="120" x2="190" y2="200" />
-    <line x1="350" y1="120" x2="350" y2="200" />
-
-    <line x1="30" y1="40" x2="510" y2="40" />
-    <line x1="30" y1="280" x2="510" y2="280" />
-    <line x1="30" y1="40" x2="30" y2="280" />
-    <line x1="510" y1="40" x2="510" y2="280" />
-
-    <line x1="190" y1="120" x2="30" y2="40" />
-    <line x1="350" y1="120" x2="510" y2="40" />
-    <line x1="190" y1="200" x2="30" y2="280" />
-    <line x1="350" y1="200" x2="510" y2="280" />
-
-    <circle cx="190" cy="120" r="20" />
-    <text x="190" y="120">0</text>
-
-    <circle cx="350" cy="120" r="20" />
-    <text x="350" y="120">1</text>
-
-    <circle cx="190" cy="200" r="20" />
-    <text x="190" y="200">2</text>
-
-    <circle cx="350" cy="200" r="20" />
-    <text x="350" y="200">3</text>
-
-    <circle cx="30" cy="40" r="20" />
-    <text x="30" y="40">4</text>
-
-    <circle cx="510" cy="40" r="20" />
-    <text x="510" y="40">5</text>
-
-    <circle cx="30" cy="280" r="20" />
-    <text x="30" y="280">6</text>
-
-    <circle cx="510" cy="280" r="20" />
-    <text x="510" y="280">7</text>
-</svg>
-
-## Path Graph
-
-An undirected graph where all edges form a single path.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-long vertexCount = 5;
-
-Graph<LongValue,NullValue,NullValue> graph = new PathGraph(env, vertexCount)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.PathGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val vertexCount = 5
-
-val graph = new PathGraph(env.getJavaEnv, vertexCount).generate()
-{% endhighlight %}
-</div>
-</div>
-
-<svg class="graph" width="540" height="80"
-    xmlns="http://www.w3.org/2000/svg"
-    xmlns:xlink="http://www.w3.org/1999/xlink">
-
-    <line x1="30" y1="40" x2="510" y2="40" />
-
-    <circle cx="30" cy="40" r="20" />
-    <text x="30" y="40">0</text>
-
-    <circle cx="150" cy="40" r="20" />
-    <text x="150" y="40">1</text>
-
-    <circle cx="270" cy="40" r="20" />
-    <text x="270" y="40">2</text>
-
-    <circle cx="390" cy="40" r="20" />
-    <text x="390" y="40">3</text>
-
-    <circle cx="510" cy="40" r="20" />
-    <text x="510" y="40">4</text>
-</svg>
-
-## RMat Graph
-
-A directed or undirected power-law graph generated using the
-[Recursive Matrix (R-Mat)](http://www.cs.cmu.edu/~christos/PUBLICATIONS/siam04.pdf) model.
-
-RMat is a stochastic generator configured with a source of randomness implementing the
-`RandomGenerableFactory` interface. Provided implementations are `JDKRandomGeneratorFactory`
-and `MersenneTwisterFactory`. These generate an initial sequence of random values which are
-then used as seeds for generating the edges.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-RandomGenerableFactory<JDKRandomGenerator> rnd = new JDKRandomGeneratorFactory();
-
-int vertexCount = 1 << scale;
-int edgeCount = edgeFactor * vertexCount;
-
-Graph<LongValue,NullValue,NullValue> graph = new RMatGraph<>(env, rnd, vertexCount, edgeCount)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.RMatGraph
-import org.apache.flink.graph.generator.random.JDKRandomGeneratorFactory
-
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val rnd = new JDKRandomGeneratorFactory()
-
-val vertexCount = 1 << scale
-val edgeCount = edgeFactor * vertexCount
-
-val graph = new RMatGraph(env.getJavaEnv, rnd, vertexCount, edgeCount).generate()
-{% endhighlight %}
-</div>
-</div>
-
-The default RMat constants can be overridden as shown in the following example.
-The constants define the interdependence of bits from each generated edge's source
-and target labels. RMat noise can be enabled to progressively perturb the
-constants while each edge is generated.
-
-The RMat generator can be configured to produce a simple graph by removing self-loops
-and duplicate edges. Symmetrization is performed either by a "clip-and-flip" throwing away
-the half matrix above the diagonal or a full "flip" preserving and mirroring all edges.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-RandomGenerableFactory<JDKRandomGenerator> rnd = new JDKRandomGeneratorFactory();
-
-int vertexCount = 1 << scale;
-int edgeCount = edgeFactor * vertexCount;
-
-boolean clipAndFlip = false;
-
-Graph<LongValue,NullValue,NullValue> graph = new RMatGraph<>(env, rnd, vertexCount, edgeCount)
-    .setConstants(0.57f, 0.19f, 0.19f)
-    .setNoise(true, 0.10f)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.RMatGraph
-import org.apache.flink.graph.generator.random.JDKRandomGeneratorFactory
-
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-val rnd = new JDKRandomGeneratorFactory()
-
-val vertexCount = 1 << scale
-val edgeCount = edgeFactor * vertexCount
-
-val clipAndFlip = false
-
-val graph = new RMatGraph(env.getJavaEnv, rnd, vertexCount, edgeCount).setConstants(0.57f, 0.19f, 0.19f).setNoise(true, 0.10f).generate()
-{% endhighlight %}
-</div>
-</div>
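-
-The `clipAndFlip` flag declared in the Java example above feeds into this simple-graph
-configuration. As a minimal sketch (assuming this version of the generator exposes a
-`setSimpleGraph(boolean, boolean)` builder method for the purpose):
-
-{% highlight java %}
-// hedged sketch: remove self-loops and duplicate edges,
-// symmetrizing via clip-and-flip (setSimpleGraph method assumed)
-boolean clipAndFlip = true;
-
-Graph<LongValue,NullValue,NullValue> graph = new RMatGraph<>(env, rnd, vertexCount, edgeCount)
-    .setSimpleGraph(true, clipAndFlip)
-    .generate();
-{% endhighlight %}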
-
-## Singleton Edge Graph
-
-An undirected graph containing isolated two-paths.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-long vertexPairCount = 4;
-
-// note: configured with the number of vertex pairs
-Graph<LongValue,NullValue,NullValue> graph = new SingletonEdgeGraph(env, vertexPairCount)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.SingletonEdgeGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val vertexPairCount = 4
-
-// note: configured with the number of vertex pairs
-val graph = new SingletonEdgeGraph(env.getJavaEnv, vertexPairCount).generate()
-{% endhighlight %}
-</div>
-</div>
-
-<svg class="graph" width="540" height="200"
-    xmlns="http://www.w3.org/2000/svg"
-    xmlns:xlink="http://www.w3.org/1999/xlink">
-
-    <line x1="30" y1="40" x2="190" y2="40" />
-    <line x1="350" y1="40" x2="510" y2="40" />
-    <line x1="30" y1="160" x2="190" y2="160" />
-    <line x1="350" y1="160" x2="510" y2="160" />
-
-    <circle cx="30" cy="40" r="20" />
-    <text x="30" y="40">0</text>
-
-    <circle cx="190" cy="40" r="20" />
-    <text x="190" y="40">1</text>
-
-    <circle cx="350" cy="40" r="20" />
-    <text x="350" y="40">2</text>
-
-    <circle cx="510" cy="40" r="20" />
-    <text x="510" y="40">3</text>
-
-    <circle cx="30" cy="160" r="20" />
-    <text x="30" y="160">4</text>
-
-    <circle cx="190" cy="160" r="20" />
-    <text x="190" y="160">5</text>
-
-    <circle cx="350" cy="160" r="20" />
-    <text x="350" y="160">6</text>
-
-    <circle cx="510" cy="160" r="20" />
-    <text x="510" y="160">7</text>
-</svg>
-
-## Star Graph
-
-An undirected graph containing a single central vertex connected to all other leaf vertices.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-long vertexCount = 6;
-
-Graph<LongValue,NullValue,NullValue> graph = new StarGraph(env, vertexCount)
-    .generate();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.graph.generator.StarGraph
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-
-val vertexCount = 6
-
-val graph = new StarGraph(env.getJavaEnv, vertexCount).generate()
-{% endhighlight %}
-</div>
-</div>
-
-<svg class="graph" width="540" height="540"
-    xmlns="http://www.w3.org/2000/svg"
-    xmlns:xlink="http://www.w3.org/1999/xlink">
-
-    <line x1="270" y1="270" x2="270" y2="40" />
-    <line x1="270" y1="270" x2="489" y2="199" />
-    <line x1="270" y1="270" x2="405" y2="456" />
-    <line x1="270" y1="270" x2="135" y2="456" />
-    <line x1="270" y1="270" x2="51" y2="199" />
-
-    <circle cx="270" cy="270" r="20" />
-    <text x="270" y="270">0</text>
-
-    <circle cx="270" cy="40" r="20" />
-    <text x="270" y="40">1</text>
-
-    <circle cx="489" cy="199" r="20" />
-    <text x="489" y="199">2</text>
-
-    <circle cx="405" cy="456" r="20" />
-    <text x="405" y="456">3</text>
-
-    <circle cx="135" cy="456" r="20" />
-    <text x="135" y="456">4</text>
-
-    <circle cx="51" cy="199" r="20" />
-    <text x="51" y="199">5</text>
-</svg>
-
-{% top %}


[82/89] [abbrv] flink git commit: [FLINK-4414] [cluster] Add getAddress method to RpcGateway

Posted by se...@apache.org.
[FLINK-4414] [cluster] Add getAddress method to RpcGateway

The RpcGateway.getAddress method allows retrieval of the fully qualified address of the
associated RpcEndpoint.

This closes #2392.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/24520140
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/24520140
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/24520140

Branch: refs/heads/flip-6
Commit: 2452014057432a061c55caa70ac5140e78b3cbd5
Parents: 4501ca1
Author: Till Rohrmann <tr...@apache.org>
Authored: Thu Aug 18 16:34:47 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:04 2016 +0200

----------------------------------------------------------------------
 .../apache/flink/runtime/rpc/RpcEndpoint.java   |  6 +-----
 .../apache/flink/runtime/rpc/RpcGateway.java    |  7 +++++++
 .../apache/flink/runtime/rpc/RpcService.java    | 11 ----------
 .../runtime/rpc/akka/AkkaInvocationHandler.java | 14 +++++++++++--
 .../flink/runtime/rpc/akka/AkkaRpcService.java  | 21 ++++++--------------
 .../runtime/rpc/akka/AkkaRpcActorTest.java      | 16 +++++++++++++++
 6 files changed, 42 insertions(+), 33 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/24520140/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
index a28bc14..7b3f8a1 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
@@ -69,9 +69,6 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	/** Self gateway which can be used to schedule asynchronous calls on yourself */
 	private final C self;
 
-	/** the fully qualified address of the this RPC endpoint */
-	private final String selfAddress;
-
 	/** The main thread execution context to be used to execute future callbacks in the main thread
 	 * of the executing rpc server. */
 	private final ExecutionContext mainThreadExecutionContext;
@@ -92,7 +89,6 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 		this.selfGatewayType = ReflectionUtil.getTemplateType1(getClass());
 		this.self = rpcService.startServer(this);
 		
-		this.selfAddress = rpcService.getAddress(self);
 		this.mainThreadExecutionContext = new MainThreadExecutionContext((MainThreadExecutor) self);
 	}
 
@@ -156,7 +152,7 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	 * @return Fully qualified address of the underlying RPC endpoint
 	 */
 	public String getAddress() {
-		return selfAddress;
+		return self.getAddress();
 	}
 
 	/**

http://git-wip-us.apache.org/repos/asf/flink/blob/24520140/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java
index e3a16b4..81075ee 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java
@@ -22,4 +22,11 @@ package org.apache.flink.runtime.rpc;
  * Rpc gateway interface which has to be implemented by Rpc gateways.
  */
 public interface RpcGateway {
+
+	/**
+	 * Returns the fully qualified address under which the associated rpc endpoint is reachable.
+	 *
+	 * @return Fully qualified address under which the associated rpc endpoint is reachable
+	 */
+	String getAddress();
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/24520140/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
index fabdb05..bc0f5cb 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
@@ -65,17 +65,6 @@ public interface RpcService {
 	void stopService();
 
 	/**
-	 * Get the fully qualified address of the underlying rpc server represented by the self gateway.
-	 * It must be possible to connect from a remote host to the rpc server via the returned fully
-	 * qualified address.
-	 *
-	 * @param selfGateway Self gateway associated with the underlying rpc server
-	 * @param <C> Type of the rpc gateway
-	 * @return Fully qualified address
-	 */
-	<C extends RpcGateway> String getAddress(C selfGateway);
-
-	/**
 	 * Gets the execution context, provided by this RPC service. This execution
 	 * context can be used for example for the {@code onComplete(...)} or {@code onSuccess(...)}
 	 * methods of Futures.

http://git-wip-us.apache.org/repos/asf/flink/blob/24520140/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
index 524bf74..bfa04f6 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
@@ -23,6 +23,7 @@ import akka.pattern.Patterns;
 import akka.util.Timeout;
 import org.apache.flink.api.java.tuple.Tuple2;
 import org.apache.flink.runtime.rpc.MainThreadExecutor;
+import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.RpcTimeout;
 import org.apache.flink.runtime.rpc.StartStoppable;
 import org.apache.flink.runtime.rpc.akka.messages.CallAsync;
@@ -55,6 +56,8 @@ import static org.apache.flink.util.Preconditions.checkArgument;
 class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThreadExecutor, StartStoppable {
 	private static final Logger LOG = Logger.getLogger(AkkaInvocationHandler.class);
 
+	private final String address;
+
 	private final ActorRef rpcEndpoint;
 
 	// whether the actor ref is local and thus no message serialization is needed
@@ -65,7 +68,8 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 
 	private final long maximumFramesize;
 
-	AkkaInvocationHandler(ActorRef rpcEndpoint, Timeout timeout, long maximumFramesize) {
+	AkkaInvocationHandler(String address, ActorRef rpcEndpoint, Timeout timeout, long maximumFramesize) {
+		this.address = Preconditions.checkNotNull(address);
 		this.rpcEndpoint = Preconditions.checkNotNull(rpcEndpoint);
 		this.isLocal = this.rpcEndpoint.path().address().hasLocalScope();
 		this.timeout = Preconditions.checkNotNull(timeout);
@@ -79,7 +83,8 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 		Object result;
 
 		if (declaringClass.equals(AkkaGateway.class) || declaringClass.equals(MainThreadExecutor.class) ||
-			declaringClass.equals(Object.class) || declaringClass.equals(StartStoppable.class)) {
+			declaringClass.equals(Object.class) || declaringClass.equals(StartStoppable.class) ||
+			declaringClass.equals(RpcGateway.class)) {
 			result = method.invoke(this, args);
 		} else {
 			String methodName = method.getName();
@@ -290,4 +295,9 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 
 		return false;
 	}
+
+	@Override
+	public String getAddress() {
+		return address;
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/24520140/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
index d987c2f..00a6932 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -102,7 +102,9 @@ public class AkkaRpcService implements RpcService {
 			public C apply(Object obj) {
 				ActorRef actorRef = ((ActorIdentity) obj).getRef();
 
-				InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout, maximumFramesize);
+				final String address = AkkaUtils.getAkkaURL(actorSystem, actorRef);
+
+				InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(address, actorRef, timeout, maximumFramesize);
 
 				// Rather than using the System ClassLoader directly, we derive the ClassLoader
 				// from this class . That works better in cases where Flink runs embedded and all Flink
@@ -135,7 +137,9 @@ public class AkkaRpcService implements RpcService {
 
 		LOG.info("Starting RPC endpoint for {} at {} .", rpcEndpoint.getClass().getName(), actorRef.path());
 
-		InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout, maximumFramesize);
+		final String address = AkkaUtils.getAkkaURL(actorSystem, actorRef);
+
+		InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(address, actorRef, timeout, maximumFramesize);
 
 		// Rather than using the System ClassLoader directly, we derive the ClassLoader
 		// from this class . That works better in cases where Flink runs embedded and all Flink
@@ -197,19 +201,6 @@ public class AkkaRpcService implements RpcService {
 	}
 
 	@Override
-	public String getAddress(RpcGateway selfGateway) {
-		checkState(!stopped, "RpcService is stopped");
-
-		if (selfGateway instanceof AkkaGateway) {
-			ActorRef actorRef = ((AkkaGateway) selfGateway).getRpcEndpoint();
-			return AkkaUtils.getAkkaURL(actorSystem, actorRef);
-		} else {
-			String className = AkkaGateway.class.getName();
-			throw new IllegalArgumentException("Cannot get address for non " + className + '.');
-		}
-	}
-
-	@Override
 	public ExecutionContext getExecutionContext() {
 		return actorSystem.dispatcher();
 	}

http://git-wip-us.apache.org/repos/asf/flink/blob/24520140/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java
index 1653fac..82d13f0 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java
@@ -34,6 +34,7 @@ import scala.concurrent.Future;
 
 import java.util.concurrent.TimeUnit;
 
+import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertThat;
 
 public class AkkaRpcActorTest extends TestLogger {
@@ -57,6 +58,21 @@ public class AkkaRpcActorTest extends TestLogger {
 	}
 
 	/**
+	 * Tests that the rpc endpoint and the associated rpc gateway have the same addresses.
+	 * @throws Exception
+	 */
+	@Test
+	public void testAddressResolution() throws Exception {
+		DummyRpcEndpoint rpcEndpoint = new DummyRpcEndpoint(akkaRpcService);
+
+		Future<DummyRpcGateway> futureRpcGateway = akkaRpcService.connect(rpcEndpoint.getAddress(), DummyRpcGateway.class);
+
+		DummyRpcGateway rpcGateway = Await.result(futureRpcGateway, timeout.duration());
+
+		assertEquals(rpcEndpoint.getAddress(), rpcGateway.getAddress());
+	}
+
+	/**
 	 * Tests that the {@link AkkaRpcActor} stashes messages until the corresponding
 	 * {@link RpcEndpoint} has been started.
 	 */


[21/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/examples.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/examples.md b/docs/dev/batch/examples.md
new file mode 100644
index 0000000..63d6c7a
--- /dev/null
+++ b/docs/dev/batch/examples.md
@@ -0,0 +1,519 @@
+---
+title:  "Bundled Examples"
+nav-title: Examples
+nav-parent_id: batch
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The following example programs showcase different applications of Flink
+from simple word counting to graph algorithms. The code samples illustrate the
+use of [Flink's API](index.html).
+
+The full source code of the following and more examples can be found in the __flink-examples-batch__
+or __flink-examples-streaming__ module of the Flink source repository.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+## Running an example
+
+To run a Flink example, we assume you have a running Flink instance available. The "Setup" tab in the navigation describes various ways of starting Flink.
+
+The easiest way is to run the `./bin/start-local.sh` script, which starts a JobManager locally.
+
+Each binary release of Flink contains an `examples` directory with jar files for each of the examples on this page.
+
+To run the WordCount example, issue the following command:
+
+~~~bash
+./bin/flink run ./examples/batch/WordCount.jar
+~~~
+
+The other examples can be started in a similar way.
+
+Note that many examples run without passing any arguments, using built-in data. To run WordCount with real data, you have to pass the path to the data:
+
+~~~bash
+./bin/flink run ./examples/batch/WordCount.jar --input /path/to/some/text/data --output /path/to/result
+~~~
+
+Note that non-local file systems require a scheme prefix, such as `hdfs://`.
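+
+For instance, running WordCount against HDFS might look like this (the paths here are made up for illustration):
+
+~~~bash
+./bin/flink run ./examples/batch/WordCount.jar --input hdfs:///path/to/text --output hdfs:///path/to/result
+~~~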
+
+
+## Word Count
+WordCount is the "Hello World" of Big Data processing systems. It computes the frequency of words in a text collection. The algorithm works in two steps: first, the text is split into individual words; second, the words are grouped and counted.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+DataSet<String> text = env.readTextFile("/path/to/file");
+
+DataSet<Tuple2<String, Integer>> counts =
+        // split up the lines in pairs (2-tuples) containing: (word,1)
+        text.flatMap(new Tokenizer())
+        // group by the tuple field "0" and sum up tuple field "1"
+        .groupBy(0)
+        .sum(1);
+
+counts.writeAsCsv(outputPath, "\n", " ");
+
+// User-defined functions
+public static class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
+
+    @Override
+    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
+        // normalize and split the line
+        String[] tokens = value.toLowerCase().split("\\W+");
+
+        // emit the pairs
+        for (String token : tokens) {
+            if (token.length() > 0) {
+                out.collect(new Tuple2<String, Integer>(token, 1));
+            }   
+        }
+    }
+}
+~~~
+
+The {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/wordcount/WordCount.java  "WordCount example" %} implements the above described algorithm with input parameters: `--input <path> --output <path>`. As test data, any text file will do.
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// get input data
+val text = env.readTextFile("/path/to/file")
+
+val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
+  .map { (_, 1) }
+  .groupBy(0)
+  .sum(1)
+
+counts.writeAsCsv(outputPath, "\n", " ")
+~~~
+
+The {% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/wordcount/WordCount.scala  "WordCount example" %} implements the above described algorithm with input parameters: `--input <path> --output <path>`. As test data, any text file will do.
+
+
+</div>
+</div>
+
+## Page Rank
+
+The PageRank algorithm computes the "importance" of pages in a graph defined by links, which point from one page to another. It is an iterative graph algorithm, which means that it repeatedly applies the same computation. In each iteration, each page distributes its current rank over all its neighbors and computes its new rank as a taxed sum of the ranks it received from its neighbors. The PageRank algorithm was popularized by the Google search engine, which uses the importance of webpages to rank the results of search queries.
+
+In this simple example, PageRank is implemented with a [bulk iteration](iterations.html) and a fixed number of iterations.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// read the pages and initial ranks by parsing a CSV file
+DataSet<Tuple2<Long, Double>> pagesWithRanks = env.readCsvFile(pagesInputPath)
+						   .types(Long.class, Double.class);
+
+// the links are encoded as an adjacency list: (page-id, Array(neighbor-ids))
+DataSet<Tuple2<Long, Long[]>> pageLinkLists = getLinksDataSet(env);
+
+// set iterative data set
+IterativeDataSet<Tuple2<Long, Double>> iteration = pagesWithRanks.iterate(maxIterations);
+
+DataSet<Tuple2<Long, Double>> newRanks = iteration
+        // join pages with outgoing edges and distribute rank
+        .join(pageLinkLists).where(0).equalTo(0).flatMap(new JoinVertexWithEdgesMatch())
+        // collect and sum ranks
+        .groupBy(0).sum(1)
+        // apply dampening factor
+        .map(new Dampener(DAMPENING_FACTOR, numPages));
+
+DataSet<Tuple2<Long, Double>> finalPageRanks = iteration.closeWith(
+        newRanks,
+        newRanks.join(iteration).where(0).equalTo(0)
+        // termination condition
+        .filter(new EpsilonFilter()));
+
+finalPageRanks.writeAsCsv(outputPath, "\n", " ");
+
+// User-defined functions
+
+public static final class JoinVertexWithEdgesMatch
+                    implements FlatJoinFunction<Tuple2<Long, Double>, Tuple2<Long, Long[]>,
+                                            Tuple2<Long, Double>> {
+
+    @Override
+    public void join(Tuple2<Long, Double> page, Tuple2<Long, Long[]> adj,
+                        Collector<Tuple2<Long, Double>> out) {
+        Long[] neighbors = adj.f1;
+        double rank = page.f1;
+        double rankToDistribute = rank / ((double) neighbors.length);
+
+        for (int i = 0; i < neighbors.length; i++) {
+            out.collect(new Tuple2<Long, Double>(neighbors[i], rankToDistribute));
+        }
+    }
+}
+
+public static final class Dampener implements MapFunction<Tuple2<Long,Double>, Tuple2<Long,Double>> {
+    private final double dampening, randomJump;
+
+    public Dampener(double dampening, double numVertices) {
+        this.dampening = dampening;
+        this.randomJump = (1 - dampening) / numVertices;
+    }
+
+    @Override
+    public Tuple2<Long, Double> map(Tuple2<Long, Double> value) {
+        value.f1 = (value.f1 * dampening) + randomJump;
+        return value;
+    }
+}
+
+public static final class EpsilonFilter
+                implements FilterFunction<Tuple2<Tuple2<Long, Double>, Tuple2<Long, Double>>> {
+
+    @Override
+    public boolean filter(Tuple2<Tuple2<Long, Double>, Tuple2<Long, Double>> value) {
+        return Math.abs(value.f0.f1 - value.f1.f1) > EPSILON;
+    }
+}
+~~~
+
+The {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/graph/PageRank.java "PageRank program" %} implements the above example.
+It requires the following parameters to run: `--pages <path> --links <path> --output <path> --numPages <n> --iterations <n>`.
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+// User-defined types
+case class Link(sourceId: Long, targetId: Long)
+case class Page(pageId: Long, rank: Double)
+case class AdjacencyList(sourceId: Long, targetIds: Array[Long])
+
+// set up execution environment
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// read the pages and initial ranks by parsing a CSV file
+val pages = env.readCsvFile[Page](pagesInputPath)
+
+// the links are encoded as an adjacency list: (page-id, Array(neighbor-ids))
+val links = env.readCsvFile[Link](linksInputPath)
+
+// assign initial ranks to pages
+val pagesWithRanks = pages.map(p => Page(p.pageId, 1.0 / numPages))
+
+// build adjacency list from link input
+val adjacencyLists = links
+  // initialize lists
+  .map(e => AdjacencyList(e.sourceId, Array(e.targetId)))
+  // concatenate lists
+  .groupBy("sourceId").reduce {
+  (l1, l2) => AdjacencyList(l1.sourceId, l1.targetIds ++ l2.targetIds)
+  }
+
+// start iteration
+val finalRanks = pagesWithRanks.iterateWithTermination(maxIterations) {
+  currentRanks =>
+    val newRanks = currentRanks
+      // distribute ranks to target pages
+      .join(adjacencyLists).where("pageId").equalTo("sourceId") {
+        (page, adjacent, out: Collector[Page]) =>
+        for (targetId <- adjacent.targetIds) {
+          out.collect(Page(targetId, page.rank / adjacent.targetIds.length))
+        }
+      }
+      // collect ranks and sum them up
+      .groupBy("pageId").aggregate(SUM, "rank")
+      // apply dampening factor
+      .map { p =>
+        Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / numPages))
+      }
+
+    // terminate if no rank update was significant
+    val termination = currentRanks.join(newRanks).where("pageId").equalTo("pageId") {
+      (current, next, out: Collector[Int]) =>
+        // check for significant update
+        if (math.abs(current.rank - next.rank) > EPSILON) out.collect(1)
+    }
+
+    (newRanks, termination)
+}
+
+val result = finalRanks
+
+// emit result
+result.writeAsCsv(outputPath, "\n", " ")
+~~~
+
+The {% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/graph/PageRankBasic.scala "PageRank program" %} implements the above example.
+It requires the following parameters to run: `--pages <path> --links <path> --output <path> --numPages <n> --iterations <n>`.
+</div>
+</div>
+
+Input files are plain text files and must be formatted as follows:
+- Pages are represented by a (long) ID, separated by new-line characters.
+    * For example `"1\n2\n12\n42\n63\n"` gives five pages with IDs 1, 2, 12, 42, and 63.
+- Links are represented as pairs of page IDs which are separated by space characters. Links are separated by new-line characters:
+    * For example `"1 2\n2 12\n1 12\n42 63\n"` gives four (directed) links (1)->(2), (2)->(12), (1)->(12), and (42)->(63).
+
+For this simple implementation it is required that each page has at least one incoming and one outgoing link (a page can point to itself).
+
+## Connected Components
+
+The Connected Components algorithm identifies parts of a larger graph which are connected by assigning all vertices in the same connected part the same component ID. Similar to PageRank, Connected Components is an iterative algorithm. In each step, each vertex propagates its current component ID to all its neighbors. A vertex accepts a neighbor's component ID if it is smaller than its own component ID.
+
+This implementation uses a [delta iteration](iterations.html): Vertices that have not changed their component ID do not participate in the next step. This yields much better performance, because the later iterations typically deal only with a few outlier vertices.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// read vertex and edge data
+DataSet<Long> vertices = getVertexDataSet(env);
+DataSet<Tuple2<Long, Long>> edges = getEdgeDataSet(env).flatMap(new UndirectEdge());
+
+// assign the initial component IDs (equal to the vertex ID)
+DataSet<Tuple2<Long, Long>> verticesWithInitialId = vertices.map(new DuplicateValue<Long>());
+
+// open a delta iteration
+DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
+        verticesWithInitialId.iterateDelta(verticesWithInitialId, maxIterations, 0);
+
+// apply the step logic:
+DataSet<Tuple2<Long, Long>> changes = iteration.getWorkset()
+        // join with the edges
+        .join(edges).where(0).equalTo(0).with(new NeighborWithComponentIDJoin())
+        // select the minimum neighbor component ID
+        .groupBy(0).aggregate(Aggregations.MIN, 1)
+        // update if the component ID of the candidate is smaller
+        .join(iteration.getSolutionSet()).where(0).equalTo(0)
+        .flatMap(new ComponentIdFilter());
+
+// close the delta iteration (delta and new workset are identical)
+DataSet<Tuple2<Long, Long>> result = iteration.closeWith(changes, changes);
+
+// emit result
+result.writeAsCsv(outputPath, "\n", " ");
+
+// User-defined functions
+
+public static final class DuplicateValue<T> implements MapFunction<T, Tuple2<T, T>> {
+
+    @Override
+    public Tuple2<T, T> map(T vertex) {
+        return new Tuple2<T, T>(vertex, vertex);
+    }
+}
+
+public static final class UndirectEdge
+                    implements FlatMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>> {
+    Tuple2<Long, Long> invertedEdge = new Tuple2<Long, Long>();
+
+    @Override
+    public void flatMap(Tuple2<Long, Long> edge, Collector<Tuple2<Long, Long>> out) {
+        invertedEdge.f0 = edge.f1;
+        invertedEdge.f1 = edge.f0;
+        out.collect(edge);
+        out.collect(invertedEdge);
+    }
+}
+
+public static final class NeighborWithComponentIDJoin
+                implements JoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>> {
+
+    @Override
+    public Tuple2<Long, Long> join(Tuple2<Long, Long> vertexWithComponent, Tuple2<Long, Long> edge) {
+        return new Tuple2<Long, Long>(edge.f1, vertexWithComponent.f1);
+    }
+}
+
+public static final class ComponentIdFilter
+                    implements FlatMapFunction<Tuple2<Tuple2<Long, Long>, Tuple2<Long, Long>>,
+                                            Tuple2<Long, Long>> {
+
+    @Override
+    public void flatMap(Tuple2<Tuple2<Long, Long>, Tuple2<Long, Long>> value,
+                        Collector<Tuple2<Long, Long>> out) {
+        if (value.f0.f1 < value.f1.f1) {
+            out.collect(value.f0);
+        }
+    }
+}
+~~~
+
+The {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/graph/ConnectedComponents.java "ConnectedComponents program" %} implements the above example. It requires the following parameters to run: `--vertices <path> --edges <path> --output <path> --iterations <n>`.
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+// set up execution environment
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// read vertex and edge data
+// assign the initial components (equal to the vertex id)
+val vertices = getVerticesDataSet(env).map { id => (id, id) }
+
+// make edges undirected by emitting, for each input edge, the edge itself and an inverted
+// version
+val edges = getEdgesDataSet(env).flatMap { edge => Seq(edge, (edge._2, edge._1)) }
+
+// open a delta iteration
+val verticesWithComponents = vertices.iterateDelta(vertices, maxIterations, Array(0)) {
+  (s, ws) =>
+
+    // apply the step logic: join with the edges
+    val allNeighbors = ws.join(edges).where(0).equalTo(0) { (vertex, edge) =>
+      (edge._2, vertex._2)
+    }
+
+    // select the minimum neighbor
+    val minNeighbors = allNeighbors.groupBy(0).min(1)
+
+    // update if the component of the candidate is smaller
+    val updatedComponents = minNeighbors.join(s).where(0).equalTo(0) {
+      (newVertex, oldVertex, out: Collector[(Long, Long)]) =>
+        if (newVertex._2 < oldVertex._2) out.collect(newVertex)
+    }
+
+    // delta and new workset are identical
+    (updatedComponents, updatedComponents)
+}
+
+verticesWithComponents.writeAsCsv(outputPath, "\n", " ")
+
+~~~
+
+The {% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/graph/ConnectedComponents.scala "ConnectedComponents program" %} implements the above example. It requires the following parameters to run: `--vertices <path> --edges <path> --output <path> --iterations <n>`.
+</div>
+</div>
+
+Input files are plain text files and must be formatted as follows:
+- Vertices are represented by IDs, separated by new-line characters.
+    * For example `"1\n2\n12\n42\n63\n"` gives five vertices (1), (2), (12), (42), and (63).
+- Edges are represented as pairs of vertex IDs which are separated by space characters. Edges are separated by new-line characters:
+    * For example `"1 2\n2 12\n1 12\n42 63\n"` gives four (undirected) links (1)-(2), (2)-(12), (1)-(12), and (42)-(63).
+
+## Relational Query
+
+The Relational Query example assumes two tables, one with `orders` and the other with `lineitems` as specified by the [TPC-H decision support benchmark](http://www.tpc.org/tpch/). TPC-H is a standard benchmark in the database industry. See below for instructions on how to generate the input data.
+
+The example implements the following SQL query.
+
+~~~sql
+SELECT l_orderkey, o_shippriority, sum(l_extendedprice) as revenue
+    FROM orders, lineitem
+WHERE l_orderkey = o_orderkey
+    AND o_orderstatus = "F"
+    AND YEAR(o_orderdate) > 1993
+    AND o_orderpriority LIKE "5%"
+GROUP BY l_orderkey, o_shippriority;
+~~~
+
+The Flink program that implements the above query looks as follows.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// get orders data set: (orderkey, orderstatus, orderdate, orderpriority, shippriority)
+DataSet<Tuple5<Integer, String, String, String, Integer>> orders = getOrdersDataSet(env);
+// get lineitem data set: (orderkey, extendedprice)
+DataSet<Tuple2<Integer, Double>> lineitems = getLineitemDataSet(env);
+
+// orders filtered by year: (orderkey, custkey)
+DataSet<Tuple2<Integer, Integer>> ordersFilteredByYear =
+        // filter orders
+        orders.filter(
+            new FilterFunction<Tuple5<Integer, String, String, String, Integer>>() {
+                @Override
+                public boolean filter(Tuple5<Integer, String, String, String, Integer> t) {
+                    // status filter
+                    if(!t.f1.equals(STATUS_FILTER)) {
+                        return false;
+                    // year filter
+                    } else if(Integer.parseInt(t.f2.substring(0, 4)) <= YEAR_FILTER) {
+                        return false;
+                    // order priority filter
+                    } else if(!t.f3.startsWith(OPRIO_FILTER)) {
+                        return false;
+                    }
+                    return true;
+                }
+            })
+        // project fields out that are no longer required
+        .project(0,4).types(Integer.class, Integer.class);
+
+// join orders with lineitems: (orderkey, shippriority, extendedprice)
+DataSet<Tuple3<Integer, Integer, Double>> lineitemsOfOrders =
+        ordersFilteredByYear.joinWithHuge(lineitems)
+                            .where(0).equalTo(0)
+                            .projectFirst(0,1).projectSecond(1)
+                            .types(Integer.class, Integer.class, Double.class);
+
+// extendedprice sums: (orderkey, shippriority, sum(extendedprice))
+DataSet<Tuple3<Integer, Integer, Double>> priceSums =
+        // group by order and sum extendedprice
+        lineitemsOfOrders.groupBy(0,1).aggregate(Aggregations.SUM, 2);
+
+// emit result
+priceSums.writeAsCsv(outputPath);
+~~~
+
+The {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/TPCHQuery10.java "Relational Query program" %} implements the above query. It requires the following parameters to run: `--orders <path> --lineitem <path> --output <path>`.
+
+</div>
+<div data-lang="scala" markdown="1">
+Coming soon...
+
+The {% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/relational/TPCHQuery3.scala "Relational Query program" %} implements the above query. It requires the following parameters to run: `--orders <path> --lineitem <path> --output <path>`.
+
+</div>
+</div>
+
+The orders and lineitem files can be generated using the [TPC-H benchmark](http://www.tpc.org/tpch/) suite's data generator tool (DBGEN).
+Take the following steps to generate arbitrarily large input files for the provided Flink programs:
+
+1.  Download and unpack DBGEN
+2.  Make a copy of *makefile.suite* called *Makefile* and perform the following changes:
+
+~~~bash
+DATABASE = DB2
+MACHINE  = LINUX
+WORKLOAD = TPCH
+CC       = gcc
+~~~
+
+3.  Build DBGEN using *make*
+4.  Generate lineitem and orders relations using dbgen. A scale factor
+    (-s) of 1 results in a generated data set with about 1 GB size.
+
+~~~bash
+./dbgen -T o -s 1
+~~~

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/fault_tolerance.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/fault_tolerance.md b/docs/dev/batch/fault_tolerance.md
new file mode 100644
index 0000000..ab870d0
--- /dev/null
+++ b/docs/dev/batch/fault_tolerance.md
@@ -0,0 +1,98 @@
+---
+title: "Fault Tolerance"
+nav-parent_id: batch
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Flink's fault tolerance mechanism recovers programs in the presence of failures and
+continues to execute them. Such failures include machine hardware failures, network failures,
+transient program failures, etc.
+
+* This will be replaced by the TOC
+{:toc}
+
+Batch Processing Fault Tolerance (DataSet API)
+----------------------------------------------
+
+Fault tolerance for programs in the *DataSet API* works by retrying failed executions.
+The number of times that Flink retries the execution before the job is declared as failed is configurable
+via the *execution retries* parameter. A value of *0* effectively means that fault tolerance is deactivated.
+
+To activate fault tolerance, set the *execution retries* to a value larger than zero. A common choice is a value
+of three.
+
+This example shows how to configure the execution retries for a Flink DataSet program.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+env.setNumberOfExecutionRetries(3);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment()
+env.setNumberOfExecutionRetries(3)
+{% endhighlight %}
+</div>
+</div>
+
+
+You can also define default values for the number of execution retries and the retry delay in the `flink-conf.yaml`:
+
+~~~
+execution-retries.default: 3
+~~~
+
+
+Retry Delays
+------------
+
+Execution retries can be configured to be delayed. Delaying the retry means that after a failed execution, the re-execution does not start
+immediately, but only after a certain delay.
+
+Delaying the retries can be helpful when the program interacts with external systems where, for example, connections or pending transactions should time out before re-execution is attempted.
+
+You can set the retry delay for each program as follows (the sample shows the DataStream API - the DataSet API works similarly):
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.getConfig().setExecutionRetryDelay(5000); // 5000 milliseconds delay
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+env.getConfig.setExecutionRetryDelay(5000) // 5000 milliseconds delay
+{% endhighlight %}
+</div>
+</div>
+
+You can also define the default value for the retry delay in the `flink-conf.yaml`:
+
+~~~
+execution-retries.delay: 10 s
+~~~
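+
+Putting both defaults together, a `flink-conf.yaml` fragment could look like this (reusing the example values from above):
+
+~~~
+execution-retries.default: 3
+execution-retries.delay: 10 s
+~~~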
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/hadoop_compatibility.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/hadoop_compatibility.md b/docs/dev/batch/hadoop_compatibility.md
new file mode 100644
index 0000000..9548c29
--- /dev/null
+++ b/docs/dev/batch/hadoop_compatibility.md
@@ -0,0 +1,248 @@
+---
+title: "Hadoop Compatibility"
+is_beta: true
+nav-parent_id: batch
+nav-pos: 7
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Flink is compatible with Apache Hadoop MapReduce interfaces and therefore allows
+reusing code that was implemented for Hadoop MapReduce.
+
+You can:
+
+- use Hadoop's `Writable` [data types](index.html#data-types) in Flink programs.
+- use any Hadoop `InputFormat` as a [DataSource](index.html#data-sources).
+- use any Hadoop `OutputFormat` as a [DataSink](index.html#data-sinks).
+- use a Hadoop `Mapper` as [FlatMapFunction](dataset_transformations.html#flatmap).
+- use a Hadoop `Reducer` as [GroupReduceFunction](dataset_transformations.html#groupreduce-on-grouped-dataset).
+
+This document shows how to use existing Hadoop MapReduce code with Flink. Please refer to the
+[Connecting to other systems]({{ site.baseurl }}/dev/batch/connectors.html) guide for reading from Hadoop-supported file systems.
+
+* This will be replaced by the TOC
+{:toc}
+
+### Project Configuration
+
+Support for Hadoop input/output formats is part of the `flink-java` and
+`flink-scala` Maven modules that are always required when writing Flink jobs.
+The code is located in `org.apache.flink.api.java.hadoop` and
+`org.apache.flink.api.scala.hadoop` in additional sub-packages for the
+`mapred` and `mapreduce` APIs.
+
+Support for Hadoop Mappers and Reducers is contained in the `flink-hadoop-compatibility`
+Maven module.
+This code resides in the `org.apache.flink.hadoopcompatibility`
+package.
+
+Add the following dependency to your `pom.xml` if you want to reuse Mappers
+and Reducers.
+
+~~~xml
+<dependency>
+	<groupId>org.apache.flink</groupId>
+	<artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
+	<version>{{site.version}}</version>
+</dependency>
+~~~
+
+### Using Hadoop Data Types
+
+Flink supports all Hadoop `Writable` and `WritableComparable` data types
+out-of-the-box. You do not need to include the Hadoop Compatibility dependency
+if you only want to use your Hadoop data types. See the
+[Programming Guide](index.html#data-types) for more details.
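+
+As a minimal sketch (the words and counts are made-up sample values), Hadoop `Writable` types can serve directly as element types of a `DataSet`:
+
+~~~java
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// Hadoop Writable types used as regular Flink data types; no extra dependency needed.
+DataSet<Tuple2<Text, IntWritable>> counts = env.fromElements(
+    new Tuple2<>(new Text("flink"), new IntWritable(3)),
+    new Tuple2<>(new Text("hadoop"), new IntWritable(1)));
+~~~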
+
+### Using Hadoop InputFormats
+
+Hadoop input formats can be used to create a data source by using
+one of the methods `readHadoopFile` or `createHadoopInput` of the
+`ExecutionEnvironment`. The former is used for input formats derived
+from `FileInputFormat`, while the latter has to be used for general-purpose
+input formats.
+
+The resulting `DataSet` contains 2-tuples where the first field
+is the key and the second field is the value retrieved from the Hadoop
+InputFormat.
+
+The following example shows how to use Hadoop's `TextInputFormat`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+DataSet<Tuple2<LongWritable, Text>> input =
+    env.readHadoopFile(new TextInputFormat(), LongWritable.class, Text.class, textPath);
+
+// Do something with the data.
+[...]
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val input: DataSet[(LongWritable, Text)] =
+  env.readHadoopFile(new TextInputFormat, classOf[LongWritable], classOf[Text], textPath)
+
+// Do something with the data.
+[...]
+~~~
+
+</div>
+
+</div>
+
+### Using Hadoop OutputFormats
+
+Flink provides a compatibility wrapper for Hadoop `OutputFormats`. Any class
+that implements `org.apache.hadoop.mapred.OutputFormat` or extends
+`org.apache.hadoop.mapreduce.OutputFormat` is supported.
+The OutputFormat wrapper expects its input data to be a DataSet containing
+2-tuples of key and value, which are handed to the Hadoop OutputFormat for writing.
+
+The following example shows how to use Hadoop's `TextOutputFormat`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+~~~java
+// Obtain the result we want to emit.
+DataSet<Tuple2<Text, IntWritable>> hadoopResult = [...]
+
+// Set up the Hadoop TextOutputFormat.
+Job job = Job.getInstance();
+HadoopOutputFormat<Text, IntWritable> hadoopOF =
+  // create the Flink wrapper.
+  new HadoopOutputFormat<Text, IntWritable>(
+    // set the Hadoop OutputFormat and specify the Hadoop Job.
+    new TextOutputFormat<Text, IntWritable>(), job
+  );
+hadoopOF.getConfiguration().set("mapreduce.output.textoutputformat.separator", " ");
+TextOutputFormat.setOutputPath(job, new Path(outputPath));
+
+// Emit data using the Hadoop TextOutputFormat.
+hadoopResult.output(hadoopOF);
+~~~
+
+</div>
+<div data-lang="scala" markdown="1">
+
+~~~scala
+// Obtain your result to emit.
+val hadoopResult: DataSet[(Text, IntWritable)] = [...]
+
+val hadoopOF = new HadoopOutputFormat[Text,IntWritable](
+  new TextOutputFormat[Text, IntWritable],
+  new JobConf)
+
+hadoopOF.getJobConf.set("mapred.textoutputformat.separator", " ")
+FileOutputFormat.setOutputPath(hadoopOF.getJobConf, new Path(resultPath))
+
+hadoopResult.output(hadoopOF)
+~~~
+
+</div>
+
+</div>
+
+### Using Hadoop Mappers and Reducers
+
+Hadoop Mappers are semantically equivalent to Flink's [FlatMapFunctions](dataset_transformations.html#flatmap) and Hadoop Reducers are equivalent to Flink's [GroupReduceFunctions](dataset_transformations.html#groupreduce-on-grouped-dataset). Flink provides wrappers for implementations of Hadoop MapReduce's `Mapper` and `Reducer` interfaces, i.e., you can reuse your Hadoop Mappers and Reducers in regular Flink programs. At the moment, only the `Mapper` and `Reducer` interfaces of Hadoop's mapred API (`org.apache.hadoop.mapred`) are supported.
+
+The wrappers take a `DataSet<Tuple2<KEYIN,VALUEIN>>` as input and produce a `DataSet<Tuple2<KEYOUT,VALUEOUT>>` as output, where `KEYIN` and `KEYOUT` are the keys and `VALUEIN` and `VALUEOUT` are the values of the Hadoop key-value pairs that are processed by the Hadoop functions. For Reducers, Flink offers a GroupReduceFunction wrapper with a Combiner (`HadoopReduceCombineFunction`) and one without (`HadoopReduceFunction`). The wrappers accept an optional `JobConf` object to configure the Hadoop Mapper or Reducer.
+
+Flink's function wrappers are
+
+- `org.apache.flink.hadoopcompatibility.mapred.HadoopMapFunction`,
+- `org.apache.flink.hadoopcompatibility.mapred.HadoopReduceFunction`, and
+- `org.apache.flink.hadoopcompatibility.mapred.HadoopReduceCombineFunction`,
+
+and they can be used as regular Flink [FlatMapFunctions](dataset_transformations.html#flatmap) or [GroupReduceFunctions](dataset_transformations.html#groupreduce-on-grouped-dataset).
+
+The following example shows how to use Hadoop `Mapper` and `Reducer` functions.
+
+~~~java
+// Obtain data to process somehow.
+DataSet<Tuple2<LongWritable, Text>> text = [...]
+
+DataSet<Tuple2<Text, LongWritable>> result = text
+  // use Hadoop Mapper (Tokenizer) as MapFunction
+  .flatMap(new HadoopMapFunction<LongWritable, Text, Text, LongWritable>(
+    new Tokenizer()
+  ))
+  .groupBy(0)
+  // use Hadoop Reducer (Counter) as Reduce- and CombineFunction
+  .reduceGroup(new HadoopReduceCombineFunction<Text, LongWritable, Text, LongWritable>(
+    new Counter(), new Counter()
+  ));
+~~~
+
+**Please note:** The Reducer wrapper works on groups as defined by Flink's [groupBy()](dataset_transformations.html#transformations-on-grouped-dataset) operation. It does not consider any custom partitioners, sort or grouping comparators you might have set in the `JobConf`.
+
+### Complete Hadoop WordCount Example
+
+The following example shows a complete WordCount implementation using Hadoop data types, Input- and OutputFormats, and Mapper and Reducer implementations.
+
+~~~java
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// Set up the Hadoop TextInputFormat.
+Job job = Job.getInstance();
+HadoopInputFormat<LongWritable, Text> hadoopIF =
+  new HadoopInputFormat<LongWritable, Text>(
+    new TextInputFormat(), LongWritable.class, Text.class, job
+  );
+TextInputFormat.addInputPath(job, new Path(inputPath));
+
+// Read data using the Hadoop TextInputFormat.
+DataSet<Tuple2<LongWritable, Text>> text = env.createInput(hadoopIF);
+
+DataSet<Tuple2<Text, LongWritable>> result = text
+  // use Hadoop Mapper (Tokenizer) as MapFunction
+  .flatMap(new HadoopMapFunction<LongWritable, Text, Text, LongWritable>(
+    new Tokenizer()
+  ))
+  .groupBy(0)
+  // use Hadoop Reducer (Counter) as Reduce- and CombineFunction
+  .reduceGroup(new HadoopReduceCombineFunction<Text, LongWritable, Text, LongWritable>(
+    new Counter(), new Counter()
+  ));
+
+// Set up the Hadoop TextOutputFormat.
+// Note: the format's value type must match the result DataSet (LongWritable, not IntWritable).
+HadoopOutputFormat<Text, LongWritable> hadoopOF =
+  new HadoopOutputFormat<Text, LongWritable>(
+    new TextOutputFormat<Text, LongWritable>(), job
+  );
+hadoopOF.getConfiguration().set("mapreduce.output.textoutputformat.separator", " ");
+TextOutputFormat.setOutputPath(job, new Path(outputPath));
+
+// Emit data using the Hadoop TextOutputFormat.
+result.output(hadoopOF);
+
+// Execute Program
+env.execute("Hadoop WordCount");
+~~~


[03/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/parallel_streams_watermarks.svg
----------------------------------------------------------------------
diff --git a/docs/fig/parallel_streams_watermarks.svg b/docs/fig/parallel_streams_watermarks.svg
new file mode 100644
index 0000000..f6a4c4b
--- /dev/null
+++ b/docs/fig/parallel_streams_watermarks.svg
@@ -0,0 +1,516 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   version="1.1"
+   width="468.91"
+   height="285.20001"
+   id="svg2">
+  <defs
+     id="defs4" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     transform="translate(-355.61783,-283.04674)"
+     id="layer1">
+    <g
+       transform="translate(229.75524,151.68574)"
+       id="g2989">
+      <path
+         d="m 127.90999,194.24654 c 0,-13.41733 10.88576,-24.29371 24.30309,-24.29371 13.41733,0 24.30308,10.87638 24.30308,24.29371 0,13.42671 -10.88575,24.30309 -24.30308,24.30309 -13.41733,0 -24.30309,-10.87638 -24.30309,-24.30309"
+         id="path2991"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="134.8311"
+         y="192.20834"
+         id="text2993"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
+      <text
+         x="144.43231"
+         y="204.20988"
+         id="text2995"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(1)</text>
+      <path
+         d="m 127.29116,327.47283 c 0,-13.37044 10.83888,-24.22807 24.22808,-24.22807 13.37045,0 24.20932,10.85763 24.20932,24.22807 0,13.3892 -10.83887,24.22808 -24.20932,24.22808 -13.3892,0 -24.22808,-10.83888 -24.22808,-24.22808"
+         id="path2997"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="134.18349"
+         y="325.44901"
+         id="text2999"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
+      <text
+         x="143.7847"
+         y="337.45053"
+         id="text3001"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(2)</text>
+      <path
+         d="m 266.05878,194.25592 c 0,-13.42671 10.83888,-24.30309 24.22808,-24.30309 13.37045,0 24.20933,10.87638 24.20933,24.30309 0,13.4267 -10.83888,24.30308 -24.20933,24.30308 -13.3892,0 -24.22808,-10.87638 -24.22808,-24.30308"
+         id="path3003"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="279.25809"
+         y="192.20834"
+         id="text3005"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map</text>
+      <text
+         x="282.55853"
+         y="204.20988"
+         id="text3007"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(1)</text>
+      <path
+         d="m 266.05878,327.47283 c 0,-13.37044 10.83888,-24.22807 24.22808,-24.22807 13.37045,0 24.20933,10.85763 24.20933,24.22807 0,13.3892 -10.83888,24.22808 -24.20933,24.22808 -13.3892,0 -24.22808,-10.83888 -24.22808,-24.22808"
+         id="path3009"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="279.25809"
+         y="325.44901"
+         id="text3011"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map</text>
+      <text
+         x="282.55853"
+         y="337.45053"
+         id="text3013"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(2)</text>
+      <path
+         d="m 473.2726,194.25592 c 0,-13.42671 10.83887,-24.30309 24.22807,-24.30309 13.37045,0 24.20933,10.87638 24.20933,24.30309 0,13.4267 -10.83888,24.30308 -24.20933,24.30308 -13.3892,0 -24.22807,-10.87638 -24.22807,-24.30308"
+         id="path3015"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="478.6647"
+         y="192.20834"
+         id="text3017"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window</text>
+      <text
+         x="489.76611"
+         y="204.20988"
+         id="text3019"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(1)</text>
+      <path
+         d="m 473.2726,327.47283 c 0,-13.37044 10.83887,-24.22807 24.22807,-24.22807 13.37045,0 24.20933,10.85763 24.20933,24.22807 0,13.3892 -10.83888,24.22808 -24.20933,24.22808 -13.3892,0 -24.22807,-10.83888 -24.22807,-24.22808"
+         id="path3021"
+         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="478.6647"
+         y="325.44901"
+         id="text3023"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window</text>
+      <text
+         x="489.76611"
+         y="337.45053"
+         id="text3025"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(2)</text>
+      <path
+         d="m 159.32023,167.68379 c 0,-1.67834 1.36892,-3.04726 3.04726,-3.04726 l 12.18905,0 c 1.68771,0 3.04726,1.36892 3.04726,3.04726 l 0,12.18905 c 0,1.68771 -1.35955,3.04726 -3.04726,3.04726 l -12.18905,0 c -1.67834,0 -3.04726,-1.35955 -3.04726,-3.04726 z"
+         id="path3027"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="161.9245"
+         y="177.71732"
+         id="text3029"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">33</text>
+      <path
+         d="m 159.32023,302.70094 c 0,-1.66896 1.36892,-3.03789 3.05664,-3.03789 l 12.18905,0 c 1.68771,0 3.03788,1.36893 3.03788,3.03789 l 0,12.18905 c 0,1.68771 -1.35017,3.05663 -3.03788,3.05663 l -12.18905,0 c -1.68772,0 -3.05664,-1.36892 -3.05664,-3.05663 z"
+         id="path3031"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="161.9245"
+         y="312.73444"
+         id="text3033"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">17</text>
+      <path
+         d="m 184.95474,189.95225 64.2269,0 0,-4.21929 8.43857,8.43857 -8.43857,8.43857 0,-4.21928 -64.2269,0 z"
+         id="path3035"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 321.22829,189.96162 137.19242,0 0,-4.21928 8.43857,8.43857 -8.43857,8.43857 0,-4.21929 -137.19242,0 z"
+         id="path3037"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 184.95474,322.16591 64.2269,0 0,-4.21929 8.43857,8.43857 -8.43857,8.43858 0,-4.21929 -64.2269,0 z"
+         id="path3039"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 321.22829,322.16591 137.19242,0 0,-4.21929 8.43857,8.43857 -8.43857,8.43858 0,-4.21929 -137.19242,0 z"
+         id="path3041"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 324.97877,206.16368 136.91113,94.32448 2.40031,-3.48795 2.15652,11.73899 -11.73899,2.15653 2.4003,-3.46919 -136.91113,-94.32448 z"
+         id="path3043"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 325.99139,314.43993 136.89239,-94.32448 2.4003,3.46919 2.15653,-11.73899 -11.73899,-2.15652 2.4003,3.46919 -136.91113,94.32447 z"
+         id="path3045"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 302.62593,167.68379 c 0,-1.66896 1.36892,-3.03788 3.05664,-3.03788 l 12.18904,0 c 1.66897,0 3.03789,1.36892 3.03789,3.03788 l 0,12.18905 c 0,1.68771 -1.36892,3.05664 -3.03789,3.05664 l -12.18904,0 c -1.68772,0 -3.05664,-1.36893 -3.05664,-3.05664 z"
+         id="path3047"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="305.31848"
+         y="177.71732"
+         id="text3049"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">29</text>
+      <path
+         d="m 448.57571,176.59117 c 0,-1.66896 1.36892,-3.03788 3.05664,-3.03788 l 12.18905,0 c 1.68771,0 3.03788,1.36892 3.03788,3.03788 l 0,12.18905 c 0,1.68772 -1.35017,3.05664 -3.03788,3.05664 l -12.18905,0 c -1.68772,0 -3.05664,-1.36892 -3.05664,-3.05664 z"
+         id="path3051"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="451.24167"
+         y="186.67395"
+         id="text3053"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">29</text>
+      <path
+         d="m 302.77595,302.70094 c 0,-1.66896 1.36892,-3.03789 3.05664,-3.03789 l 12.18904,0 c 1.68772,0 3.03789,1.36893 3.03789,3.03789 l 0,12.18905 c 0,1.68771 -1.35017,3.05663 -3.03789,3.05663 l -12.18904,0 c -1.68772,0 -3.05664,-1.36892 -3.05664,-3.05663 z"
+         id="path3055"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="305.4187"
+         y="312.73444"
+         id="text3057"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">17</text>
+      <path
+         d="m 454.98903,216.43998 c 0,-1.66896 1.36892,-3.03788 3.05663,-3.03788 l 12.18905,0 c 1.66896,0 3.03789,1.36892 3.03789,3.03788 l 0,12.18905 c 0,1.68772 -1.36893,3.05664 -3.03789,3.05664 l -12.18905,0 c -1.68771,0 -3.05663,-1.36892 -3.05663,-3.05664 z"
+         id="path3059"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="457.60751"
+         y="226.43639"
+         id="text3061"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
+      <path
+         d="m 454.98903,334.42997 c 0,-1.68772 1.36892,-3.05664 3.05663,-3.05664 l 12.18905,0 c 1.66896,0 3.03789,1.36892 3.03789,3.05664 l 0,12.18904 c 0,1.68772 -1.36893,3.03789 -3.03789,3.03789 l -12.18905,0 c -1.68771,0 -3.05663,-1.35017 -3.05663,-3.03789 z"
+         id="path3063"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="457.60751"
+         y="344.52374"
+         id="text3065"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
+      <path
+         d="m 463.10881,287.54901 c 0,-1.68771 1.36892,-3.05663 3.05663,-3.05663 l 12.18905,0 c 1.68772,0 3.03789,1.36892 3.03789,3.05663 l 0,12.18905 c 0,1.68772 -1.35017,3.03789 -3.03789,3.03789 l -12.18905,0 c -1.68771,0 -3.05663,-1.35017 -3.05663,-3.03789 z"
+         id="path3067"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="465.84369"
+         y="297.68774"
+         id="text3069"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">29</text>
+      <path
+         d="m 509.83974,302.70094 c 0,-1.66896 1.36892,-3.03789 3.05664,-3.03789 l 12.18905,0 c 1.66896,0 3.03788,1.36893 3.03788,3.03789 l 0,12.18905 c 0,1.68771 -1.36892,3.05663 -3.03788,3.05663 l -12.18905,0 c -1.68772,0 -3.05664,-1.36892 -3.05664,-3.05663 z"
+         id="path3071"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="512.54437"
+         y="312.73444"
+         id="text3073"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
+      <path
+         d="m 509.83974,167.68379 c 0,-1.66896 1.36892,-3.03788 3.05664,-3.03788 l 12.18905,0 c 1.66896,0 3.03788,1.36892 3.03788,3.03788 l 0,12.18905 c 0,1.68771 -1.36892,3.05664 -3.03788,3.05664 l -12.18905,0 c -1.68772,0 -3.05664,-1.36893 -3.05664,-3.05664 z"
+         id="path3075"
+         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <text
+         x="512.55664"
+         y="177.71732"
+         id="text3077"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">14</text>
+      <path
+         d="m 234.32976,180.73545 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87523 -1.87524,0 0,-1.87523 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87523 -1.87524,0 0,-1.87523 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z"
+         id="path3079"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937619px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="219.60442"
+         y="218.16707"
+         id="text3081"
+         xml:space="preserve"
+         style="font-size:8.70110512px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">W(33)</text>
+      <path
+         d="m 355.11384,273.31596 1.10639,1.51894 -1.51894,1.10639 -1.10639,-1.51894 1.51894,-1.10639 z m 2.21278,3.03788 1.10639,1.51894 -1.51894,1.1064 -1.10639,-1.51895 1.51894,-1.10639 z m 2.21278,3.01914 1.1064,1.51894 -1.51895,1.10639 -1.10639,-1.51894 1.51894,-1.10639 z m 2.21279,3.03788 1.10639,1.51894 -1.51895,1.10639 -1.10639,-1.51894 1.51895,-1.10639 z m 2.19402,3.03789 1.10639,1.50019 -1.50019,1.10639 -1.10639,-1.51895 1.50019,-1.08763 z m 2.21279,3.01913 1.10639,1.51894 -1.50019,1.10639 -1.1064,-1.51894 1.5002,-1.10639 z m 2.21278,3.03789 0.99387,1.35017 -1.50019,1.10639 -1.01263,-1.35017 1.51895,-1.10639 z"
+         id="path3083"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 374.8226,312.15214 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87523 -1.87524,0 0,-1.87523 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75047 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z m 0,3.75048 0,1.87524 -1.87524,0 0,-1.87524 1.87524,0 z"
+         id="path3085"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="337.06772"
+         y="270.31641"
+         id="text3087"
+         xml:space="preserve"
+         style="font-size:8.70110512px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">W(17)</text>
+      <text
+         x="359.68753"
+         y="351.43448"
+         id="text3089"
+         xml:space="preserve"
+         style="font-size:8.70110512px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">W(17)</text>
+      <path
+         d="m 414.9902,187.14876 c 0,-1.55644 1.25641,-2.81285 2.81285,-2.81285 l 16.08955,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
+         id="path3091"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 414.9902,187.14876 c 0,-1.55644 1.25641,-2.81285 2.81285,-2.81285 l 16.08955,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
+         id="path3093"
+         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="416.82651"
+         y="195.85332"
+         id="text3095"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">A|30</text>
+      <path
+         d="m 526.8669,189.96162 19.07117,0 0,-4.21928 8.43858,8.43857 -8.43858,8.43857 0,-4.21929 -19.07117,0 z"
+         id="path3097"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 526.8669,322.16591 19.07117,0 0,-4.21929 8.43858,8.43857 -8.43858,8.43858 0,-4.21929 -19.07117,0 z"
+         id="path3099"
+         style="fill:#d9d9d9;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 359.82069,187.14876 c 0,-1.55644 1.25641,-2.81285 2.81286,-2.81285 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
+         id="path3101"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 359.82069,187.14876 c 0,-1.55644 1.25641,-2.81285 2.81286,-2.81285 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
+         id="path3103"
+         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="361.68771"
+         y="195.85332"
+         id="text3105"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">B|31</text>
+      <path
+         d="m 334.03617,219.02781 c 0,-1.55645 1.25641,-2.81285 2.81285,-2.81285 l 16.08955,0 c 1.55644,0 2.81285,1.2564 2.81285,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
+         id="path3107"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 334.03617,219.02781 c 0,-1.55645 1.25641,-2.81285 2.81285,-2.81285 l 16.08955,0 c 1.55644,0 2.81285,1.2564 2.81285,2.81285 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
+         id="path3109"
+         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="335.68719"
+         y="227.65872"
+         id="text3111"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">C|30</text>
+      <path
+         d="m 402.48236,241.3619 c 0,-1.5377 1.25641,-2.81286 2.81285,-2.81286 l 16.08955,0 c 1.55645,0 2.81286,1.27516 2.81286,2.81286 l 0,11.25143 c 0,1.55644 -1.25641,2.81285 -2.81286,2.81285 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81285 z"
+         id="path3113"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 402.48236,241.3619 c 0,-1.5377 1.25641,-2.81286 2.81285,-2.81286 l 16.08955,0 c 1.55645,0 2.81286,1.27516 2.81286,2.81286 l 0,11.25143 c 0,1.55644 -1.25641,2.81285 -2.81286,2.81285 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81285 z"
+         id="path3115"
+         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="404.02338"
+         y="250.06659"
+         id="text3117"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">D|15</text>
+      <path
+         d="m 432.01736,286.06758 c 0,-1.55645 1.25641,-2.81286 2.81286,-2.81286 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81286 l 0,11.25142 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
+         id="path3119"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 432.01736,286.06758 c 0,-1.55645 1.25641,-2.81286 2.81286,-2.81286 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81286 l 0,11.25142 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
+         id="path3121"
+         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="434.04199"
+         y="294.68573"
+         id="text3123"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">E|30</text>
+      <path
+         d="m 391.69974,321.37831 c 0,-1.55645 1.25641,-2.81286 2.81286,-2.81286 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81286 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
+         id="path3125"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 391.69974,321.37831 c 0,-1.55645 1.25641,-2.81286 2.81286,-2.81286 l 16.08954,0 c 1.55645,0 2.81286,1.25641 2.81286,2.81286 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81286,2.81286 l -16.08954,0 c -1.55645,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
+         id="path3127"
+         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="393.75818"
+         y="330.10825"
+         id="text3129"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">F|30</text>
+      <path
+         d="m 325.74761,321.37831 c 0,-1.55645 1.27517,-2.81286 2.81286,-2.81286 l 16.1083,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.1083,0 c -1.53769,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
+         id="path3131"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 325.74761,321.37831 c 0,-1.55645 1.27517,-2.81286 2.81286,-2.81286 l 16.1083,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55645 -1.25641,2.81286 -2.81285,2.81286 l -16.1083,0 c -1.53769,0 -2.81286,-1.25641 -2.81286,-2.81286 z"
+         id="path3133"
+         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="327.10162"
+         y="330.10825"
+         id="text3135"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">G|18</text>
+      <path
+         d="m 204.02591,321.37831 c 0,-1.55645 1.25641,-2.81286 2.81285,-2.81286 l 16.08955,0 c 1.55645,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55645 -1.2564,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
+         id="path3137"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 204.02591,321.37831 c 0,-1.55645 1.25641,-2.81286 2.81285,-2.81286 l 16.08955,0 c 1.55645,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55645 -1.2564,2.81286 -2.81285,2.81286 l -16.08955,0 c -1.55644,0 -2.81285,-1.25641 -2.81285,-2.81286 z"
+         id="path3139"
+         style="fill:none;stroke:#000000;stroke-width:0.61882859px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="205.55592"
+         y="330.10825"
+         id="text3141"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">H|20</text>
+      <path
+         d="m 189.79285,187.13939 c 0,-1.55645 1.26579,-2.81286 2.81286,-2.81286 l 16.09892,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55644 -1.25641,2.81285 -2.81285,2.81285 l -16.09892,0 c -1.54707,0 -2.81286,-1.25641 -2.81286,-2.81285 z"
+         id="path3143"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 189.79285,187.13939 c 0,-1.55645 1.26579,-2.81286 2.81286,-2.81286 l 16.09892,0 c 1.55644,0 2.81285,1.25641 2.81285,2.81286 l 0,11.25143 c 0,1.55644 -1.25641,2.81285 -2.81285,2.81285 l -16.09892,0 c -1.54707,0 -2.81286,-1.25641 -2.81286,-2.81285 z"
+         id="path3145"
+         style="fill:none;stroke:#000000;stroke-width:0.62820476px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="191.56601"
+         y="195.85332"
+         id="text3147"
+         xml:space="preserve"
+         style="font-size:7.50095272px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">B|35</text>
+      <text
+         x="195.19138"
+         y="151.27718"
+         id="text3149"
+         xml:space="preserve"
+         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Watermark</text>
+      <path
+         d="m 227.26948,158.18571 5.54133,16.22081 -1.1814,0.40318 -5.54133,-16.22081 1.1814,-0.40318 z m 6.91026,14.42996 -0.7501,5.54133 -3.98488,-3.92863 4.73498,-1.6127 z"
+         id="path3151"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937619px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="517.86865"
+         y="400.08151"
+         id="text3153"
+         xml:space="preserve"
+         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event Time</text>
+      <text
+         x="506.91727"
+         y="413.58322"
+         id="text3155"
+         xml:space="preserve"
+         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">at the operator</text>
+      <text
+         x="375.68878"
+         y="140.82939"
+         id="text3157"
+         xml:space="preserve"
+         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event</text>
+      <text
+         x="353.63599"
+         y="153.13097"
+         id="text3159"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[</text>
+      <text
+         x="358.13657"
+         y="153.13097"
+         id="text3161"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">id|timestamp</text>
+      <text
+         x="425.19507"
+         y="153.13097"
+         id="text3163"
+         xml:space="preserve"
+         style="font-size:10.05127621px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">]</text>
+      <path
+         d="m 375.29141,161.458 -1.65021,16.40834 1.23765,0.13126 1.65021,-16.42708 -1.23765,-0.11252 z m -3.39419,14.98315 1.98776,5.21317 2.98163,-4.7256 -4.96939,-0.48757 z"
+         id="path3165"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 549.12598,384.68635 -22.37159,-63.15802 1.1814,-0.41255 22.37159,63.13926 -1.1814,0.43131 z m -23.72176,-61.35779 0.69383,-5.55071 4.03177,3.88174 -4.7256,1.66897 z"
+         id="path3167"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 505.15164,404.65763 -180.37915,-49.78757 -8.96364,-33.77304 1.20015,-0.31879 8.88863,33.41675 -0.45005,-0.45006 180.02286,49.69381 -0.3188,1.2189 z m -190.84298,-81.87289 1.12514,-5.4757 3.71298,4.20054 -4.83812,1.27516 z"
+         id="path3169"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="502.02127"
+         y="254.94814"
+         id="text3171"
+         xml:space="preserve"
+         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event Time</text>
+      <text
+         x="487.3194"
+         y="268.44983"
+         id="text3173"
+         xml:space="preserve"
+         style="font-size:11.2514286px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">at input streams</text>
+      <path
+         d="m 513.4777,274.59112 -39.69879,53.01298 0.99388,0.75009 39.69879,-53.01298 -0.99388,-0.75009 z m -40.44888,50.87521 -1.01263,5.5132 5.00688,-2.51282 -3.99425,-3.00038 z"
+         id="path3175"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 510.42106,270.05304 -26.85341,15.3207 0.61883,1.08764 26.85341,-15.3207 -0.61883,-1.08764 z m -26.70339,13.07041 -3.09414,4.65059 5.56946,-0.30004 -2.47532,-4.35055 z"
+         id="path3177"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875238px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+    </g>
+  </g>
+</svg>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/plan_visualizer.png
----------------------------------------------------------------------
diff --git a/docs/fig/plan_visualizer.png b/docs/fig/plan_visualizer.png
new file mode 100644
index 0000000..85b8c55
Binary files /dev/null and b/docs/fig/plan_visualizer.png differ


[65/89] [abbrv] flink git commit: [FLINK-4457] Make ExecutionGraph independent of actors.

Posted by se...@apache.org.
[FLINK-4457] Make ExecutionGraph independent of actors.

This introduces the JobStatusListener and ExecutionStatusListener interfaces,
which replace the ActorRefs and ActorGateway previously used for listeners.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/635c8693
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/635c8693
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/635c8693

Branch: refs/heads/flip-6
Commit: 635c869326cc77e4199e4d8ee597aed69ed16cd2
Parents: 4e9d177
Author: Stephan Ewen <se...@apache.org>
Authored: Wed Aug 24 19:12:07 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Wed Aug 24 21:19:04 2016 +0200

----------------------------------------------------------------------
 .../checkpoint/CheckpointCoordinator.java       | 34 +++-----
 .../CheckpointCoordinatorDeActivator.java       | 45 ++++-------
 .../runtime/executiongraph/ExecutionGraph.java  | 81 ++++++++++----------
 .../executiongraph/ExecutionStatusListener.java | 54 +++++++++++++
 .../executiongraph/JobStatusListener.java       | 39 ++++++++++
 .../executiongraph/StatusListenerMessenger.java | 70 +++++++++++++++++
 .../flink/runtime/jobmanager/JobManager.scala   | 12 ++-
 ...ExecutionGraphCheckpointCoordinatorTest.java |  3 -
 .../LeaderChangeJobRecoveryTest.java            | 73 ++++--------------
 .../flink/core/testutils/OneShotLatch.java      | 55 +++++++++++++
 .../flink/core/testutils/OneShotLatchTest.java  | 55 +++++++++++++
 11 files changed, 360 insertions(+), 161 deletions(-)
----------------------------------------------------------------------
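
For orientation, a sketch of the new callback interface, as it can be inferred
from the signatures in the diff below (only JobStatusListener is shown; the
actual file may differ in details):

    package org.apache.flink.runtime.executiongraph;

    import org.apache.flink.api.common.JobID;
    import org.apache.flink.runtime.jobgraph.JobStatus;

    /** Callback interface replacing the former ActorGateway-based job status listeners. */
    public interface JobStatusListener {

        /** Called whenever the status of the job changes, e.g. from RUNNING to FINISHED. */
        void jobStatusChanges(JobID jobId, JobStatus newJobStatus, long timestamp, Throwable error);
    }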


http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
index b710324..3619f48 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
@@ -18,9 +18,6 @@
 
 package org.apache.flink.runtime.checkpoint;
 
-import akka.actor.ActorSystem;
-import akka.actor.PoisonPill;
-import akka.actor.Props;
 import akka.dispatch.Futures;
 
 import org.apache.flink.api.common.JobID;
@@ -31,8 +28,7 @@ import org.apache.flink.runtime.executiongraph.Execution;
 import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
 import org.apache.flink.runtime.executiongraph.ExecutionJobVertex;
 import org.apache.flink.runtime.executiongraph.ExecutionVertex;
-import org.apache.flink.runtime.instance.ActorGateway;
-import org.apache.flink.runtime.instance.AkkaActorGateway;
+import org.apache.flink.runtime.executiongraph.JobStatusListener;
 import org.apache.flink.runtime.jobgraph.JobVertexID;
 import org.apache.flink.runtime.messages.checkpoint.AcknowledgeCheckpoint;
 import org.apache.flink.runtime.messages.checkpoint.DeclineCheckpoint;
@@ -57,7 +53,6 @@ import java.util.Map;
 import java.util.Set;
 import java.util.Timer;
 import java.util.TimerTask;
-import java.util.UUID;
 
 import static org.apache.flink.util.Preconditions.checkArgument;
 import static org.apache.flink.util.Preconditions.checkNotNull;
@@ -137,7 +132,7 @@ public class CheckpointCoordinator {
 	private final Timer timer;
 
 	/** Actor that receives status updates from the execution graph this coordinator works for */
-	private ActorGateway jobStatusListener;
+	private JobStatusListener jobStatusListener;
 
 	/** The number of consecutive failed trigger attempts */
 	private int numUnsuccessfulCheckpointsTriggers;
@@ -266,12 +261,6 @@ public class CheckpointCoordinator {
 				// shut down the thread that handles the timeouts and pending triggers
 				timer.cancel();
 
-				// make sure that the actor does not linger
-				if (jobStatusListener != null) {
-					jobStatusListener.tell(PoisonPill.getInstance());
-					jobStatusListener = null;
-				}
-
 				// clear and discard all pending checkpoints
 				for (PendingCheckpoint pending : pendingCheckpoints.values()) {
 					pending.abortError(new Exception("Checkpoint Coordinator is shutting down"));
@@ -903,7 +892,7 @@ public class CheckpointCoordinator {
 	//  Periodic scheduling of checkpoints
 	// --------------------------------------------------------------------------------------------
 
-	public void startCheckpointScheduler() throws Exception {
+	public void startCheckpointScheduler() {
 		synchronized (lock) {
 			if (shutdown) {
 				throw new IllegalArgumentException("Checkpoint coordinator is shut down");
@@ -918,7 +907,7 @@ public class CheckpointCoordinator {
 		}
 	}
 
-	public void stopCheckpointScheduler() throws Exception {
+	public void stopCheckpointScheduler() {
 		synchronized (lock) {
 			triggerRequestQueued = false;
 			periodicScheduling = false;
@@ -929,10 +918,14 @@ public class CheckpointCoordinator {
 			}
 
 			for (PendingCheckpoint p : pendingCheckpoints.values()) {
-				p.abortError(new Exception("Checkpoint Coordinator is suspending."));
+				try {
+					p.abortError(new Exception("Checkpoint Coordinator is suspending."));
+				} catch (Throwable t) {
+					LOG.error("Error while disposing pending checkpoint", t);
+				}
 			}
-			pendingCheckpoints.clear();
 
+			pendingCheckpoints.clear();
 			numUnsuccessfulCheckpointsTriggers = 0;
 		}
 	}
@@ -941,17 +934,14 @@ public class CheckpointCoordinator {
 	//  job status listener that schedules / cancels periodic checkpoints
 	// ------------------------------------------------------------------------
 
-	public ActorGateway createActivatorDeactivator(ActorSystem actorSystem, UUID leaderSessionID) {
+	public JobStatusListener createActivatorDeactivator() {
 		synchronized (lock) {
 			if (shutdown) {
 				throw new IllegalArgumentException("Checkpoint coordinator is shut down");
 			}
 
 			if (jobStatusListener == null) {
-				Props props = Props.create(CheckpointCoordinatorDeActivator.class, this, leaderSessionID);
-
-				// wrap the ActorRef in a AkkaActorGateway to support message decoration
-				jobStatusListener = new AkkaActorGateway(actorSystem.actorOf(props), leaderSessionID);
+				jobStatusListener = new CheckpointCoordinatorDeActivator(this);
 			}
 
 			return jobStatusListener;

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorDeActivator.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorDeActivator.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorDeActivator.java
index 7e26f71..2e23d6a 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorDeActivator.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorDeActivator.java
@@ -18,51 +18,32 @@
 
 package org.apache.flink.runtime.checkpoint;
 
-import org.apache.flink.runtime.akka.FlinkUntypedActor;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.executiongraph.JobStatusListener;
 import org.apache.flink.runtime.jobgraph.JobStatus;
-import org.apache.flink.runtime.messages.ExecutionGraphMessages;
-import org.apache.flink.util.Preconditions;
 
-import java.util.UUID;
+import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
  * This actor listens to changes in the JobStatus and activates or deactivates the periodic
  * checkpoint scheduler.
  */
-public class CheckpointCoordinatorDeActivator extends FlinkUntypedActor {
+public class CheckpointCoordinatorDeActivator implements JobStatusListener {
 
 	private final CheckpointCoordinator coordinator;
-	private final UUID leaderSessionID;
-	
-	public CheckpointCoordinatorDeActivator(
-			CheckpointCoordinator coordinator,
-			UUID leaderSessionID) {
 
-		LOG.info("Create CheckpointCoordinatorDeActivator");
-
-		this.coordinator = Preconditions.checkNotNull(coordinator, "The checkpointCoordinator must not be null.");
-		this.leaderSessionID = leaderSessionID;
+	public CheckpointCoordinatorDeActivator(CheckpointCoordinator coordinator) {
+		this.coordinator = checkNotNull(coordinator);
 	}
 
 	@Override
-	public void handleMessage(Object message) throws Exception {
-		if (message instanceof ExecutionGraphMessages.JobStatusChanged) {
-			JobStatus status = ((ExecutionGraphMessages.JobStatusChanged) message).newJobStatus();
-			
-			if (status == JobStatus.RUNNING) {
-				// start the checkpoint scheduler
-				coordinator.startCheckpointScheduler();
-			} else {
-				// anything else should stop the trigger for now
-				coordinator.stopCheckpointScheduler();
-			}
+	public void jobStatusChanges(JobID jobId, JobStatus newJobStatus, long timestamp, Throwable error) {
+		if (newJobStatus == JobStatus.RUNNING) {
+			// start the checkpoint scheduler
+			coordinator.startCheckpointScheduler();
+		} else {
+			// anything else should stop the trigger for now
+			coordinator.stopCheckpointScheduler();
 		}
-		
-		// we ignore all other messages
-	}
-
-	@Override
-	public UUID getLeaderSessionID() {
-		return leaderSessionID;
 	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
index 12d8e66..7a94c0f 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
@@ -18,7 +18,6 @@
 
 package org.apache.flink.runtime.executiongraph;
 
-import akka.actor.ActorSystem;
 import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.api.common.JobID;
 import org.apache.flink.api.common.accumulators.Accumulator;
@@ -41,7 +40,6 @@ import org.apache.flink.runtime.checkpoint.stats.CheckpointStatsTracker;
 import org.apache.flink.runtime.execution.ExecutionState;
 import org.apache.flink.runtime.execution.SuppressRestartsException;
 import org.apache.flink.runtime.executiongraph.restart.RestartStrategy;
-import org.apache.flink.runtime.instance.ActorGateway;
 import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
 import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
 import org.apache.flink.runtime.jobgraph.JobStatus;
@@ -50,15 +48,16 @@ import org.apache.flink.runtime.jobgraph.JobVertexID;
 import org.apache.flink.runtime.jobgraph.ScheduleMode;
 import org.apache.flink.runtime.jobmanager.scheduler.CoLocationGroup;
 import org.apache.flink.runtime.jobmanager.scheduler.Scheduler;
-import org.apache.flink.runtime.messages.ExecutionGraphMessages;
 import org.apache.flink.runtime.query.KvStateLocationRegistry;
 import org.apache.flink.runtime.taskmanager.TaskExecutionState;
 import org.apache.flink.runtime.util.SerializableObject;
 import org.apache.flink.runtime.util.SerializedThrowable;
 import org.apache.flink.util.ExceptionUtils;
 import org.apache.flink.util.SerializedValue;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
 import scala.concurrent.ExecutionContext;
 import scala.concurrent.duration.FiniteDuration;
 
@@ -75,12 +74,12 @@ import java.util.List;
 import java.util.Map;
 import java.util.NoSuchElementException;
 import java.util.Objects;
-import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.CopyOnWriteArrayList;
 import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
  * The execution graph is the central data structure that coordinates the distributed
  * execution of a data flow. It keeps representations of each parallel task, each
@@ -151,12 +150,12 @@ public class ExecutionGraph {
 	 * accessible on all nodes in the cluster. */
 	private final List<URL> requiredClasspaths;
 
-	/** Listeners that receive messages when the entire job switches it status (such as from
-	 * RUNNING to FINISHED) */
-	private final List<ActorGateway> jobStatusListenerActors;
+	/** Listeners that receive messages when the entire job switches its status
+	 * (such as from RUNNING to FINISHED) */
+	private final List<JobStatusListener> jobStatusListeners;
 
 	/** Listeners that receive messages whenever a single task execution changes its status */
-	private final List<ActorGateway> executionListenerActors;
+	private final List<ExecutionStatusListener> executionListeners;
 
 	/** Timestamps (in milliseconds as returned by {@code System.currentTimeMillis()}) when
 	 * the execution graph transitioned into a certain state. The index into this array is the
@@ -284,8 +283,8 @@ public class ExecutionGraph {
 		this.verticesInCreationOrder = new ArrayList<ExecutionJobVertex>();
 		this.currentExecutions = new ConcurrentHashMap<ExecutionAttemptID, Execution>();
 
-		this.jobStatusListenerActors  = new CopyOnWriteArrayList<ActorGateway>();
-		this.executionListenerActors = new CopyOnWriteArrayList<ActorGateway>();
+		this.jobStatusListeners  = new CopyOnWriteArrayList<>();
+		this.executionListeners = new CopyOnWriteArrayList<>();
 
 		this.stateTimestamps = new long[JobStatus.values().length];
 		this.stateTimestamps[JobStatus.CREATED.ordinal()] = System.currentTimeMillis();
@@ -345,8 +344,6 @@ public class ExecutionGraph {
 			List<ExecutionJobVertex> verticesToTrigger,
 			List<ExecutionJobVertex> verticesToWaitFor,
 			List<ExecutionJobVertex> verticesToCommitTo,
-			ActorSystem actorSystem,
-			UUID leaderSessionID,
 			CheckpointIDCounter checkpointIDCounter,
 			CompletedCheckpointStore checkpointStore,
 			SavepointStore savepointStore,
@@ -388,8 +385,7 @@ public class ExecutionGraph {
 
 		// the periodic checkpoint scheduler is activated and deactivated as a result of
 		// job status changes (running -> on, all other states -> off)
-		registerJobStatusListener(
-				checkpointCoordinator.createActivatorDeactivator(actorSystem, leaderSessionID));
+		registerJobStatusListener(checkpointCoordinator.createActivatorDeactivator());
 	}
 
 	/**
@@ -935,8 +931,8 @@ public class ExecutionGraph {
 		intermediateResults.clear();
 		currentExecutions.clear();
 		requiredJarFiles.clear();
-		jobStatusListenerActors.clear();
-		executionListenerActors.clear();
+		jobStatusListeners.clear();
+		executionListeners.clear();
 
 		isArchived = true;
 	}
@@ -1173,45 +1169,52 @@ public class ExecutionGraph {
 	//  Listeners & Observers
 	// --------------------------------------------------------------------------------------------
 
-	public void registerJobStatusListener(ActorGateway listener) {
+	public void registerJobStatusListener(JobStatusListener listener) {
 		if (listener != null) {
-			this.jobStatusListenerActors.add(listener);
+			jobStatusListeners.add(listener);
 		}
 	}
 
-	public void registerExecutionListener(ActorGateway listener) {
+	public void registerExecutionListener(ExecutionStatusListener listener) {
 		if (listener != null) {
-			this.executionListenerActors.add(listener);
+			executionListeners.add(listener);
 		}
 	}
 
 	private void notifyJobStatusChange(JobStatus newState, Throwable error) {
-		if (jobStatusListenerActors.size() > 0) {
-			ExecutionGraphMessages.JobStatusChanged message =
-					new ExecutionGraphMessages.JobStatusChanged(jobID, newState, System.currentTimeMillis(),
-							error == null ? null : new SerializedThrowable(error));
-
-			for (ActorGateway listener: jobStatusListenerActors) {
-				listener.tell(message);
+		if (jobStatusListeners.size() > 0) {
+			final long timestamp = System.currentTimeMillis();
+			final Throwable serializedError = error == null ? null : new SerializedThrowable(error);
+
+			for (JobStatusListener listener : jobStatusListeners) {
+				try {
+					listener.jobStatusChanges(jobID, newState, timestamp, serializedError);
+				} catch (Throwable t) {
+					LOG.warn("Error while notifying JobStatusListener", t);
+				}
 			}
 		}
 	}
 
-	void notifyExecutionChange(JobVertexID vertexId, int subtask, ExecutionAttemptID executionID, ExecutionState
-							newExecutionState, Throwable error)
+	void notifyExecutionChange(
+			JobVertexID vertexId, int subtask, ExecutionAttemptID executionID,
+			ExecutionState newExecutionState, Throwable error)
 	{
 		ExecutionJobVertex vertex = getJobVertex(vertexId);
 
-		if (executionListenerActors.size() > 0) {
-			String message = error == null ? null : ExceptionUtils.stringifyException(error);
-			ExecutionGraphMessages.ExecutionStateChanged actorMessage =
-					new ExecutionGraphMessages.ExecutionStateChanged(jobID, vertexId,  vertex.getJobVertex().getName(),
-																	vertex.getParallelism(), subtask,
-																	executionID, newExecutionState,
-																	System.currentTimeMillis(), message);
-
-			for (ActorGateway listener : executionListenerActors) {
-				listener.tell(actorMessage);
+		if (executionListeners.size() > 0) {
+			final String message = error == null ? null : ExceptionUtils.stringifyException(error);
+			final long timestamp = System.currentTimeMillis();
+
+			for (ExecutionStatusListener listener : executionListeners) {
+				try {
+					listener.executionStatusChanged(
+							jobID, vertexId, vertex.getJobVertex().getName(),
+							vertex.getParallelism(), subtask, executionID, newExecutionState,
+							timestamp, message);
+				} catch (Throwable t) {
+					LOG.warn("Error while notifying ExecutionStatusListener", t);
+				}
 			}
 		}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionStatusListener.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionStatusListener.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionStatusListener.java
new file mode 100644
index 0000000..6fb5a1a
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionStatusListener.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.executiongraph;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.execution.ExecutionState;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+
+/**
+ * Interface for observers that monitor the status of individual task executions.
+ */
+public interface ExecutionStatusListener {
+
+	/**
+	 * Called whenever the execution status of a task changes.
+	 * 
+	 * @param jobID                  The ID of the job
+	 * @param vertexID               The ID of the task vertex
+	 * @param taskName               The name of the task
+	 * @param totalNumberOfSubTasks  The parallelism of the task
+	 * @param subtaskIndex           The subtask's parallel index
+	 * @param executionID            The ID of the execution attempt
+	 * @param newExecutionState      The status to which the task switched
+	 * @param timestamp              The timestamp when the change occurred. Informational only.
+	 * @param optionalMessage        An optional message attached to the status change, like an
+	 *                               exception message.
+	 */
+	void executionStatusChanged(
+			JobID jobID,
+			JobVertexID vertexID,
+			String taskName,
+			int totalNumberOfSubTasks,
+			int subtaskIndex,
+			ExecutionAttemptID executionID,
+			ExecutionState newExecutionState,
+			long timestamp,
+			String optionalMessage);
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/JobStatusListener.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/JobStatusListener.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/JobStatusListener.java
new file mode 100644
index 0000000..1d97a5c
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/JobStatusListener.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.executiongraph;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.jobgraph.JobStatus;
+
+/**
+ * Interface for observers that monitor the status of a job.
+ */
+public interface JobStatusListener {
+
+	/**
+	 * This method is called whenever the status of the job changes.
+	 * 
+	 * @param jobId         The ID of the job.
+	 * @param newJobStatus  The status the job switched to.
+	 * @param timestamp     The timestamp when the status transition occurred.
+	 * @param error         In case the job status switches to a failure state, this is the
+	 *                      exception that caused the failure.
+	 */
+	void jobStatusChanges(JobID jobId, JobStatus newJobStatus, long timestamp, Throwable error);
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/StatusListenerMessenger.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/StatusListenerMessenger.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/StatusListenerMessenger.java
new file mode 100644
index 0000000..01f1e75
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/StatusListenerMessenger.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.executiongraph;
+
+import akka.actor.ActorRef;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.execution.ExecutionState;
+import org.apache.flink.runtime.instance.AkkaActorGateway;
+import org.apache.flink.runtime.jobgraph.JobStatus;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.messages.ExecutionGraphMessages;
+import org.apache.flink.runtime.util.SerializedThrowable;
+
+import java.util.UUID;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * A {@code JobStatusListener} and {@code ExecutionStatusListener} that sends an actor message
+ * for each status change.
+ */
+public class StatusListenerMessenger implements JobStatusListener, ExecutionStatusListener {
+
+	private final AkkaActorGateway target;
+
+	public StatusListenerMessenger(ActorRef target, UUID leaderSessionId) {
+		this.target = new AkkaActorGateway(checkNotNull(target), leaderSessionId);
+	}
+
+	@Override
+	public void jobStatusChanges(JobID jobId, JobStatus newJobStatus, long timestamp, Throwable error) {
+		ExecutionGraphMessages.JobStatusChanged message =
+				new ExecutionGraphMessages.JobStatusChanged(jobId, newJobStatus, timestamp,
+						error == null ? null : new SerializedThrowable(error));
+
+		target.tell(message);
+	}
+
+	@Override
+	public void executionStatusChanged(
+			JobID jobID, JobVertexID vertexID,
+			String taskName, int taskParallelism, int subtaskIndex,
+			ExecutionAttemptID executionID, ExecutionState newExecutionState,
+			long timestamp, String optionalMessage) {
+		
+		ExecutionGraphMessages.ExecutionStateChanged message = 
+				new ExecutionGraphMessages.ExecutionStateChanged(
+					jobID, vertexID, taskName, taskParallelism, subtaskIndex,
+					executionID, newExecutionState, timestamp, optionalMessage);
+
+		target.tell(message);
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
index 34fed3f..0587987 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
@@ -51,7 +51,7 @@ import org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceMa
 import org.apache.flink.runtime.clusterframework.types.ResourceID
 import org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager
 import org.apache.flink.runtime.executiongraph.restart.RestartStrategyFactory
-import org.apache.flink.runtime.executiongraph.{ExecutionGraph, ExecutionJobVertex}
+import org.apache.flink.runtime.executiongraph.{StatusListenerMessenger, ExecutionGraph, ExecutionJobVertex}
 import org.apache.flink.runtime.instance.{AkkaActorGateway, InstanceManager}
 import org.apache.flink.runtime.jobgraph.jsonplan.JsonPlanGenerator
 import org.apache.flink.runtime.jobgraph.{JobGraph, JobStatus, JobVertexID}
@@ -1249,8 +1249,6 @@ class JobManager(
             triggerVertices,
             ackVertices,
             confirmVertices,
-            context.system,
-            leaderSessionID.orNull,
             checkpointIdCounter,
             completedCheckpoints,
             savepointStore,
@@ -1259,14 +1257,14 @@ class JobManager(
 
         // get notified about job status changes
         executionGraph.registerJobStatusListener(
-          new AkkaActorGateway(self, leaderSessionID.orNull))
+          new StatusListenerMessenger(self, leaderSessionID.orNull))
 
         if (jobInfo.listeningBehaviour == ListeningBehaviour.EXECUTION_RESULT_AND_STATE_CHANGES) {
           // the sender wants to be notified about state changes
-          val gateway = new AkkaActorGateway(jobInfo.client, leaderSessionID.orNull)
+          val listener  = new StatusListenerMessenger(jobInfo.client, leaderSessionID.orNull)
 
-          executionGraph.registerExecutionListener(gateway)
-          executionGraph.registerJobStatusListener(gateway)
+          executionGraph.registerExecutionListener(listener)
+          executionGraph.registerJobStatusListener(listener)
         }
       } catch {
         case t: Throwable =>

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ExecutionGraphCheckpointCoordinatorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ExecutionGraphCheckpointCoordinatorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ExecutionGraphCheckpointCoordinatorTest.java
index 7b05fd7..49a9449 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ExecutionGraphCheckpointCoordinatorTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ExecutionGraphCheckpointCoordinatorTest.java
@@ -40,7 +40,6 @@ import scala.concurrent.duration.FiniteDuration;
 
 import java.net.URL;
 import java.util.Collections;
-import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 
 import static org.mockito.Mockito.mock;
@@ -117,8 +116,6 @@ public class ExecutionGraphCheckpointCoordinatorTest {
 				Collections.<ExecutionJobVertex>emptyList(),
 				Collections.<ExecutionJobVertex>emptyList(),
 				Collections.<ExecutionJobVertex>emptyList(),
-				system,
-				UUID.randomUUID(),
 				counter,
 				store,
 				new HeapSavepointStore(),

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/LeaderChangeJobRecoveryTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/LeaderChangeJobRecoveryTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/LeaderChangeJobRecoveryTest.java
index 57de2cd..450f9fb 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/LeaderChangeJobRecoveryTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/LeaderChangeJobRecoveryTest.java
@@ -18,28 +18,28 @@
 
 package org.apache.flink.runtime.leaderelection;
 
-import akka.actor.ActorRef;
 import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.JobID;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.testutils.OneShotLatch;
 import org.apache.flink.runtime.executiongraph.ExecutionGraph;
+import org.apache.flink.runtime.executiongraph.JobStatusListener;
 import org.apache.flink.runtime.instance.ActorGateway;
-import org.apache.flink.runtime.instance.AkkaActorGateway;
 import org.apache.flink.runtime.jobgraph.DistributionPattern;
 import org.apache.flink.runtime.jobgraph.JobGraph;
 import org.apache.flink.runtime.jobgraph.JobStatus;
 import org.apache.flink.runtime.jobgraph.JobVertex;
 import org.apache.flink.runtime.jobmanager.Tasks;
 import org.apache.flink.runtime.jobmanager.scheduler.SlotSharingGroup;
-import org.apache.flink.runtime.messages.ExecutionGraphMessages;
 import org.apache.flink.runtime.testingUtils.TestingJobManagerMessages;
 import org.apache.flink.util.TestLogger;
+
 import org.junit.Before;
 import org.junit.Test;
+
 import scala.concurrent.Await;
-import scala.concurrent.ExecutionContext;
 import scala.concurrent.Future;
-import scala.concurrent.Promise;
 import scala.concurrent.duration.FiniteDuration;
 
 import java.util.UUID;
@@ -113,15 +113,12 @@ public class LeaderChangeJobRecoveryTest extends TestLogger {
 
 		ExecutionGraph executionGraph = ((TestingJobManagerMessages.ExecutionGraphFound) responseExecutionGraph).executionGraph();
 
-		TestActorGateway testActorGateway = new TestActorGateway();
-
-		executionGraph.registerJobStatusListener(testActorGateway);
+		TestJobStatusListener testListener = new TestJobStatusListener();
+		executionGraph.registerJobStatusListener(testListener);
 
 		cluster.revokeLeadership();
 
-		Future<Boolean> hasReachedTerminalState = testActorGateway.hasReachedTerminalState();
-
-		assertTrue("The job should have reached a terminal state.", Await.result(hasReachedTerminalState, timeout));
+		testListener.waitForTerminalState(30000);
 	}
 
 	public JobGraph createBlockingJob(int parallelism) {
@@ -150,59 +147,19 @@ public class LeaderChangeJobRecoveryTest extends TestLogger {
 		return jobGraph;
 	}
 
-	public static class TestActorGateway implements ActorGateway {
-
-		private static final long serialVersionUID = -736146686160538227L;
-		private transient Promise<Boolean> terminalState = new scala.concurrent.impl.Promise.DefaultPromise<>();
-
-		public Future<Boolean> hasReachedTerminalState() {
-			return terminalState.future();
-		}
+	public static class TestJobStatusListener implements JobStatusListener {
 
-		@Override
-		public Future<Object> ask(Object message, FiniteDuration timeout) {
-			return null;
-		}
+		private final OneShotLatch terminalStateLatch = new OneShotLatch();
 
-		@Override
-		public void tell(Object message) {
-			this.tell(message, new AkkaActorGateway(ActorRef.noSender(), null));
+		public void waitForTerminalState(long timeoutMillis) throws InterruptedException, TimeoutException {
+			terminalStateLatch.await(timeoutMillis, TimeUnit.MILLISECONDS);
 		}
 
 		@Override
-		public void tell(Object message, ActorGateway sender) {
-			if (message instanceof ExecutionGraphMessages.JobStatusChanged) {
-				ExecutionGraphMessages.JobStatusChanged jobStatusChanged = (ExecutionGraphMessages.JobStatusChanged) message;
-
-				if (jobStatusChanged.newJobStatus().isGloballyTerminalState() || jobStatusChanged.newJobStatus() == JobStatus.SUSPENDED) {
-					terminalState.success(true);
-				}
+		public void jobStatusChanges(JobID jobId, JobStatus newJobStatus, long timestamp, Throwable error) {
+			if (newJobStatus.isGloballyTerminalState() || newJobStatus == JobStatus.SUSPENDED) {
+				terminalStateLatch.trigger();
 			}
 		}
-
-		@Override
-		public void forward(Object message, ActorGateway sender) {
-
-		}
-
-		@Override
-		public Future<Object> retry(Object message, int numberRetries, FiniteDuration timeout, ExecutionContext executionContext) {
-			return null;
-		}
-
-		@Override
-		public String path() {
-			return null;
-		}
-
-		@Override
-		public ActorRef actor() {
-			return null;
-		}
-
-		@Override
-		public UUID leaderSessionID() {
-			return null;
-		}
 	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/OneShotLatch.java
----------------------------------------------------------------------
diff --git a/flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/OneShotLatch.java b/flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/OneShotLatch.java
index 54ac110..0418bf5 100644
--- a/flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/OneShotLatch.java
+++ b/flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/OneShotLatch.java
@@ -18,6 +18,9 @@
 
 package org.apache.flink.core.testutils;
 
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
 /**
  * Latch for synchronizing parts of code in tests. Once the latch has fired, subsequent
  * calls to {@link #await()} will return immediately.
@@ -44,6 +47,8 @@ public final class OneShotLatch {
 	/**
 	 * Waits until {@link #trigger()} is called. Once {@code #trigger()} has been called this
 	 * call will always return immediately.
+	 * 
+	 * @throws InterruptedException Thrown if the thread is interrupted while waiting.
 	 */
 	public void await() throws InterruptedException {
 		synchronized (lock) {
@@ -52,4 +57,54 @@ public final class OneShotLatch {
 			}
 		}
 	}
+
+	/**
+	 * Waits until {@link #trigger()} is called. Once {@code #trigger()} has been called this
+	 * call will always return immediately.
+	 * 
+	 * <p>If the latch is not triggered within the given timeout, a {@code TimeoutException}
+	 * will be thrown after the timeout.
+	 * 
+	 * <p>A timeout value of zero means infinite timeout and makes this call equivalent to {@link #await()}.
+	 * 
+	 * @param timeout   The value of the timeout, a value of zero indicating infinite timeout.
+	 * @param timeUnit  The unit of the timeout
+	 * 
+	 * @throws InterruptedException Thrown if the thread is interrupted while waiting.
+	 * @throws TimeoutException Thrown if the latch is not triggered within the timeout.
+	 */
+	public void await(long timeout, TimeUnit timeUnit) throws InterruptedException, TimeoutException {
+		if (timeout < 0) {
+			throw new IllegalArgumentException("time may not be negative");
+		}
+		if (timeUnit == null) {
+			throw new NullPointerException("timeUnit");
+		}
+
+		if (timeout == 0) {
+			await();
+		} else {
+			final long deadline = System.nanoTime() + timeUnit.toNanos(timeout);
+			long millisToWait;
+
+			synchronized (lock) {
+				while (!triggered && (millisToWait = (deadline - System.nanoTime()) / 1_000_000) > 0) {
+					lock.wait(millisToWait);
+				}
+
+				if (!triggered) {
+					throw new TimeoutException();
+				}
+			}
+		}
+	}
+
+	/**
+	 * Checks if the latch was triggered.
+	 * 
+	 * @return True, if the latch was triggered, false if not.
+	 */
+	public boolean isTriggered() {
+		return triggered;
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/635c8693/flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/core/testutils/OneShotLatchTest.java
----------------------------------------------------------------------
diff --git a/flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/core/testutils/OneShotLatchTest.java b/flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/core/testutils/OneShotLatchTest.java
new file mode 100644
index 0000000..575c84c
--- /dev/null
+++ b/flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/core/testutils/OneShotLatchTest.java
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.core.testutils;
+
+import org.junit.Test;
+
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public class OneShotLatchTest {
+
+	@Test
+	public void testAwaitWithTimeout() throws Exception {
+		OneShotLatch latch = new OneShotLatch();
+		assertFalse(latch.isTriggered());
+
+		try {
+			latch.await(1, TimeUnit.MILLISECONDS);
+			fail("should fail with a TimeoutException");
+		} catch (TimeoutException e) {
+			// expected
+		}
+
+		assertFalse(latch.isTriggered());
+
+		latch.trigger();
+		assertTrue(latch.isTriggered());
+
+		latch.await(100, TimeUnit.DAYS);
+		assertTrue(latch.isTriggered());
+
+		latch.await(0, TimeUnit.MILLISECONDS);
+		assertTrue(latch.isTriggered());
+	}
+}


[61/89] [abbrv] flink git commit: [hotfix] Reduce string concatenations in ExecutionVertex

Posted by se...@apache.org.
[hotfix] Reduce string concatenations in ExecutionVertex


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/4e45659a
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/4e45659a
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/4e45659a

Branch: refs/heads/flip-6
Commit: 4e45659a5abefbfbd693e3754a19fab57b405427
Parents: 42f65e4
Author: Stephan Ewen <se...@apache.org>
Authored: Wed Aug 24 14:02:25 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Wed Aug 24 19:27:29 2016 +0200

----------------------------------------------------------------------
 .../org/apache/flink/runtime/executiongraph/ExecutionVertex.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/4e45659a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java
index 08bf57f..2495316 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java
@@ -702,7 +702,7 @@ public class ExecutionVertex {
 	 * @return A simple name representation.
 	 */
 	public String getSimpleName() {
-		return getTaskName() + " (" + (getParallelSubtaskIndex()+1) + '/' + getTotalNumberOfParallelSubtasks() + ')';
+		return taskNameWithSubtask;
 	}
 
 	@Override


[33/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/table.md
----------------------------------------------------------------------
diff --git a/docs/apis/table.md b/docs/apis/table.md
deleted file mode 100644
index 21a7dbe..0000000
--- a/docs/apis/table.md
+++ /dev/null
@@ -1,2082 +0,0 @@
----
-title: "Table API and SQL"
-is_beta: true
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 4
-top-nav-title: "<strong>Table API and SQL</strong>"
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-
-**Table API and SQL are experimental features**
-
-The Table API is a SQL-like expression language for relational stream and batch processing that can be easily embedded in Flink's DataSet and DataStream APIs (Java and Scala).
-The Table API and SQL interface operate on a relational `Table` abstraction, which can be created from external data sources, or existing DataSets and DataStreams. With the Table API, you can apply relational operators such as selection, aggregation, and joins on `Table`s.
-
-`Table`s can also be queried with regular SQL, as long as they are registered (see [Registering Tables](#registering-tables)). The Table API and SQL offer equivalent functionality and can be mixed in the same program. When a `Table` is converted back into a `DataSet` or `DataStream`, the logical plan, which was defined by relational operators and SQL queries, is optimized using [Apache Calcite](https://calcite.apache.org/) and transformed into a `DataSet` or `DataStream` program.
-
-* This will be replaced by the TOC
-{:toc}
-
-Using the Table API and SQL
-----------------------------
-
-The Table API and SQL are part of the *flink-table* Maven project.
-The following dependency must be added to your project in order to use the Table API and SQL:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-*Note: The Table API is currently not part of the binary distribution. See [here]({{ site.baseurl }}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution) for how to link with it for cluster execution.*
-
-
-Registering Tables
---------------------------------
-
-`TableEnvironment`s have an internal table catalog to which tables can be registered with a unique name. After registration, a table can be accessed from the `TableEnvironment` by its name.
-
-*Note: `DataSet`s or `DataStream`s can be directly converted into `Table`s without registering them in the `TableEnvironment`.*
-
-### Register a DataSet
-
-A `DataSet` is registered as a `Table` in a `BatchTableEnvironment` as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-// register the DataSet cust as table "Customers" with fields derived from the dataset
-tableEnv.registerDataSet("Customers", cust)
-
-// register the DataSet ord as table "Orders" with fields user, product, and amount
-tableEnv.registerDataSet("Orders", ord, "user, product, amount");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-
-// register the DataSet cust as table "Customers" with fields derived from the dataset
-tableEnv.registerDataSet("Customers", cust)
-
-// register the DataSet ord as table "Orders" with fields user, product, and amount
-tableEnv.registerDataSet("Orders", ord, 'user, 'product, 'amount)
-{% endhighlight %}
-</div>
-</div>
-
-*Note: The name of a `DataSet` `Table` must not match the `^_DataSetTable_[0-9]+` pattern which is reserved for internal use only.*
-
-### Register a DataStream
-
-A `DataStream` is registered as a `Table` in a `StreamTableEnvironment` as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-// register the DataStream cust as table "Customers" with fields derived from the datastream
-tableEnv.registerDataStream("Customers", cust)
-
-// register the DataStream ord as table "Orders" with fields user, product, and amount
-tableEnv.registerDataStream("Orders", ord, "user, product, amount");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-
-// register the DataStream cust as table "Customers" with fields derived from the datastream
-tableEnv.registerDataStream("Customers", cust)
-
-// register the DataStream ord as table "Orders" with fields user, product, and amount
-tableEnv.registerDataStream("Orders", ord, 'user, 'product, 'amount)
-{% endhighlight %}
-</div>
-</div>
-
-*Note: The name of a `DataStream` `Table` must not match the `^_DataStreamTable_[0-9]+` pattern which is reserved for internal use only.*
-
-### Register a Table
-
-A `Table` that originates from a Table API operation or a SQL query is registered in a `TableEnvironment` as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// works for StreamExecutionEnvironment identically
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-// convert a DataSet into a Table
-Table custT = tableEnv
-  .toTable(custDs, "name, zipcode")
-  .where("zipcode = '12345'")
-  .select("name")
-
-// register the Table custT as table "custNames"
-tableEnv.registerTable("custNames", custT)
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// works for StreamExecutionEnvironment identically
-val env = ExecutionEnvironment.getExecutionEnvironment
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-
-// convert a DataSet into a Table
-val custT = custDs
-  .toTable(tableEnv, 'name, 'zipcode)
-  .where('zipcode === "12345")
-  .select('name)
-
-// register the Table custT as table "custNames"
-tableEnv.registerTable("custNames", custT)
-{% endhighlight %}
-</div>
-</div>
-
-A registered `Table` that originates from a Table API operation or a SQL query is treated similarly to a view in a relational DBMS, i.e., it can be inlined when the query is optimized.
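-
-For example (a minimal sketch; it assumes the `custNames` table registered above and that SQL queries are issued via the `TableEnvironment`'s `sql()` method):
-
-{% highlight java %}
-// the logical plan behind "custNames" is inlined when this query is optimized
-Table result = tableEnv.sql("SELECT name FROM custNames");
-{% endhighlight %}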
-
-### Register an external Table using a TableSource
-
-An external table is registered in a `TableEnvironment` using a `TableSource` as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// works for StreamExecutionEnvironment identically
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-TableSource custTS = new CsvTableSource("/path/to/file", ...);
-
-// register a `TableSource` as external table "Customers"
-tableEnv.registerTableSource("Customers", custTS)
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// works for StreamExecutionEnvironment identically
-val env = ExecutionEnvironment.getExecutionEnvironment
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-
-val custTS: TableSource = new CsvTableSource("/path/to/file", ...)
-
-// register a `TableSource` as external table "Customers"
-tableEnv.registerTableSource("Customers", custTS)
-
-{% endhighlight %}
-</div>
-</div>
-
-A `TableSource` can provide access to data stored in various storage systems such as databases (MySQL, HBase, ...), file formats (CSV, Apache Parquet, Avro, ORC, ...), or messaging systems (Apache Kafka, RabbitMQ, ...).
-
-Currently, Flink provides the `CsvTableSource` to read CSV files and the `Kafka08JsonTableSource`/`Kafka09JsonTableSource` to read JSON objects from Kafka. 
-A custom `TableSource` can be defined by implementing the `BatchTableSource` or `StreamTableSource` interface.
-
-### Available Table Sources
-
-| **Class name** | **Maven dependency** | **Batch?** | **Streaming?** | **Description**
-| -------------- | -------------------- | ---------- | -------------- | ---------------
-| `CsvTableSource` | `flink-table` | Y | Y | A simple source for CSV files.
-| `Kafka08JsonTableSource` | `flink-connector-kafka-0.8` | N | Y | A Kafka 0.8 source for JSON data.
-| `Kafka09JsonTableSource` | `flink-connector-kafka-0.9` | N | Y | A Kafka 0.9 source for JSON data.
-
-All sources that come with the `flink-table` dependency can be directly used by your Table programs. For all other table sources, you have to add the respective dependency in addition to the `flink-table` dependency.
-
-#### KafkaJsonTableSource
-
-To use the Kafka JSON source, you have to add the Kafka connector dependency to your project:
-
-  - `flink-connector-kafka-0.8` for Kafka 0.8, and
-  - `flink-connector-kafka-0.9` for Kafka 0.9, respectively.
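-
-For instance, for Kafka 0.8 this means adding the following dependency (a sketch mirroring the `flink-table` dependency above; check the artifact name against your Flink version):
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-kafka-0.8{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}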
-
-You can then create the source as follows (example for Kafka 0.8):
-
-```java
-// The JSON field names and types
-String[] fieldNames =  new String[] { "id", "name", "score"};
-Class<?>[] fieldTypes = new Class<?>[] { Integer.class, String.class, Double.class };
-
-KafkaJsonTableSource kafkaTableSource = new Kafka08JsonTableSource(
-    kafkaTopic,
-    kafkaProperties,
-    fieldNames,
-    fieldTypes);
-```
-
-By default, a missing JSON field does not fail the source. You can configure this via:
-
-```java
-// Fail on missing JSON field
-tableSource.setFailOnMissingField(true);
-```
-
-You can work with the Table as explained in the rest of the Table API guide:
-
-```java
-tableEnvironment.registerTableSource("kafka-source", kafkaTableSource);
-Table result = tableEnvironment.ingest("kafka-source");
-```
-
-#### CsvTableSource
-
-The `CsvTableSource` is already included in `flink-table` without additional dependencies.
-
-It can be configured with the following properties:
-
- - `path` The path to the CSV file, required.
- - `fieldNames` The names of the table fields, required.
- - `fieldTypes` The types of the table fields, required.
- - `fieldDelim` The field delimiter, `","` by default.
- - `rowDelim` The row delimiter, `"\n"` by default.
- - `quoteCharacter` An optional quote character for String values, `null` by default.
- - `ignoreFirstLine` Flag to ignore the first line, `false` by default.
- - `ignoreComments` An optional prefix to indicate comments, `null` by default.
- - `lenient` Flag to skip records with parse errors instead of failing, `false` by default.
-
-You can create the source as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-CsvTableSource csvTableSource = new CsvTableSource(
-    "/path/to/your/file.csv",
-    new String[] { "name", "id", "score", "comments" },
-    new TypeInformation<?>[] {
-      Types.STRING(),
-      Types.INT(),
-      Types.DOUBLE(),
-      Types.STRING()
-    },
-    "#",    // fieldDelim
-    "$",    // rowDelim
-    null,   // quoteCharacter
-    true,   // ignoreFirstLine
-    "%",    // ignoreComments
-    false); // lenient
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val csvTableSource = new CsvTableSource(
-    "/path/to/your/file.csv",
-    Array("name", "id", "score", "comments"),
-    Array(
-      Types.STRING,
-      Types.INT,
-      Types.DOUBLE,
-      Types.STRING
-    ),
-    fieldDelim = "#",
-    rowDelim = "$",
-    ignoreFirstLine = true,
-    ignoreComments = "%")
-{% endhighlight %}
-</div>
-</div>
-
-You can work with the Table as explained in the rest of the Table API guide in both stream and batch `TableEnvironment`s:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-tableEnvironment.registerTableSource("mycsv", csvTableSource);
-
-Table streamTable = streamTableEnvironment.ingest("mycsv");
-
-Table batchTable = batchTableEnvironment.scan("mycsv");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-tableEnvironment.registerTableSource("mycsv", csvTableSource)
-
-val streamTable = streamTableEnvironment.ingest("mycsv")
-
-val batchTable = batchTableEnvironment.scan("mycsv")
-{% endhighlight %}
-</div>
-</div>
-
-
-Table API
-----------
-The Table API provides methods to apply relational operations on DataSets and DataStreams, both in Scala and Java.
-
-The central concept of the Table API is a `Table` which represents a table with relational schema (or relation). Tables can be created from a `DataSet` or `DataStream`, converted into a `DataSet` or `DataStream`, or registered in a table catalog using a `TableEnvironment`. A `Table` is always bound to a specific `TableEnvironment`. It is not possible to combine Tables of different TableEnvironments.
-
-*Note: The only operations currently supported on streaming Tables are selection, projection, and union.*
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-When using Flink's Java DataSet API, DataSets are converted to Tables and Tables to DataSets using a `TableEnvironment`.
-The following example shows:
-
-- how a `DataSet` is converted to a `Table`,
-- how relational queries are specified, and
-- how a `Table` is converted back to a `DataSet`.
-
-{% highlight java %}
-public class WC {
-
-  public WC(String word, int count) {
-    this.word = word; this.count = count;
-  }
-
-  public WC() {} // empty constructor to satisfy POJO requirements
-
-  public String word;
-  public int count;
-}
-
-...
-
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-BatchTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
-
-DataSet<WC> input = env.fromElements(
-        new WC("Hello", 1),
-        new WC("Ciao", 1),
-        new WC("Hello", 1));
-
-Table table = tEnv.fromDataSet(input);
-
-Table wordCounts = table
-        .groupBy("word")
-        .select("word, count.sum as count");
-
-DataSet<WC> result = tEnv.toDataSet(wordCounts, WC.class);
-{% endhighlight %}
-
-With Java, expressions must be specified by Strings. The embedded expression DSL is not supported.
-
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-// register the DataSet cust as table "Customers" with fields derived from the dataset
-tableEnv.registerDataSet("Customers", cust)
-
-// register the DataSet ord as table "Orders" with fields user, product, and amount
-tableEnv.registerDataSet("Orders", ord, "user, product, amount");
-{% endhighlight %}
-
-Please refer to the Javadoc for a full list of supported operations and a description of the expression syntax.
-</div>
-
-<div data-lang="scala" markdown="1">
-The Table API is enabled by importing `org.apache.flink.api.scala.table._`. This enables
-implicit conversions to convert a `DataSet` or `DataStream` to a Table. The following example shows:
-
-- how a `DataSet` is converted to a `Table`,
-- how relational queries are specified, and
-- how a `Table` is converted back to a `DataSet`.
-
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.api.scala.table._
-
-case class WC(word: String, count: Int)
-
-val env = ExecutionEnvironment.getExecutionEnvironment
-val tEnv = TableEnvironment.getTableEnvironment(env)
-
-val input = env.fromElements(WC("hello", 1), WC("hello", 1), WC("ciao", 1))
-val expr = input.toTable(tEnv)
-val result = expr
-               .groupBy('word)
-               .select('word, 'count.sum as 'count)
-               .toDataSet[WC]
-{% endhighlight %}
-
-The expression DSL uses Scala symbols to refer to field names and code generation to
-transform expressions to efficient runtime code. Please note that the conversion to and from
-Tables only works when using Scala case classes or Java POJOs. Please refer to the [Type Extraction and Serialization]({{ site.baseurl }}/internals/types_serialization.html) section
-to learn the characteristics of a valid POJO.
-
-Another example shows how to join two Tables:
-
-{% highlight scala %}
-case class MyResult(a: String, d: Int)
-
-val input1 = env.fromElements(...).toTable(tEnv).as('a, 'b)
-val input2 = env.fromElements(...).toTable(tEnv, 'c, 'd)
-
-val joined = input1.join(input2)
-               .where("a = c && d > 42")
-               .select("a, d")
-               .toDataSet[MyResult]
-{% endhighlight %}
-
-Notice how the field names of a Table can be changed with `as()` or specified with `toTable()` when converting a DataSet to a Table. In addition, the example shows how to use Strings to specify relational expressions.
-
-Creating a `Table` from a `DataStream` works in a similar way.
-The following example shows how to convert a `DataStream` to a `Table` and filter it with the Table API.
-
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.api.scala.table._
-
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-val tEnv = TableEnvironment.getTableEnvironment(env)
-
-val inputStream = env.addSource(...)
-val result = inputStream
-                .toTable(tEnv, 'a, 'b, 'c)
-                .filter('a === 3)
-val resultStream = result.toDataStream[Row]
-{% endhighlight %}
-
-Please refer to the Scaladoc for a full list of supported operations and a description of the expression syntax.
-</div>
-</div>
-
-{% top %}
-
-
-### Access a registered Table
-
-A registered table can be accessed from a `TableEnvironment` as follows:
-
-- `tEnv.scan("tName")` scans a `Table` that was registered as `"tName"` in a `BatchTableEnvironment`.
-- `tEnv.ingest("tName")` ingests a `Table` that was registered as `"tName"` in a `StreamTableEnvironment`.
-
-{% top %}
-
-### Table API Operators
-
-The Table API features a domain-specific language to execute language-integrated queries on structured data in Scala and Java.
-This section gives a brief overview of the available operators. You can find more details of operators in the [Javadoc]({{site.baseurl}}/api/java/org/apache/flink/api/table/Table.html).
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Operators</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>Select</strong></td>
-      <td>
-        <p>Similar to a SQL SELECT statement. Performs a select operation.</p>
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.select("a, c as d");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>As</strong></td>
-      <td>
-        <p>Renames fields.</p>
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.as("d, e, f");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Where / Filter</strong></td>
-      <td>
-        <p>Similar to a SQL WHERE clause. Filters out rows that do not pass the filter predicate.</p>
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.where("b = 'red'");
-{% endhighlight %}
-or
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.filter("a % 2 = 0");
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>GroupBy</strong></td>
-      <td>
-        <p>Similar to a SQL GROUP BY clause. Groups the rows on the grouping keys, with a following aggregation
-        operator to aggregate rows group-wise.</p>
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.groupBy("a").select("a, b.sum as d");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Join</strong></td>
-      <td>
-        <p>Similar to a SQL JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined through the join operator or using a where or filter operator.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "d, e, f");
-Table result = left.join(right).where("a = d").select("a, b, e");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>LeftOuterJoin</strong></td>
-      <td>
-        <p>Similar to a SQL LEFT OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "d, e, f");
-Table result = left.leftOuterJoin(right, "a = d").select("a, b, e");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>RightOuterJoin</strong></td>
-      <td>
-        <p>Similar to a SQL RIGHT OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "d, e, f");
-Table result = left.rightOuterJoin(right, "a = d").select("a, b, e");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>FullOuterJoin</strong></td>
-      <td>
-        <p>Similar to a SQL FULL OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "d, e, f");
-Table result = left.fullOuterJoin(right, "a = d").select("a, b, e");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Union</strong></td>
-      <td>
-        <p>Similar to a SQL UNION clause. Unions two tables with duplicate records removed. Both tables must have identical field types.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "a, b, c");
-Table result = left.union(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>UnionAll</strong></td>
-      <td>
-        <p>Similar to a SQL UNION ALL clause. Unions two tables. Both tables must have identical field types.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "a, b, c");
-Table result = left.unionAll(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Intersect</strong></td>
-      <td>
-        <p>Similar to a SQL INTERSECT clause. Intersect returns records that exist in both tables. If a record is present in one or both tables more than once, it is returned just once, i.e., the resulting table has no duplicate records. Both tables must have identical field types.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "d, e, f");
-Table result = left.intersect(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>IntersectAll</strong></td>
-      <td>
-        <p>Similar to a SQL INTERSECT ALL clause. IntersectAll returns records that exist in both tables. If a record is present in both tables more than once, it is returned as many times as it is present in both tables, i.e., the resulting table might have duplicate records. Both tables must have identical field types.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "d, e, f");
-Table result = left.intersectAll(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Minus</strong></td>
-      <td>
-        <p>Similar to a SQL EXCEPT clause. Minus returns records from the left table that do not exist in the right table. Duplicate records in the left table are returned exactly once, i.e., duplicates are removed. Both tables must have identical field types.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "a, b, c");
-Table result = left.minus(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>MinusAll</strong></td>
-      <td>
-        <p>Similar to a SQL EXCEPT ALL clause. MinusAll returns the records from the left table that do not exist in the right table. A record that is present n times in the left table and m times in the right table is returned (n - m) times, i.e., as many duplicates as are present in the right table are removed. Both tables must have identical field types.</p>
-{% highlight java %}
-Table left = tableEnv.fromDataSet(ds1, "a, b, c");
-Table right = tableEnv.fromDataSet(ds2, "a, b, c");
-Table result = left.minusAll(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Distinct</strong></td>
-      <td>
-        <p>Similar to a SQL DISTINCT clause. Returns records with distinct value combinations.</p>
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.distinct();
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Order By</strong></td>
-      <td>
-        <p>Similar to a SQL ORDER BY clause. Returns records globally sorted across all parallel partitions.</p>
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.orderBy("a.asc");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Limit</strong></td>
-      <td>
-        <p>Similar to a SQL LIMIT clause. Limits a sorted result to a specified number of records from an offset position. Limit is technically part of the Order By operator and thus must be preceded by it.</p>
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.orderBy("a.asc").limit(3); // returns unlimited number of records beginning with the 4th record 
-{% endhighlight %}
-or
-{% highlight java %}
-Table in = tableEnv.fromDataSet(ds, "a, b, c");
-Table result = in.orderBy("a.asc").limit(3, 5); // returns 5 records beginning with the 4th record 
-{% endhighlight %}
-      </td>
-    </tr>
-
-  </tbody>
-</table>
-
-</div>
-<div data-lang="scala" markdown="1">
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Operators</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>Select</strong></td>
-      <td>
-        <p>Similar to a SQL SELECT statement. Performs a select operation.</p>
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c);
-val result = in.select('a, 'c as 'd);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>As</strong></td>
-      <td>
-        <p>Renames fields.</p>
-{% highlight scala %}
-val in = ds.toTable(tableEnv).as('a, 'b, 'c);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Where / Filter</strong></td>
-      <td>
-        <p>Similar to a SQL WHERE clause. Filters out rows that do not pass the filter predicate.</p>
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c);
-val result = in.filter('a % 2 === 0)
-{% endhighlight %}
-or
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c);
-val result = in.where('b === "red");
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>GroupBy</strong></td>
-      <td>
-        <p>Similar to a SQL GROUP BY clause. Groups rows on the grouping keys, so that a subsequent aggregation
-        operator can aggregate rows group-wise.</p>
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c);
-val result = in.groupBy('a).select('a, 'b.sum as 'd);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Join</strong></td>
-      <td>
-        <p>Similar to a SQL JOIN clause. Joins two tables. Both tables must have distinct field names and an equality join predicate must be defined using a where or filter operator.</p>
-{% highlight scala %}
-val left = ds1.toTable(tableEnv, 'a, 'b, 'c);
-val right = ds2.toTable(tableEnv, 'd, 'e, 'f);
-val result = left.join(right).where('a === 'd).select('a, 'b, 'e);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>LeftOuterJoin</strong></td>
-      <td>
-        <p>Similar to a SQL LEFT OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
-{% highlight scala %}
-val left = tableEnv.fromDataSet(ds1, 'a, 'b, 'c)
-val right = tableEnv.fromDataSet(ds2, 'd, 'e, 'f)
-val result = left.leftOuterJoin(right, 'a === 'd).select('a, 'b, 'e)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>RightOuterJoin</strong></td>
-      <td>
-        <p>Similar to a SQL RIGHT OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
-{% highlight scala %}
-val left = tableEnv.fromDataSet(ds1, 'a, 'b, 'c)
-val right = tableEnv.fromDataSet(ds2, 'd, 'e, 'f)
-val result = left.rightOuterJoin(right, 'a === 'd).select('a, 'b, 'e)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>FullOuterJoin</strong></td>
-      <td>
-        <p>Similar to a SQL FULL OUTER JOIN clause. Joins two tables. Both tables must have distinct field names and at least one equality join predicate must be defined.</p>
-{% highlight scala %}
-val left = tableEnv.fromDataSet(ds1, 'a, 'b, 'c)
-val right = tableEnv.fromDataSet(ds2, 'd, 'e, 'f)
-val result = left.fullOuterJoin(right, 'a === 'd).select('a, 'b, 'e)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Union</strong></td>
-      <td>
-        <p>Similar to a SQL UNION clause. Unions two tables with duplicate records removed. Both tables must have identical field types.</p>
-{% highlight scala %}
-val left = ds1.toTable(tableEnv, 'a, 'b, 'c);
-val right = ds2.toTable(tableEnv, 'a, 'b, 'c);
-val result = left.union(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>UnionAll</strong></td>
-      <td>
-        <p>Similar to a SQL UNION ALL clause. Unions two tables. Both tables must have identical field types.</p>
-{% highlight scala %}
-val left = ds1.toTable(tableEnv, 'a, 'b, 'c);
-val right = ds2.toTable(tableEnv, 'a, 'b, 'c);
-val result = left.unionAll(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Intersect</strong></td>
-      <td>
-        <p>Similar to a SQL INTERSECT clause. Intersect returns records that exist in both tables. If a record is present in one or both tables more than once, it is returned just once, i.e., the resulting table has no duplicate records. Both tables must have identical field types.</p>
-{% highlight scala %}
-val left = ds1.toTable(tableEnv, 'a, 'b, 'c);
-val right = ds2.toTable(tableEnv, 'e, 'f, 'g);
-val result = left.intersect(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>IntersectAll</strong></td>
-      <td>
-        <p>Similar to a SQL INTERSECT ALL clause. IntersectAll returns records that exist in both tables. If a record is present in both tables more than once, it is returned as many times as it is present in both tables, i.e., the resulting table might have duplicate records. Both tables must have identical field types.</p>
-{% highlight scala %}
-val left = ds1.toTable(tableEnv, 'a, 'b, 'c);
-val right = ds2.toTable(tableEnv, 'e, 'f, 'g);
-val result = left.intersectAll(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Minus</strong></td>
-      <td>
-        <p>Similar to a SQL EXCEPT clause. Minus returns records from the left table that do not exist in the right table. Duplicate records in the left table are returned exactly once, i.e., duplicates are removed. Both tables must have identical field types.</p>
-{% highlight scala %}
-val left = ds1.toTable(tableEnv, 'a, 'b, 'c);
-val right = ds2.toTable(tableEnv, 'a, 'b, 'c);
-val result = left.minus(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>MinusAll</strong></td>
-      <td>
-        <p>Similar to a SQL EXCEPT ALL clause. MinusAll returns the records from the left table that do not exist in the right table. A record that is present n times in the left table and m times in the right table is returned (n - m) times, i.e., as many duplicates as are present in the right table are removed. Both tables must have identical field types.</p>
-{% highlight scala %}
-val left = ds1.toTable(tableEnv, 'a, 'b, 'c);
-val right = ds2.toTable(tableEnv, 'a, 'b, 'c);
-val result = left.minusAll(right);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Distinct</strong></td>
-      <td>
-        <p>Similar to a SQL DISTINCT clause. Returns records with distinct value combinations.</p>
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c);
-val result = in.distinct();
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Order By</strong></td>
-      <td>
-        <p>Similar to a SQL ORDER BY clause. Returns records globally sorted across all parallel partitions.</p>
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c);
-val result = in.orderBy('a.asc);
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Limit</strong></td>
-      <td>
-        <p>Similar to a SQL LIMIT clause. Limits a sorted result to a specified number of records from an offset position. Limit is technically part of the Order By operator and thus must be preceded by it.</p>
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c);
-val result = in.orderBy('a.asc).limit(3); // returns an unlimited number of records beginning with the 4th record
-{% endhighlight %}
-or
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c);
-val result = in.orderBy('a.asc).limit(3, 5); // returns 5 records beginning with the 4th record 
-{% endhighlight %}
-      </td>
-    </tr>
-
-  </tbody>
-</table>
-</div>
-</div>
-
-{% top %}
-
-### Expression Syntax
-Some of the operators in previous sections expect one or more expressions. Expressions can be specified using an embedded Scala DSL or as Strings. Please refer to the examples above to learn how expressions can be specified.
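-
-For example, the following two snippets express the same projection, once with the embedded Scala DSL and once with a String expression (a minimal sketch, assuming a table with fields `a`, `b`, `c`):
-
-{% highlight scala %}
-val in = ds.toTable(tableEnv, 'a, 'b, 'c)
-// embedded Scala DSL
-val result1 = in.select('a, 'c as 'd)
-// equivalent String-based expression
-val result2 = in.select("a, c as d")
-{% endhighlight %}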
-
-This is the EBNF grammar for expressions:
-
-{% highlight ebnf %}
-
-expressionList = expression , { "," , expression } ;
-
-expression = alias ;
-
-alias = logic | ( logic , "AS" , fieldReference ) ;
-
-logic = comparison , [ ( "&&" | "||" ) , comparison ] ;
-
-comparison = term , [ ( "=" | "===" | "!=" | "!==" | ">" | ">=" | "<" | "<=" ) , term ] ;
-
-term = product , [ ( "+" | "-" ) , product ] ;
-
-product = unary , [ ( "*" | "/" | "%") , unary ] ;
-
-unary = [ "!" | "-" ] , composite ;
-
-composite = suffixed | atom ;
-
-suffixed = interval | cast | as | aggregation | nullCheck | if | functionCall ;
-
-interval = composite , "." , ("year" | "month" | "day" | "hour" | "minute" | "second" | "milli") ;
-
-cast = composite , ".cast(" , dataType , ")" ;
-
-dataType = "BYTE" | "SHORT" | "INT" | "LONG" | "FLOAT" | "DOUBLE" | "BOOLEAN" | "STRING" | "DECIMAL" | "DATE" | "TIME" | "TIMESTAMP" | "INTERVAL_MONTHS" | "INTERVAL_MILLIS" ;
-
-as = composite , ".as(" , fieldReference , ")" ;
-
-aggregation = composite , ( ".sum" | ".min" | ".max" | ".count" | ".avg" ) , [ "()" ] ;
-
-nullCheck = composite , ( ".isNull" | ".isNotNull" ) , [ "()" ] ;
-
-if = composite , ".?(" , expression , "," , expression , ")" ;
-
-functionCall = composite , "." , functionIdentifier , "(" , [ expression , { "," , expression } ] , ")" ;
-
-atom = ( "(" , expression , ")" ) | literal | nullLiteral | fieldReference ;
-
-nullLiteral = "Null(" , dataType , ")" ;
-
-timeIntervalUnit = "YEAR" | "YEAR_TO_MONTH" | "MONTH" | "DAY" | "DAY_TO_HOUR" | "DAY_TO_MINUTE" | "DAY_TO_SECOND" | "HOUR" | "HOUR_TO_MINUTE" | "HOUR_TO_SECOND" | "MINUTE" | "MINUTE_TO_SECOND" | "SECOND" ;
-
-timePointUnit = "YEAR" | "MONTH" | "DAY" | "HOUR" | "MINUTE" | "SECOND" | "QUARTER" | "WEEK" | "MILLISECOND" | "MICROSECOND" ;
-
-{% endhighlight %}
-
-Here, `literal` is a valid Java literal, `fieldReference` specifies a column in the data, and `functionIdentifier` specifies a supported scalar function. The
-column names and function names follow Java identifier syntax. Expressions specified as Strings can also use prefix notation instead of suffix notation to call operators and functions.
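-
-For example, the following two String expressions are intended to be equivalent (a sketch, assuming a table `in` with a numeric field `a`):
-
-{% highlight scala %}
-// suffix notation
-val result1 = in.select("a.abs")
-// prefix notation
-val result2 = in.select("abs(a)")
-{% endhighlight %}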
-
-If working with exact numeric values or large decimals is required, the Table API also supports Java's BigDecimal type. In the Scala Table API decimals can be defined by `BigDecimal("123456")` and in Java by appending a "p" for precise, e.g. `123456p`.
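-
-A minimal sketch of both variants, assuming a table `in` with a numeric field `a`:
-
-{% highlight scala %}
-// Scala Table API: exact decimal literal
-val result1 = in.select('a + BigDecimal("123456"))
-// String expression: "p" marks a precise (BigDecimal) literal
-val result2 = in.select("a + 123456p")
-{% endhighlight %}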
-
-In order to work with temporal values the Table API supports Java SQL's Date, Time, and Timestamp types. In the Scala Table API literals can be defined by using `java.sql.Date.valueOf("2016-06-27")`, `java.sql.Time.valueOf("10:10:42")`, or `java.sql.Timestamp.valueOf("2016-06-27 10:10:42.123")`. The Java and Scala Table API also support calling `"2016-06-27".toDate()`, `"10:10:42".toTime()`, and `"2016-06-27 10:10:42.123".toTimestamp()` for converting Strings into temporal types. *Note:* Since Java's temporal SQL types are time zone dependent, please make sure that the Flink Client and all TaskManagers use the same time zone.
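-
-For example, both ways of defining a date literal in a predicate (a sketch, assuming a table `in` with a field `'date` of type `Types.DATE`):
-
-{% highlight scala %}
-// temporal literal via java.sql.Date
-val result1 = in.filter('date === java.sql.Date.valueOf("2016-06-27"))
-// temporal literal via String conversion
-val result2 = in.filter('date === "2016-06-27".toDate)
-{% endhighlight %}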
-
-Temporal intervals can be represented as a number of months (`Types.INTERVAL_MONTHS`) or a number of milliseconds (`Types.INTERVAL_MILLIS`). Intervals of the same type can be added or subtracted (e.g. `2.hour + 10.minutes`). Intervals of milliseconds can be added to time points (e.g. `"2016-08-10".toDate + 5.day`).
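-
-A sketch of interval arithmetic in the Scala Table API (assuming a table `in` with a field `'ts` of type `Types.TIMESTAMP`):
-
-{% highlight scala %}
-// adding intervals of the same type
-val result1 = in.select('ts + (2.hour + 10.minutes))
-// adding a millisecond interval to a time point
-val result2 = in.select("2016-08-10".toDate + 5.day)
-{% endhighlight %}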
-
-{% top %}
-
-
-SQL
-----
-SQL queries are specified using the `sql()` method of the `TableEnvironment`. The method returns the result of the SQL query as a `Table` which can be converted into a `DataSet` or `DataStream`, used in subsequent Table API queries, or written to a `TableSink` (see [Writing Tables to External Sinks](#writing-tables-to-external-sinks)). SQL and Table API queries can be seamlessly mixed and are holistically optimized and translated into a single DataStream or DataSet program.
-
-A `Table`, `DataSet`, `DataStream`, or external `TableSource` must be registered in the `TableEnvironment` in order to be accessible by a SQL query (see [Registering Tables](#registering-tables)).
-
-*Note: Flink's SQL support is not yet feature complete. Queries that include unsupported SQL features will cause a `TableException`. The limitations of SQL on batch and streaming tables are listed in the following sections.*
-
-### SQL on Batch Tables
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-// read a DataSet from an external source
-DataSet<Tuple3<Long, String, Integer>> ds = env.readCsvFile(...);
-// register the DataSet as table "Orders"
-tableEnv.registerDataSet("Orders", ds, "user, product, amount");
-// run a SQL query on the Table and retrieve the result as a new Table
-Table result = tableEnv.sql(
-  "SELECT SUM(amount) FROM Orders WHERE product LIKE '%Rubber%'");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-
-// read a DataSet from an external source
-val ds: DataSet[(Long, String, Integer)] = env.readCsvFile(...)
-// register the DataSet under the name "Orders"
-tableEnv.registerDataSet("Orders", ds, 'user, 'product, 'amount)
-// run a SQL query on the Table and retrieve the result as a new Table
-val result = tableEnv.sql(
-  "SELECT SUM(amount) FROM Orders WHERE product LIKE '%Rubber%'")
-{% endhighlight %}
-</div>
-</div>
-
-#### Limitations
-
-The current version supports selection (filter), projection, inner equi-joins, grouping, non-distinct aggregates, and sorting on batch tables.
-
-Among others, the following SQL features are not supported yet:
-
-- Timestamps and intervals are limited to milliseconds precision
-- Interval arithmetic is currently limited
-- Distinct aggregates (e.g., `COUNT(DISTINCT name)`)
-- Non-equi joins and Cartesian products
-- Grouping sets
-
-*Note: Tables are joined in the order in which they are specified in the `FROM` clause. In some cases the table order must be manually tweaked to resolve Cartesian products.*
-
-### SQL on Streaming Tables
-
-SQL queries can be executed on streaming Tables (Tables backed by `DataStream` or `StreamTableSource`) by using the `SELECT STREAM` keywords instead of `SELECT`. Please refer to [Apache Calcite's Streaming SQL documentation](https://calcite.apache.org/docs/stream.html) for more information on the Streaming SQL syntax.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-// ingest a DataStream from an external source
-DataStream<Tuple3<Long, String, Integer>> ds = env.addSource(...);
-// register the DataStream as table "Orders"
-tableEnv.registerDataStream("Orders", ds, "user, product, amount");
-// run a SQL query on the Table and retrieve the result as a new Table
-Table result = tableEnv.sql(
-  "SELECT STREAM product, amount FROM Orders WHERE product LIKE '%Rubber%'");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-
-// read a DataStream from an external source
-val ds: DataStream[(Long, String, Integer)] = env.addSource(...)
-// register the DataStream under the name "Orders"
-tableEnv.registerDataStream("Orders", ds, 'user, 'product, 'amount)
-// run a SQL query on the Table and retrieve the result as a new Table
-val result = tableEnv.sql(
-  "SELECT STREAM product, amount FROM Orders WHERE product LIKE '%Rubber%'")
-{% endhighlight %}
-</div>
-</div>
-
-#### Limitations
-
-The current version of streaming SQL only supports `SELECT`, `FROM`, `WHERE`, and `UNION` clauses. Aggregations or joins are not supported yet.
-
-{% top %}
-
-### SQL Syntax
-
-Flink uses [Apache Calcite](https://calcite.apache.org/docs/reference.html) for SQL parsing. Currently, Flink SQL only supports query-related SQL syntax and only a subset of the comprehensive SQL standard. The following BNF grammar describes the supported SQL features:
-
-```
-
-query:
-  values
-  | {
-      select
-      | selectWithoutFrom
-      | query UNION [ ALL ] query
-      | query EXCEPT query
-      | query INTERSECT query
-    }
-    [ ORDER BY orderItem [, orderItem ]* ]
-    [ LIMIT { count | ALL } ]
-    [ OFFSET start { ROW | ROWS } ]
-    [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY]
-
-orderItem:
-  expression [ ASC | DESC ]
-
-select:
-  SELECT [ STREAM ] [ ALL | DISTINCT ]
-  { * | projectItem [, projectItem ]* }
-  FROM tableExpression
-  [ WHERE booleanExpression ]
-  [ GROUP BY { groupItem [, groupItem ]* } ]
-  [ HAVING booleanExpression ]
-
-selectWithoutFrom:
-  SELECT [ ALL | DISTINCT ]
-  { * | projectItem [, projectItem ]* }
-
-projectItem:
-  expression [ [ AS ] columnAlias ]
-  | tableAlias . *
-
-tableExpression:
-  tableReference [, tableReference ]*
-  | tableExpression [ NATURAL ] [ LEFT | RIGHT | FULL ] JOIN tableExpression [ joinCondition ]
-
-joinCondition:
-  ON booleanExpression
-  | USING '(' column [, column ]* ')'
-
-tableReference:
-  tablePrimary
-  [ [ AS ] alias [ '(' columnAlias [, columnAlias ]* ')' ] ]
-
-tablePrimary:
-  [ TABLE ] [ [ catalogName . ] schemaName . ] tableName
-
-values:
-  VALUES expression [, expression ]*
-
-groupItem:
-  expression
-  | '(' ')'
-  | '(' expression [, expression ]* ')'
-
-```
-
-
-{% top %}
-
-### Reserved Keywords
-
-Although not every SQL feature is implemented yet, some string combinations are already reserved as keywords for future use. If you want to use one of the following strings as a field name, make sure to surround them with backticks (e.g. `` `value` ``, `` `count` ``).
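-
-For example (a sketch, assuming a registered table "MyTable" with a field named "count"):
-
-{% highlight scala %}
-val result = tableEnv.sql("SELECT `count` FROM MyTable")
-{% endhighlight %}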
-
-{% highlight sql %}
-
-A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICTS, CHARACTERS, CHARACTER_LENGTH, CHARACTER_SET_CATALOG, CHARACTER_SET_NAME, CHARACTER_SET_SCHEMA, CHAR_LENGTH, CHECK, CLASS_ORIGIN, CLOB, CLOSE, COALESCE, COBOL, COLLATE, COLLATION, COLLATION_CATALOG, COLLATION_NAME, COLLATION_SCHEMA, COLLECT, COLUMN, COLUMN_NAME, COMMAND_FUNCTION, COMMAND_FUNCTION_CODE, COMMIT, COMMITTED, CONDITION, CONDITION_NUMBER, CONNECT, CONNECTION, CONNECTION_NAME, CONSTRAINT, CONSTRAINTS, CONSTRAINT_CATALOG, CONSTRAINT_NAME, CONSTRAINT_SCHEMA, CONSTRUCTOR, CONTAINS, CONTINUE, CONVERT, CORR, CORRESPONDING, COUNT,
-COVAR_POP, COVAR_SAMP, CREATE, CROSS, CUBE, CUME_DIST, CURRENT, CURRENT_CATALOG, CURRENT_DATE, CURRENT_DEFAULT_TRANSFORM_GROUP, CURRENT_PATH, CURRENT_ROLE, CURRENT_SCHEMA, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_TRANSFORM_GROUP_FOR_TYPE, CURRENT_USER, CURSOR, CURSOR_NAME, CYCLE, DATA, DATABASE, DATE, DATETIME_INTERVAL_CODE, DATETIME_INTERVAL_PRECISION, DAY, DEALLOCATE, DEC, DECADE, DECIMAL, DECLARE, DEFAULT, DEFAULTS, DEFERRABLE, DEFERRED, DEFINED, DEFINER, DEGREE, DELETE, DENSE_RANK, DEPTH, DEREF, DERIVED, DESC, DESCRIBE, DESCRIPTION, DESCRIPTOR, DETERMINISTIC, DIAGNOSTICS, DISALLOW, DISCONNECT, DISPATCH, DISTINCT, DOMAIN, DOUBLE, DOW, DOY, DROP, DYNAMIC, DYNAMIC_FUNCTION, DYNAMIC_FUNCTION_CODE, EACH, ELEMENT, ELSE, END, END-EXEC, EPOCH, EQUALS, ESCAPE, EVERY, EXCEPT, EXCEPTION, EXCLUDE, EXCLUDING, EXEC, EXECUTE, EXISTS, EXP, EXPLAIN, EXTEND, EXTERNAL, EXTRACT, FALSE, FETCH, FILTER, FINAL, FIRST, FIRST_VALUE, FLOAT, FLOOR, FOLLOWING, FOR, FOREIGN, FORTRAN, FOUND, FRAC_SECOND, FREE,
-FROM, FULL, FUNCTION, FUSION, G, GENERAL, GENERATED, GET, GLOBAL, GO, GOTO, GRANT, GRANTED, GROUP, GROUPING, HAVING, HIERARCHY, HOLD, HOUR, IDENTITY, IMMEDIATE, IMPLEMENTATION, IMPORT, IN, INCLUDING, INCREMENT, INDICATOR, INITIALLY, INNER, INOUT, INPUT, INSENSITIVE, INSERT, INSTANCE, INSTANTIABLE, INT, INTEGER, INTERSECT, INTERSECTION, INTERVAL, INTO, INVOKER, IS, ISOLATION, JAVA, JOIN, K, KEY, KEY_MEMBER, KEY_TYPE, LABEL, LANGUAGE, LARGE, LAST, LAST_VALUE, LATERAL, LEADING, LEFT, LENGTH, LEVEL, LIBRARY, LIKE, LIMIT, LN, LOCAL, LOCALTIME, LOCALTIMESTAMP, LOCATOR, LOWER, M, MAP, MATCH, MATCHED, MAX, MAXVALUE, MEMBER, MERGE, MESSAGE_LENGTH, MESSAGE_OCTET_LENGTH, MESSAGE_TEXT, METHOD, MICROSECOND, MILLENNIUM, MIN, MINUTE, MINVALUE, MOD, MODIFIES, MODULE, MONTH, MORE, MULTISET, MUMPS, NAME, NAMES, NATIONAL, NATURAL, NCHAR, NCLOB, NESTING, NEW, NEXT, NO, NONE, NORMALIZE, NORMALIZED, NOT, NULL, NULLABLE, NULLIF, NULLS, NUMBER, NUMERIC, OBJECT, OCTETS, OCTET_LENGTH, OF, OFFSET, OLD, ON,
-ONLY, OPEN, OPTION, OPTIONS, OR, ORDER, ORDERING, ORDINALITY, OTHERS, OUT, OUTER, OUTPUT, OVER, OVERLAPS, OVERLAY, OVERRIDING, PAD, PARAMETER, PARAMETER_MODE, PARAMETER_NAME, PARAMETER_ORDINAL_POSITION, PARAMETER_SPECIFIC_CATALOG, PARAMETER_SPECIFIC_NAME, PARAMETER_SPECIFIC_SCHEMA, PARTIAL, PARTITION, PASCAL, PASSTHROUGH, PATH, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK, PLACING, PLAN, PLI, POSITION, POWER, PRECEDING, PRECISION, PREPARE, PRESERVE, PRIMARY, PRIOR, PRIVILEGES, PROCEDURE, PUBLIC, QUARTER, RANGE, RANK, READ, READS, REAL, RECURSIVE, REF, REFERENCES, REFERENCING, REGR_AVGX, REGR_AVGY, REGR_COUNT, REGR_INTERCEPT, REGR_R2, REGR_SLOPE, REGR_SXX, REGR_SXY, REGR_SYY, RELATIVE, RELEASE, REPEATABLE, RESET, RESTART, RESTRICT, RESULT, RETURN, RETURNED_CARDINALITY, RETURNED_LENGTH, RETURNED_OCTET_LENGTH, RETURNED_SQLSTATE, RETURNS, REVOKE, RIGHT, ROLE, ROLLBACK, ROLLUP, ROUTINE, ROUTINE_CATALOG, ROUTINE_NAME, ROUTINE_SCHEMA, ROW, ROWS, ROW_COUNT, ROW_NUMBER, SAVEPOINT, SCALE,
-SCHEMA, SCHEMA_NAME, SCOPE, SCOPE_CATALOGS, SCOPE_NAME, SCOPE_SCHEMA, SCROLL, SEARCH, SECOND, SECTION, SECURITY, SELECT, SELF, SENSITIVE, SEQUENCE, SERIALIZABLE, SERVER, SERVER_NAME, SESSION, SESSION_USER, SET, SETS, SIMILAR, SIMPLE, SIZE, SMALLINT, SOME, SOURCE, SPACE, SPECIFIC, SPECIFICTYPE, SPECIFIC_NAME, SQL, SQLEXCEPTION, SQLSTATE, SQLWARNING, SQL_TSI_DAY, SQL_TSI_FRAC_SECOND, SQL_TSI_HOUR, SQL_TSI_MICROSECOND, SQL_TSI_MINUTE, SQL_TSI_MONTH, SQL_TSI_QUARTER, SQL_TSI_SECOND, SQL_TSI_WEEK, SQL_TSI_YEAR, SQRT, START, STATE, STATEMENT, STATIC, STDDEV_POP, STDDEV_SAMP, STREAM, STRUCTURE, STYLE, SUBCLASS_ORIGIN, SUBMULTISET, SUBSTITUTE, SUBSTRING, SUM, SYMMETRIC, SYSTEM, SYSTEM_USER, TABLE, TABLESAMPLE, TABLE_NAME, TEMPORARY, THEN, TIES, TIME, TIMESTAMP, TIMESTAMPADD, TIMESTAMPDIFF, TIMEZONE_HOUR, TIMEZONE_MINUTE, TINYINT, TO, TOP_LEVEL_COUNT, TRAILING, TRANSACTION, TRANSACTIONS_ACTIVE, TRANSACTIONS_COMMITTED, TRANSACTIONS_ROLLED_BACK, TRANSFORM, TRANSFORMS, TRANSLATE, TRANSLATION,
-TREAT, TRIGGER, TRIGGER_CATALOG, TRIGGER_NAME, TRIGGER_SCHEMA, TRIM, TRUE, TYPE, UESCAPE, UNBOUNDED, UNCOMMITTED, UNDER, UNION, UNIQUE, UNKNOWN, UNNAMED, UNNEST, UPDATE, UPPER, UPSERT, USAGE, USER, USER_DEFINED_TYPE_CATALOG, USER_DEFINED_TYPE_CODE, USER_DEFINED_TYPE_NAME, USER_DEFINED_TYPE_SCHEMA, USING, VALUE, VALUES, VARBINARY, VARCHAR, VARYING, VAR_POP, VAR_SAMP, VERSION, VIEW, WEEK, WHEN, WHENEVER, WHERE, WIDTH_BUCKET, WINDOW, WITH, WITHIN, WITHOUT, WORK, WRAPPER, WRITE, XML, YEAR, ZONE
-
-{% endhighlight %}
-
-{% top %}
-
-Data Types
-----------
-
-The Table API is built on top of Flink's DataSet and DataStream API. Internally, it also uses Flink's `TypeInformation` to distinguish between types. The Table API does not yet support all Flink types. All supported simple types are listed in `org.apache.flink.api.table.Types`. The following table summarizes the relation between Table API types, SQL types, and the resulting Java class.
-
-| Table API              | SQL                         | Java type              |
-| :--------------------- | :-------------------------- | :--------------------- |
-| `Types.STRING`         | `VARCHAR`                   | `java.lang.String`     |
-| `Types.BOOLEAN`        | `BOOLEAN`                   | `java.lang.Boolean`    |
-| `Types.BYTE`           | `TINYINT`                   | `java.lang.Byte`       |
-| `Types.SHORT`          | `SMALLINT`                  | `java.lang.Short`      |
-| `Types.INT`            | `INTEGER, INT`              | `java.lang.Integer`    |
-| `Types.LONG`           | `BIGINT`                    | `java.lang.Long`       |
-| `Types.FLOAT`          | `REAL, FLOAT`               | `java.lang.Float`      |
-| `Types.DOUBLE`         | `DOUBLE`                    | `java.lang.Double`     |
-| `Types.DECIMAL`        | `DECIMAL`                   | `java.math.BigDecimal` |
-| `Types.DATE`           | `DATE`                      | `java.sql.Date`        |
-| `Types.TIME`           | `TIME`                      | `java.sql.Time`        |
-| `Types.TIMESTAMP`      | `TIMESTAMP(3)`              | `java.sql.Timestamp`   |
-| `Types.INTERVAL_MONTHS`| `INTERVAL YEAR TO MONTH`    | `java.lang.Integer`    |
-| `Types.INTERVAL_MILLIS`| `INTERVAL DAY TO SECOND(3)` | `java.lang.Long`       |
-
-Advanced types such as generic types, composite types (e.g. POJOs or Tuples), and arrays can be fields of a row but cannot be accessed yet. They are treated as a black box within the Table API and SQL.
-
-{% top %}
-
-Scalar Functions
-----------------
-
-Both the Table API and SQL come with a set of built-in scalar functions for data transformations. This section gives a brief overview of the currently available scalar functions.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-<br/>
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 40%">Function</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td>
-        {% highlight java %}
-NUMERIC.exp()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates Euler's number raised to the given power.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-NUMERIC.log10()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the base 10 logarithm of the given value.</p>
-      </td>
-    </tr>
-
-
-    <tr>
-      <td>
-        {% highlight java %}
-NUMERIC.ln()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the natural logarithm of the given value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-NUMERIC.power(NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the given number raised to the power of the other value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-NUMERIC.abs()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the absolute value of the given value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-NUMERIC.floor()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the largest integer less than or equal to a given number.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-NUMERIC.ceil()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the smallest integer greater than or equal to a given number.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.substring(INT, INT)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Creates a substring of the given string at the given index for the given length. The index starts at 1 and is inclusive, i.e., the character at the index is included in the substring. The substring has the specified length or less.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.substring(INT)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Creates a substring of the given string beginning at the given index to the end. The start index starts at 1 and is inclusive.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.trim(LEADING, STRING)
-STRING.trim(TRAILING, STRING)
-STRING.trim(BOTH, STRING)
-STRING.trim(BOTH)
-STRING.trim()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Removes leading and/or trailing characters from the given string. By default, whitespace on both sides is removed.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.charLength()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns the length of a String.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.upperCase()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns all of the characters in a string in upper case using the rules of the default locale.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.lowerCase()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns all of the characters in a string in lower case using the rules of the default locale.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.initCap()
-{% endhighlight %}
-      </td>
-
-      <td>
-        <p>Converts the initial letter of each word in a string to uppercase. Assumes a string containing only [A-Za-z0-9]; everything else is treated as whitespace.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.like(STRING)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns true if a string matches the specified LIKE pattern. E.g. "Jo_n%" matches all strings that start with "Jo(arbitrary letter)n".</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.similar(STRING)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns true if a string matches the specified SQL regex pattern. E.g. "A+" matches all strings that consist of at least one "A".</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.toDate()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a date string in the form "yyyy-mm-dd" to a SQL date.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.toTime()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a time string in the form "hh:mm:ss" to a SQL time.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-STRING.toTimestamp()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a timestamp string in the form "yyyy-mm-dd hh:mm:ss.fff" to a SQL timestamp.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight java %}
-TEMPORAL.extract(TIMEINTERVALUNIT)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Extracts parts of a time point or time interval. Returns the part as a long value. E.g. <code>"2006-06-05".toDate.extract(DAY)</code> leads to 5.</p>
-      </td>
-    </tr>
-
-  </tbody>
-</table>
-
-</div>
-<div data-lang="scala" markdown="1">
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 40%">Function</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td>
-        {% highlight scala %}
-NUMERIC.exp()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates Euler's number raised to the given power.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-NUMERIC.log10()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the base 10 logarithm of the given value.</p>
-      </td>
-    </tr>
-
-
-    <tr>
-      <td>
-        {% highlight scala %}
-NUMERIC.ln()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the natural logarithm of the given value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-NUMERIC.power(NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the given number raised to the power of the other value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-NUMERIC.abs()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the absolute value of the given value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-NUMERIC.floor()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the largest integer less than or equal to a given number.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-NUMERIC.ceil()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the smallest integer greater than or equal to a given number.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.substring(INT, INT)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Creates a substring of the given string at the given index for the given length. The index starts at 1 and is inclusive, i.e., the character at the index is included in the substring. The substring has the specified length or less.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.substring(INT)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Creates a substring of the given string beginning at the given index to the end. The start index starts at 1 and is inclusive.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.trim(
-  leading = true,
-  trailing = true,
-  character = " ")
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Removes leading and/or trailing characters from the given string.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.charLength()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns the length of a String.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.upperCase()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns all of the characters in a string in upper case using the rules of the default locale.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.lowerCase()
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns all of the characters in a string in lower case using the rules of the default locale.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.initCap()
-{% endhighlight %}
-      </td>
-
-      <td>
-        <p>Converts the initial letter of each word in a string to uppercase. Assumes a string containing only [A-Za-z0-9]; everything else is treated as whitespace.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.like(STRING)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns true if a string matches the specified LIKE pattern. E.g. "Jo_n%" matches all strings that start with "Jo(arbitrary letter)n".</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.similar(STRING)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns true if a string matches the specified SQL regex pattern. E.g. "A+" matches all strings that consist of at least one "A".</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.toDate
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a date string in the form "yyyy-mm-dd" to a SQL date.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.toTime
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a time string in the form "hh:mm:ss" to a SQL time.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-STRING.toTimestamp
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a timestamp string in the form "yyyy-mm-dd hh:mm:ss.fff" to a SQL timestamp.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight scala %}
-TEMPORAL.extract(TimeIntervalUnit)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Extracts parts of a time point or time interval. Returns the part as a long value. E.g. <code>"2006-06-05".toDate.extract(TimeIntervalUnit.DAY)</code> leads to 5.</p>
-      </td>
-    </tr>
-
-  </tbody>
-</table>
-</div>
-
-<div data-lang="SQL" markdown="1">
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 40%">Function</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td>
-        {% highlight sql %}
-EXP(NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates Euler's number raised to the given power.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-LOG10(NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the base 10 logarithm of the given value.</p>
-      </td>
-    </tr>
-
-
-    <tr>
-      <td>
-        {% highlight sql %}
-LN(NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the natural logarithm of the given value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-POWER(NUMERIC, NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the given number raised to the power of the other value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-ABS(NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the absolute value of the given value.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-FLOOR(NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the largest integer less than or equal to a given number.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-CEIL(NUMERIC)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Calculates the smallest integer greater than or equal to a given number.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-SUBSTRING(VARCHAR, INT, INT)
-SUBSTRING(VARCHAR FROM INT FOR INT)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Creates a substring of the given string at the given index for the given length. The index starts at 1 and is inclusive, i.e., the character at the index is included in the substring. The substring has the specified length or less.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-SUBSTRING(VARCHAR, INT)
-SUBSTRING(VARCHAR FROM INT)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Creates a substring of the given string beginning at the given index to the end. The start index starts at 1 and is inclusive.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-TRIM(LEADING VARCHAR FROM VARCHAR)
-TRIM(TRAILING VARCHAR FROM VARCHAR)
-TRIM(BOTH VARCHAR FROM VARCHAR)
-TRIM(VARCHAR)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Removes leading and/or trailing characters from the given string. By default, whitespace on both sides is removed.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-CHAR_LENGTH(VARCHAR)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns the length of a String.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-UPPER(VARCHAR)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns all of the characters in a string in upper case using the rules of the default locale.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-LOWER(VARCHAR)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns all of the characters in a string in lower case using the rules of the default locale.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-INITCAP(VARCHAR)
-{% endhighlight %}
-      </td>
-
-      <td>
-        <p>Converts the initial letter of each word in a string to uppercase. Assumes a string containing only [A-Za-z0-9]; everything else is treated as whitespace.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-VARCHAR LIKE VARCHAR
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns true if a string matches the specified LIKE pattern. E.g. "Jo_n%" matches all strings that start with "Jo(arbitrary letter)n".</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-VARCHAR SIMILAR TO VARCHAR
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Returns true if a string matches the specified SQL regex pattern. E.g. "A+" matches all strings that consist of at least one "A".</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-DATE VARCHAR
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a date string in the form "yyyy-mm-dd" to a SQL date.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-TIME VARCHAR
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a time string in the form "hh:mm:ss" to a SQL time.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-TIMESTAMP VARCHAR
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Parses a timestamp string in the form "yyyy-mm-dd hh:mm:ss.fff" to a SQL timestamp.</p>
-      </td>
-    </tr>
-
-    <tr>
-      <td>
-        {% highlight sql %}
-EXTRACT(TIMEINTERVALUNIT FROM TEMPORAL)
-{% endhighlight %}
-      </td>
-      <td>
-        <p>Extracts parts of a time point or time interval. Returns the part as a long value. E.g. <code>EXTRACT(DAY FROM DATE '2006-06-05')</code> leads to 5.</p>
-      </td>
-    </tr>
-
-  </tbody>
-</table>
-</div>
-</div>
-
-### User-defined Scalar Functions
-
-If a required scalar function is not contained in the built-in functions, it is possible to define custom, user-defined scalar functions for both the Table API and SQL. A user-defined scalar function maps zero, one, or multiple scalar values to a new scalar value.
-
-In order to define a scalar function one has to extend the base class `ScalarFunction` in `org.apache.flink.api.table.functions` and implement (one or more) evaluation methods. The behavior of a scalar function is determined by its evaluation method. An evaluation method must be declared public and named `eval`. The parameter types and return type of the evaluation method also determine the parameter and return types of the scalar function. Evaluation methods can also be overloaded by implementing multiple methods named `eval`.
-
-The following example snippet shows how to define your own hash code function:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-public static class HashCode extends ScalarFunction {
-  public int eval(String s) {
-    return s.hashCode() * 12;
-  }
-}
-
-BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-// register the function
-tableEnv.registerFunction("hashCode", new HashCode())
-
-// use the function in Java Table API
-myTable.select("string, string.hashCode(), hashCode(string)");
-
-// use the function in SQL API
-tableEnv.sql("SELECT string, HASHCODE(string) FROM MyTable");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// must be defined in static/object context
-object hashCode extends ScalarFunction {
-  def eval(s: String): Int = {
-    s.hashCode() * 12
-  }
-}
-
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-
-// use the function in Scala Table API
-myTable.select('string, hashCode('string))
-
-// register and use the function in SQL
-tableEnv.registerFunction("hashCode", hashCode)
-tableEnv.sql("SELECT string, HASHCODE(string) FROM MyTable");
-{% endhighlight %}
-</div>
-</div>
-
-By default, the result type of an evaluation method is determined by Flink's type extraction facilities. This is sufficient for basic types or simple POJOs but might be wrong for more complex, custom, or composite types. In these cases, the `TypeInformation` of the result type can be manually defined by overriding `ScalarFunction#getResultType()`.
-
-Internally, the Table API and SQL code generation works with primitive values as much as possible. To avoid overhead from object creation and casting at runtime, it is recommended to declare parameters and result types as primitive types instead of their boxed classes. `Types.DATE` and `Types.TIME` can also be represented as `int`. `Types.TIMESTAMP` can be represented as `long`.
-
-The following advanced example takes the internal timestamp representation as input and also returns it as a long value. By overriding `ScalarFunction#getResultType()` we define that the returned long value should be interpreted as a `Types.TIMESTAMP` by the code generation.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-public static class TimestampModifier extends ScalarFunction {
-  public long eval(long t) {
-    return t % 1000;
-  }
-
-  public TypeInformation<?> getResultType(Class<?>[] signature) {
-    return Types.TIMESTAMP;
-  }
-}
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-object TimestampModifier extends ScalarFunction {
-  def eval(t: Long): Long = {
-    t % 1000
-  }
-
-  override def getResultType(signature: Array[Class[_]]): TypeInformation[_] = {
-    Types.TIMESTAMP
-  }
-}
-{% endhighlight %}
-</div>
-</div>
-
-
-
-{% top %}
-
-Writing Tables to External Sinks
---------------------------------
-
-A `Table` can be written to a `TableSink`, which is a generic interface to support a wide variety of file formats (e.g., CSV, Apache Parquet, Apache Avro), storage systems (e.g., JDBC, Apache HBase, Apache Cassandra, Elasticsearch), or messaging systems (e.g., Apache Kafka, RabbitMQ). A batch `Table` can only be written to a `BatchTableSink`, while a streaming `Table` requires a `StreamTableSink`. A `TableSink` can implement both interfaces at the same time.
-
-Currently, Flink only provides a `CsvTableSink` that writes a batch or streaming `Table` to CSV-formatted files. A custom `TableSink` can be defined by implementing the `BatchTableSink` and/or `StreamTableSink` interface.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
-
-// compute the result Table using Table API operators and/or SQL queries
-Table result = ...
-
-// create a TableSink
-TableSink sink = new CsvTableSink("/path/to/file", fieldDelim = "|");
-// write the result Table to the TableSink
-result.writeToSink(sink);
-
-// execute the program
-env.execute();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-
-// compute the result Table using Table API operators and/or SQL queries
-val result: Table = ...
-
-// create a TableSink
-val sink: TableSink = new CsvTableSink("/path/to/file", fieldDelim = "|")
-// write the result Table to the TableSink
-result.writeToSink(sink)
-
-// execute the program
-env.execute()
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-Runtime Configuration
-----
-The Table API provides a configuration (the so-called `TableConfig`) to modify runtime behavior. It can be accessed through the `TableEnvironment`.
-
-### Null Handling
-By default, the Table API supports `null` values. Null handling can be disabled to improve performance by setting the `nullCheck` property in the `TableConfig` to `false`.
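-
-For example (a minimal sketch; the setter is assumed to follow the usual getter/setter naming of the `TableConfig`):
-
-{% highlight scala %}
-val tableEnv = TableEnvironment.getTableEnvironment(env)
-// disable null checking if the data is known to be free of null values
-tableEnv.getConfig.setNullCheck(false)
-{% endhighlight %}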
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/concepts.md
----------------------------------------------------------------------
diff --git a/docs/concepts/concepts.md b/docs/concepts/concepts.md
deleted file mode 100644
index 1cbfd21..0000000
--- a/docs/concepts/concepts.md
+++ /dev/null
@@ -1,246 +0,0 @@
----
-title: "Concepts"
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Programs and Dataflows
-
-The basic building blocks of Flink programs are **streams** and **transformations** (note that a DataSet is internally
-also a stream). A *stream* is an intermediate result, and a *transformation* is an operation that takes one or more streams
-as input and computes one or more result streams from them.
-
-When executed, Flink programs are mapped to **streaming dataflows**, consisting of **streams** and transformation **operators**.
-Each dataflow starts with one or more **sources** and ends in one or more **sinks**. The dataflows may resemble
-arbitrary **directed acyclic graphs** *(DAGs)*. (Special forms of cycles are permitted via *iteration* constructs; we
-omit this here for simplicity.)
-
-In most cases, there is a one-to-one correspondence between the transformations in the programs and the operators
-in the dataflow. Sometimes, however, one transformation may consist of multiple transformation operators.
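-
-As a minimal sketch (the socket source and names are illustrative only), a program with one source, one transformation, and one sink:
-
-{% highlight scala %}
-import org.apache.flink.streaming.api.scala._
-
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-
-val lines = env.socketTextStream("localhost", 9999)  // source
-val upper = lines.map(_.toUpperCase)                 // transformation
-upper.print()                                        // sink
-
-env.execute()
-{% endhighlight %}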
-
-<img src="fig/program_dataflow.svg" alt="A DataStream program, and its dataflow." class="offset" width="80%" />
-
-{% top %}
-
-### Parallel Dataflows
-
-Programs in Flink are inherently parallel and distributed. *Streams* are split into **stream partitions** and
-*operators* are split into **operator subtasks**. The operator subtasks execute independently from each other,
-in different threads and on different machines or containers.
-
-The number of operator subtasks is the **parallelism** of that particular operator. The parallelism of a stream
-is always that of its producing operator. Different operators of the program may have a different parallelism.
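-
-A sketch of how the parallelism can be set, for the whole program or per operator:
-
-{% highlight scala %}
-import org.apache.flink.streaming.api.scala._
-
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-env.setParallelism(4)  // default parallelism for all operators
-
-env.fromElements("a", "b", "c")
-  .map(s => (s, 1)).setParallelism(2)  // this map runs with 2 subtasks
-{% endhighlight %}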
-
-<img src="fig/parallel_dataflow.svg" alt="A parallel dataflow" class="offset" width="80%" />
-
-Streams can transport data between two operators in a *one-to-one* (or *forwarding*) pattern, or in a *redistributing* pattern:
-
-  - **One-to-one** streams (for example between the *source* and the *map()* operators) preserve the partitioning and order of
-    elements. That means that subtask[1] of the *map()* operator will see the same elements in the same order as they
-    were produced by subtask[1] of the *source* operator.
-
-  - **Redistributing** streams (between *map()* and *keyBy/window*, as well as between *keyBy/window* and *sink*) change
-    the partitioning of streams. Each *operator subtask* sends data to different target subtasks,
-    depending on the selected transformation. Examples are *keyBy()* (re-partitions by hash code), *broadcast()*, or
-    *rebalance()* (random redistribution).
-    In a *redistributing* exchange, order among elements is only preserved for each pair of sending- and receiving
-    task (for example subtask[1] of *map()* and subtask[2] of *keyBy/window*).
-
-{% top %}
-
-### Tasks & Operator Chains
-
-For distributed execution, Flink *chains* operator subtasks together into *tasks*. Each task is executed by one thread.
-Chaining operators together into tasks is a useful optimization: it reduces the overhead of thread-to-thread
-handover and buffering, and increases overall throughput while decreasing latency.
-The chaining behavior can be configured in the APIs.
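-
-A sketch of the corresponding hooks in the DataStream API (assuming a stream `someStream`, map functions `f1` and `f2`, and the surrounding `env`):
-
-{% highlight scala %}
-someStream.map(f1).startNewChain()    // begin a new chain at this operator
-someStream.map(f2).disableChaining()  // never chain this operator
-env.disableOperatorChaining()         // disable chaining for the whole job
-{% endhighlight %}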
-
-The sample dataflow in the figure below is executed with five subtasks, and hence with five parallel threads.
-
-<img src="fig/tasks_chains.svg" alt="Operator chaining into Tasks" class="offset" width="80%" />
-
-{% top %}
-
-## Distributed Execution
-
-**Master, Worker, Client**
-
-The Flink runtime consists of two types of processes:
-
-  - The **master** processes (also called *JobManagers*) coordinate the distributed execution. They schedule tasks, coordinate
-    checkpoints, coordinate recovery on failures, etc.
-
-    There is always at least one master process. A high-availability setup will have multiple master processes, out of
-    which one is always the *leader*, and the others are *standby*.
-
-  - The **worker** processes (also called *TaskManagers*) execute the *tasks* (or more specifically, the subtasks) of a dataflow,
-    and buffer and exchange the data *streams*.
-
-    There must always be at least one worker process.
-
-The master and worker processes can be started in an arbitrary fashion: Directly on the machines, via containers, or via
-resource frameworks like YARN. Workers connect to masters, announcing themselves as available, and get work assigned.
-
-The **client** is not part of the runtime and program execution, but is used to prepare and send a dataflow to the master.
-After that, the client can disconnect, or stay connected to receive progress reports. The client runs either as part of the
-Java/Scala program that triggers the execution, or in the command line process `./bin/flink run ...`.
-
-<img src="fig/processes.svg" alt="The processes involved in executing a Flink dataflow" class="offset" width="80%" />
-
-{% top %}
-
-### Workers, Slots, Resources
-
-Each worker (TaskManager) is a *JVM process*, and may execute one or more subtasks in separate threads.
-To control how many tasks a worker accepts, a worker has so-called **task slots** (at least one).
-
-Each *task slot* represents a fixed subset of resources of the TaskManager. A TaskManager with three slots, for example,
-will dedicate 1/3 of its managed memory to each slot. Slotting the resources means that a subtask will not
-compete with subtasks from other jobs for managed memory, but instead has a certain amount of reserved
-managed memory. Note that no CPU isolation happens here; slots currently only separate the managed memory of tasks.
-
-Adjusting the number of task slots thus allows users to define how subtasks are isolated against each other.
-Having one slot per TaskManager means each task group runs in a separate JVM (which can be started in a
-separate container, for example). Having multiple slots
-means more subtasks share the same JVM. Tasks in the same JVM share TCP connections (via multiplexing) and
-heartbeat messages. They may also share data sets and data structures, thus reducing the per-task overhead.
-
-<img src="fig/tasks_slots.svg" alt="A TaskManager with Task Slots and Tasks" class="offset" width="80%" />
-
-By default, Flink allows subtasks to share slots if they are subtasks of different tasks, but from the same
-job. The result is that one slot may hold an entire pipeline of the job. Allowing this *slot sharing*
-has two main benefits:
-
-  - A Flink cluster needs exactly as many task slots as the highest parallelism used in the job.
-    There is no need to calculate how many tasks (with varying parallelism) a program contains in total.
-
-  - It is easier to get better resource utilization. Without slot sharing, the non-intensive
-    *source/map()* subtasks would block as many resources as the resource-intensive *window* subtasks.
-    With slot sharing, increasing the base parallelism from two to six yields full utilization of the
-    slotted resources, while still making sure that each TaskManager gets only a fair share of the
-    heavy subtasks.
-
-The slot sharing behavior can be controlled in the APIs, to prevent sharing where it is undesirable.
-The mechanism for this is *resource groups*, which define which (sub)tasks may share slots.
-
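-For example, a minimal sketch with the Java DataStream API (the group name and operators are illustrative):
-
-{% highlight java %}
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-env.generateSequence(0, 1000)
-    .map(x -> x * x).slotSharingGroup("heavy")  // isolate the heavy stage in its own group
-    .print();                                   // downstream operators inherit the group
-
-env.execute("Slot sharing sketch");
-{% endhighlight %}
-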
-As a rule of thumb, a good default number of task slots is the number of CPU cores.
-With hyper-threading, each slot then takes two or more hardware thread contexts.
-
-<img src="fig/slot_sharing.svg" alt="TaskManagers with shared Task Slots" class="offset" width="80%" />
-
-{% top %}
-
-## Time and Windows
-
-Aggregating events (e.g., counts, sums) works slightly differently on streams than in batch processing.
-For example, it is impossible to first count all elements in the stream and then return the count,
-because streams are in general infinite (unbounded). Instead, aggregates on streams (counts, sums, etc.)
-are scoped by **windows**, such as *"count over the last 5 minutes"*, or *"sum of the last 100 elements"*.
-
-Windows can be *time driven* (example: every 30 seconds) or *data driven* (example: every 100 elements).
-One typically distinguishes different types of windows, such as *tumbling windows* (no overlap),
-*sliding windows* (with overlap), and *session windows* (punctuated by a gap of inactivity).
-
-<img src="fig/windows.svg" alt="Time- and Count Windows" class="offset" width="80%" />
-
-More window examples can be found in this [blog post](https://flink.apache.org/news/2015/12/04/Introducing-windows.html).
-
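-For example, a minimal sketch with the Java DataStream API, where `counts` stands in for a stream of `(word, count)` pairs:
-
-{% highlight java %}
-import org.apache.flink.streaming.api.windowing.time.Time;
-
-// tumbling time window: count over the last 5 minutes, per key
-counts.keyBy(0)
-      .timeWindow(Time.minutes(5))
-      .sum(1);
-
-// count window: sum of the last 100 elements, per key
-counts.keyBy(0)
-      .countWindow(100)
-      .sum(1);
-
-// sliding time window: 5 minutes of data, evaluated every minute
-counts.keyBy(0)
-      .timeWindow(Time.minutes(5), Time.minutes(1))
-      .sum(1);
-{% endhighlight %}
-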
-{% top %}
-
-### Time
-
-When referring to time in a streaming program (for example to define windows), one can refer to different notions
-of time:
-
-  - **Event Time** is the time when an event was created. It is usually described by a timestamp in the events,
-    for example attached by the producing sensor, or the producing service. Flink accesses event timestamps
-    via [timestamp assigners]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html).
-
-  - **Ingestion Time** is the time when an event enters the Flink dataflow at the source operator.
-
-  - **Processing Time** is the local time at each operator that performs a time-based operation.
-
-<img src="fig/event_ingestion_processing_time.svg" alt="Event Time, Ingestion Time, and Processing Time" class="offset" width="80%" />
-
-More details on how to handle time are in the [event time docs]({{ site.baseurl }}/apis/streaming/event_time.html).
-
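-A program selects the notion of time it operates on; as a minimal sketch with the Java DataStream API:
-
-{% highlight java %}
-import org.apache.flink.streaming.api.TimeCharacteristic;
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-// time-based operations (such as windows) then use event time;
-// the alternatives are IngestionTime and ProcessingTime
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
-{% endhighlight %}
-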
-{% top %}
-
-## State and Fault Tolerance
-
-While many operations in a dataflow simply look at one individual *event at a time* (for example an event parser),
-some operations remember information across individual events (for example window operators).
-These operations are called **stateful**.
-
-The state of stateful operations is maintained in what can be thought of as an embedded key/value store.
-The state is partitioned and distributed strictly together with the streams that are read by the
-stateful operators. Hence, access to the key/value state is only possible on *keyed streams*, after a *keyBy()* function,
-and is restricted to the values of the current event's key. Aligning the keys of streams and state
-makes sure that all state updates are local operations, guaranteeing consistency without transaction overhead.
-This alignment also allows Flink to redistribute the state and adjust the stream partitioning transparently.
-
-<img src="fig/state_partitioning.svg" alt="State and Partitioning" class="offset" width="50%" />
-
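-As a minimal sketch of working with keyed state in the Java API (the function, state name, and types are illustrative):
-
-{% highlight java %}
-import org.apache.flink.api.common.functions.RichFlatMapFunction;
-import org.apache.flink.api.common.state.ValueState;
-import org.apache.flink.api.common.state.ValueStateDescriptor;
-import org.apache.flink.api.java.tuple.Tuple2;
-import org.apache.flink.configuration.Configuration;
-import org.apache.flink.util.Collector;
-
-public class CountPerKey extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {
-
-    private transient ValueState<Long> count;
-
-    @Override
-    public void open(Configuration parameters) {
-        // the state handle is scoped to the key of the current element
-        count = getRuntimeContext().getState(
-                new ValueStateDescriptor<>("count", Long.class, 0L));
-    }
-
-    @Override
-    public void flatMap(Tuple2<String, Long> in, Collector<Tuple2<String, Long>> out) throws Exception {
-        long newCount = count.value() + 1;
-        count.update(newCount);
-        out.collect(Tuple2.of(in.f0, newCount));
-    }
-}
-
-// only valid on a keyed stream, e.g.: stream.keyBy(0).flatMap(new CountPerKey())
-{% endhighlight %}
-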
-{% top %}
-
-### Checkpoints for Fault Tolerance
-
-Flink implements fault tolerance using a combination of **stream replay** and **checkpoints**. A checkpoint
-defines a consistent point in streams and state from which a streaming dataflow can resume, and maintain consistency
-*(exactly-once processing semantics)*. Upon recovery, the events since the last checkpoint are replayed from the input streams, and the state updates are recomputed.
-
-The checkpoint interval is a means of trading off the overhead of fault tolerance during execution against the recovery
-time (the number of events that need to be replayed).
-
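-For example, as a minimal sketch with the Java DataStream API:
-
-{% highlight java %}
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-// draw a checkpoint every 5 seconds: a shorter interval means fewer events
-// to replay on recovery, at the cost of more overhead during execution
-env.enableCheckpointing(5000);
-{% endhighlight %}
-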
-More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{ site.baseurl }}/internals/stream_checkpointing.html).
-
-<img src="fig/checkpoints.svg" alt="checkpoints and snapshots" class="offset" width="60%" />
-
-{% top %}
-
-### State Backends
-
-The exact data structures in which the key/value indexes are stored depend on the chosen **state backend**. One state backend
-stores data in an in-memory hash map, another state backend uses [RocksDB](http://rocksdb.org) as the key/value index.
-In addition to defining the data structure that holds the state, the state backends also implement the logic to
-take a point-in-time snapshot of the key/value state and store that snapshot as part of a checkpoint.
-
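-For example, a minimal sketch with the Java DataStream API (the checkpoint URI is a placeholder):
-
-{% highlight java %}
-import org.apache.flink.runtime.state.filesystem.FsStateBackend;
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-// keep working state on the JVM heap, snapshot it to a file system on checkpoints
-env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));
-{% endhighlight %}
-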
-{% top %}
-
-## Batch on Streaming
-
-Flink executes batch programs as a special case of streaming programs, where the streams are bounded (finite number of elements).
-A *DataSet* is treated internally as a stream of data. The concepts above thus apply to batch programs in the
-same way as they apply to streaming programs, with minor exceptions:
-
-  - Programs in the DataSet API do not use checkpoints. Recovery happens by fully replaying the streams.
-    That is possible, because inputs are bounded. This pushes the cost more towards the recovery,
-    but makes the regular processing cheaper, because it avoids checkpoints.
-
-  - Stateful operations in the DataSet API use simplified in-memory/out-of-core data structures, rather than
-    key/value indexes.
-
-  - The DataSet API introduces special synchronized (superstep-based) iterations, which are only possible on
-    bounded streams (see the sketch below). For details, check out the [iteration docs]({{ site.baseurl }}/apis/batch/iterations.html).
-
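-A minimal sketch of such a superstep-based iteration in the DataSet API (the bound of ten supersteps and the increment logic are illustrative):
-
-{% highlight java %}
-import org.apache.flink.api.java.DataSet;
-import org.apache.flink.api.java.ExecutionEnvironment;
-import org.apache.flink.api.java.operators.IterativeDataSet;
-
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// each superstep consumes the full result of the previous one,
-// which is only possible because the input is bounded
-IterativeDataSet<Long> initial = env.fromElements(0L).iterate(10);
-
-DataSet<Long> incremented = initial.map(x -> x + 1);
-
-initial.closeWith(incremented).print();
-{% endhighlight %}
-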
-{% top %}


[88/89] [abbrv] flink git commit: [FLINK-4355] [cluster management] Add tests for the TaskManager -> ResourceManager registration.

Posted by se...@apache.org.
[FLINK-4355] [cluster management] Add tests for the TaskManager -> ResourceManager registration.

This closes #2395.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/23048b55
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/23048b55
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/23048b55

Branch: refs/heads/flip-6
Commit: 23048b55bae8a39718428496052a37d022166811
Parents: 728f266
Author: Stephan Ewen <se...@apache.org>
Authored: Fri Aug 19 23:45:54 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:05 2016 +0200

----------------------------------------------------------------------
 .../rpc/registration/RetryingRegistration.java  |   4 +
 .../runtime/rpc/taskexecutor/SlotReport.java    |  38 ---
 .../runtime/rpc/taskexecutor/TaskExecutor.java  |  12 +
 ...TaskExecutorToResourceManagerConnection.java |   4 +
 .../TestingHighAvailabilityServices.java        |  53 +++
 .../flink/runtime/rpc/TestingGatewayBase.java   |  18 +-
 .../registration/RetryingRegistrationTest.java  | 336 +++++++++++++++++++
 .../registration/TestRegistrationGateway.java   |  85 +++++
 .../rpc/taskexecutor/TaskExecutorTest.java      |  92 ++++-
 9 files changed, 602 insertions(+), 40 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
index 4c93684..dcb5011 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
@@ -58,12 +58,16 @@ public abstract class RetryingRegistration<Gateway extends RpcGateway, Success e
 	//  default configuration values
 	// ------------------------------------------------------------------------
 
+	/** default value for the initial registration timeout (milliseconds) */
 	private static final long INITIAL_REGISTRATION_TIMEOUT_MILLIS = 100;
 
+	/** default value for the maximum registration timeout, after exponential back-off (milliseconds) */
 	private static final long MAX_REGISTRATION_TIMEOUT_MILLIS = 30000;
 
+	/** The pause (milliseconds) made after a registration attempt caused an exception (other than timeout) */
 	private static final long ERROR_REGISTRATION_DELAY_MILLIS = 10000;
 
+	/** The pause (milliseconds) made after the registration attempt was refused */
 	private static final long REFUSED_REGISTRATION_DELAY_MILLIS = 30000;
 
 	// ------------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/SlotReport.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/SlotReport.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/SlotReport.java
deleted file mode 100644
index e42fa4a..0000000
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/SlotReport.java
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.runtime.rpc.taskexecutor;
-
-import java.io.Serializable;
-
-/**
- * A report about the current status of all slots of the TaskExecutor, describing
- * which slots are available and allocated, and what jobs (JobManagers) the allocated slots
- * have been allocated to.
- */
-public class SlotReport implements Serializable{
-
-	private static final long serialVersionUID = 1L;
-
-	// ------------------------------------------------------------------------
-	
-	@Override
-	public String toString() {
-		return "SlotReport";
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
index 1a637bb..f201e00 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.runtime.rpc.taskexecutor;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.runtime.clusterframework.types.ResourceID;
 import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalListener;
@@ -72,6 +73,8 @@ public class TaskExecutor extends RpcEndpoint<TaskExecutorGateway> {
 
 	@Override
 	public void start() {
+		super.start();
+
 		// start by connecting to the ResourceManager
 		try {
 			haServices.getResourceManagerLeaderRetriever().start(new ResourceManagerLeaderListener());
@@ -148,6 +151,15 @@ public class TaskExecutor extends RpcEndpoint<TaskExecutorGateway> {
 	}
 
 	// ------------------------------------------------------------------------
+	//  Access to fields for testing
+	// ------------------------------------------------------------------------
+
+	@VisibleForTesting
+	TaskExecutorToResourceManagerConnection getResourceManagerConnection() {
+		return resourceManagerConnection;
+	}
+
+	// ------------------------------------------------------------------------
 	//  Utility classes
 	// ------------------------------------------------------------------------
 

http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java
index ef75862..f398b7d 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java
@@ -40,6 +40,9 @@ import java.util.concurrent.TimeUnit;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.apache.flink.util.Preconditions.checkState;
 
+/**
+ * The connection between a TaskExecutor and the ResourceManager.
+ */
 public class TaskExecutorToResourceManagerConnection {
 
 	/** the logger for all log messages of this class */
@@ -87,6 +90,7 @@ public class TaskExecutorToResourceManagerConnection {
 				log, taskExecutor.getRpcService(),
 				resourceManagerAddress, resourceManagerLeaderId,
 				taskExecutor.getAddress(), taskExecutor.getResourceID());
+		registration.startRegistration();
 
 		Future<Tuple2<ResourceManagerGateway, TaskExecutorRegistrationSuccess>> future = registration.getFuture();
 		

http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java b/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java
new file mode 100644
index 0000000..3a9f943
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/TestingHighAvailabilityServices.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.highavailability;
+
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
+
+/**
+ * A variant of the HighAvailabilityServices for testing. Each individual service can be set
+ * to an arbitrary implementation, such as a mock or default service.
+ */
+public class TestingHighAvailabilityServices implements HighAvailabilityServices {
+
+	private volatile LeaderRetrievalService resourceManagerLeaderRetriever;
+
+
+	// ------------------------------------------------------------------------
+	//  Setters for mock / testing implementations
+	// ------------------------------------------------------------------------
+
+	public void setResourceManagerLeaderRetriever(LeaderRetrievalService resourceManagerLeaderRetriever) {
+		this.resourceManagerLeaderRetriever = resourceManagerLeaderRetriever;
+	}
+	
+	// ------------------------------------------------------------------------
+	//  HA Services Methods
+	// ------------------------------------------------------------------------
+
+	@Override
+	public LeaderRetrievalService getResourceManagerLeaderRetriever() throws Exception {
+		LeaderRetrievalService service = this.resourceManagerLeaderRetriever;
+		if (service != null) {
+			return service;
+		} else {
+			throw new IllegalStateException("ResourceManagerLeaderRetriever has not been set");
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java
index 4256135..8133a87 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/TestingGatewayBase.java
@@ -34,8 +34,15 @@ public abstract class TestingGatewayBase implements RpcGateway {
 
 	private final ScheduledExecutorService executor;
 
-	protected TestingGatewayBase() {
+	private final String address;
+
+	protected TestingGatewayBase(final String address) {
 		this.executor = Executors.newSingleThreadScheduledExecutor();
+		this.address = address;
+	}
+
+	protected TestingGatewayBase() {
+		this("localhost");
 	}
 
 	// ------------------------------------------------------------------------
@@ -53,6 +60,15 @@ public abstract class TestingGatewayBase implements RpcGateway {
 	}
 
 	// ------------------------------------------------------------------------
+	//  Base class methods
+	// ------------------------------------------------------------------------
+
+	@Override
+	public String getAddress() {
+		return address;
+	}
+
+	// ------------------------------------------------------------------------
 	//  utilities
 	// ------------------------------------------------------------------------
 

http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/registration/RetryingRegistrationTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/registration/RetryingRegistrationTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/registration/RetryingRegistrationTest.java
new file mode 100644
index 0000000..9508825
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/registration/RetryingRegistrationTest.java
@@ -0,0 +1,336 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.registration;
+
+import akka.dispatch.Futures;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.TestingRpcService;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.Test;
+
+import org.slf4j.LoggerFactory;
+
+import scala.concurrent.Await;
+import scala.concurrent.ExecutionContext$;
+import scala.concurrent.Future;
+import scala.concurrent.Promise;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeoutException;
+
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.junit.Assert.*;
+import static org.mockito.Mockito.*;
+
+/**
+ * Tests for the generic retrying registration class, validating the failure, retry, and back-off behavior.
+ */
+public class RetryingRegistrationTest extends TestLogger {
+
+	@Test
+	public void testSimpleSuccessfulRegistration() throws Exception {
+		final String testId = "laissez les bon temps roulez";
+		final String testEndpointAddress = "<test-address>";
+		final UUID leaderId = UUID.randomUUID();
+
+		// an endpoint that immediately returns success
+		TestRegistrationGateway testGateway = new TestRegistrationGateway(new TestRegistrationSuccess(testId));
+		TestingRpcService rpc = new TestingRpcService();
+
+		try {
+			rpc.registerGateway(testEndpointAddress, testGateway);
+
+			TestRetryingRegistration registration = new TestRetryingRegistration(rpc, testEndpointAddress, leaderId);
+			registration.startRegistration();
+
+			Future<Tuple2<TestRegistrationGateway, TestRegistrationSuccess>> future = registration.getFuture();
+			assertNotNull(future);
+
+			// multiple accesses return the same future
+			assertEquals(future, registration.getFuture());
+
+			Tuple2<TestRegistrationGateway, TestRegistrationSuccess> success = 
+					Await.result(future, new FiniteDuration(10, SECONDS));
+
+			// validate correct invocation and result
+			assertEquals(testId, success.f1.getCorrelationId());
+			assertEquals(leaderId, testGateway.getInvocations().take().leaderId());
+		}
+		finally {
+			testGateway.stop();
+			rpc.stopService();
+		}
+	}
+	
+	@Test
+	public void testPropagateFailures() throws Exception {
+		final String testExceptionMessage = "testExceptionMessage";
+
+		// RPC service that fails with exception upon the connection
+		RpcService rpc = mock(RpcService.class);
+		when(rpc.connect(anyString(), any(Class.class))).thenThrow(new RuntimeException(testExceptionMessage));
+
+		TestRetryingRegistration registration = new TestRetryingRegistration(rpc, "testaddress", UUID.randomUUID());
+		registration.startRegistration();
+
+		Future<?> future = registration.getFuture();
+		assertTrue(future.failed().isCompleted());
+
+		assertEquals(testExceptionMessage, future.failed().value().get().get().getMessage());
+	}
+
+	@Test
+	public void testRetryConnectOnFailure() throws Exception {
+		final String testId = "laissez les bon temps roulez";
+		final UUID leaderId = UUID.randomUUID();
+
+		ExecutorService executor = Executors.newCachedThreadPool();
+		TestRegistrationGateway testGateway = new TestRegistrationGateway(new TestRegistrationSuccess(testId));
+
+		try {
+			// RPC service that fails upon the first connection, but succeeds on the second
+			RpcService rpc = mock(RpcService.class);
+			when(rpc.connect(anyString(), any(Class.class))).thenReturn(
+					Futures.failed(new Exception("test connect failure")),  // first connection attempt fails
+					Futures.successful(testGateway)                         // second connection attempt succeeds
+			);
+			when(rpc.getExecutionContext()).thenReturn(ExecutionContext$.MODULE$.fromExecutor(executor));
+
+			TestRetryingRegistration registration = new TestRetryingRegistration(rpc, "foobar address", leaderId);
+			registration.startRegistration();
+
+			Tuple2<TestRegistrationGateway, TestRegistrationSuccess> success =
+					Await.result(registration.getFuture(), new FiniteDuration(10, SECONDS));
+
+			// validate correct invocation and result
+			assertEquals(testId, success.f1.getCorrelationId());
+			assertEquals(leaderId, testGateway.getInvocations().take().leaderId());
+		}
+		finally {
+			testGateway.stop();
+			executor.shutdown();
+		}
+	}
+
+	@Test
+	public void testRetriesOnTimeouts() throws Exception {
+		final String testId = "rien ne va plus";
+		final String testEndpointAddress = "<test-address>";
+		final UUID leaderId = UUID.randomUUID();
+
+		// an endpoint that immediately returns futures with timeouts before returning a successful future
+		TestRegistrationGateway testGateway = new TestRegistrationGateway(
+				null, // timeout
+				null, // timeout
+				new TestRegistrationSuccess(testId) // success
+		);
+
+		TestingRpcService rpc = new TestingRpcService();
+
+		try {
+			rpc.registerGateway(testEndpointAddress, testGateway);
+	
+			TestRetryingRegistration registration = new TestRetryingRegistration(rpc, testEndpointAddress, leaderId);
+	
+			long started = System.nanoTime();
+			registration.startRegistration();
+	
+			Future<Tuple2<TestRegistrationGateway, TestRegistrationSuccess>> future = registration.getFuture();
+			Tuple2<TestRegistrationGateway, TestRegistrationSuccess> success =
+					Await.result(future, new FiniteDuration(10, SECONDS));
+	
+			long finished = System.nanoTime();
+			long elapsedMillis = (finished - started) / 1000000;
+	
+			// validate correct invocation and result
+			assertEquals(testId, success.f1.getCorrelationId());
+			assertEquals(leaderId, testGateway.getInvocations().take().leaderId());
+	
+			// validate that some retry-delay / back-off behavior happened
+			assertTrue("retries did not properly back off", elapsedMillis >= 3 * TestRetryingRegistration.INITIAL_TIMEOUT);
+		}
+		finally {
+			rpc.stopService();
+			testGateway.stop();
+		}
+	}
+
+	@Test
+	public void testDecline() throws Exception {
+		final String testId = "qui a coupe le fromage";
+		final String testEndpointAddress = "<test-address>";
+		final UUID leaderId = UUID.randomUUID();
+
+		TestingRpcService rpc = new TestingRpcService();
+
+		TestRegistrationGateway testGateway = new TestRegistrationGateway(
+				null, // timeout
+				new RegistrationResponse.Decline("no reason "),
+				null, // timeout
+				new TestRegistrationSuccess(testId) // success
+		);
+
+		try {
+			rpc.registerGateway(testEndpointAddress, testGateway);
+
+			TestRetryingRegistration registration = new TestRetryingRegistration(rpc, testEndpointAddress, leaderId);
+
+			long started = System.nanoTime();
+			registration.startRegistration();
+	
+			Future<Tuple2<TestRegistrationGateway, TestRegistrationSuccess>> future = registration.getFuture();
+			Tuple2<TestRegistrationGateway, TestRegistrationSuccess> success =
+					Await.result(future, new FiniteDuration(10, SECONDS));
+
+			long finished = System.nanoTime();
+			long elapsedMillis = (finished - started) / 1000000;
+
+			// validate correct invocation and result
+			assertEquals(testId, success.f1.getCorrelationId());
+			assertEquals(leaderId, testGateway.getInvocations().take().leaderId());
+
+			// validate that some retry-delay / back-off behavior happened
+			assertTrue("retries did not properly back off", elapsedMillis >= 
+					2 * TestRetryingRegistration.INITIAL_TIMEOUT + TestRetryingRegistration.DELAY_ON_DECLINE);
+		}
+		finally {
+			testGateway.stop();
+			rpc.stopService();
+		}
+	}
+	
+	@Test
+	@SuppressWarnings("unchecked")
+	public void testRetryOnError() throws Exception {
+		final String testId = "Petit a petit, l'oiseau fait son nid";
+		final String testEndpointAddress = "<test-address>";
+		final UUID leaderId = UUID.randomUUID();
+
+		TestingRpcService rpc = new TestingRpcService();
+
+		try {
+			// gateway that upon calls first responds with a failure, then with a success
+			TestRegistrationGateway testGateway = mock(TestRegistrationGateway.class);
+
+			when(testGateway.registrationCall(any(UUID.class), anyLong())).thenReturn(
+					Futures.<RegistrationResponse>failed(new Exception("test exception")),
+					Futures.<RegistrationResponse>successful(new TestRegistrationSuccess(testId)));
+			
+			rpc.registerGateway(testEndpointAddress, testGateway);
+
+			TestRetryingRegistration registration = new TestRetryingRegistration(rpc, testEndpointAddress, leaderId);
+
+			long started = System.nanoTime();
+			registration.startRegistration();
+
+			Future<Tuple2<TestRegistrationGateway, TestRegistrationSuccess>> future = registration.getFuture();
+			Tuple2<TestRegistrationGateway, TestRegistrationSuccess> success =
+					Await.result(future, new FiniteDuration(10, SECONDS));
+
+			long finished = System.nanoTime();
+			long elapsedMillis = (finished - started) / 1000000;
+			
+			assertEquals(testId, success.f1.getCorrelationId());
+
+			// validate that some retry-delay / back-off behavior happened
+			assertTrue("retries did not properly back off",
+					elapsedMillis >= TestRetryingRegistration.DELAY_ON_ERROR);
+		}
+		finally {
+			rpc.stopService();
+		}
+	}
+
+	@Test
+	public void testCancellation() throws Exception {
+		final String testEndpointAddress = "my-test-address";
+		final UUID leaderId = UUID.randomUUID();
+
+		TestingRpcService rpc = new TestingRpcService();
+
+		try {
+			Promise<RegistrationResponse> result = Futures.promise();
+
+			TestRegistrationGateway testGateway = mock(TestRegistrationGateway.class);
+			when(testGateway.registrationCall(any(UUID.class), anyLong())).thenReturn(result.future());
+
+			rpc.registerGateway(testEndpointAddress, testGateway);
+
+			TestRetryingRegistration registration = new TestRetryingRegistration(rpc, testEndpointAddress, leaderId);
+			registration.startRegistration();
+
+			// cancel and fail the current registration attempt
+			registration.cancel();
+			result.failure(new TimeoutException());
+
+			// there should not be a second registration attempt
+			verify(testGateway, atMost(1)).registrationCall(any(UUID.class), anyLong());
+		}
+		finally {
+			rpc.stopService();
+		}
+	}
+
+	// ------------------------------------------------------------------------
+	//  test registration
+	// ------------------------------------------------------------------------
+
+	private static class TestRegistrationSuccess extends RegistrationResponse.Success {
+		private static final long serialVersionUID = 5542698790917150604L;
+
+		private final String correlationId;
+
+		private TestRegistrationSuccess(String correlationId) {
+			this.correlationId = correlationId;
+		}
+
+		public String getCorrelationId() {
+			return correlationId;
+		}
+	}
+
+	private static class TestRetryingRegistration extends RetryingRegistration<TestRegistrationGateway, TestRegistrationSuccess> {
+
+		// we use shorter timeouts here to speed up the tests
+		static final long INITIAL_TIMEOUT = 20;
+		static final long MAX_TIMEOUT = 200;
+		static final long DELAY_ON_ERROR = 200;
+		static final long DELAY_ON_DECLINE = 200;
+
+		public TestRetryingRegistration(RpcService rpc, String targetAddress, UUID leaderId) {
+			super(LoggerFactory.getLogger(RetryingRegistrationTest.class),
+					rpc, "TestEndpoint",
+					TestRegistrationGateway.class,
+					targetAddress, leaderId,
+					INITIAL_TIMEOUT, MAX_TIMEOUT, DELAY_ON_ERROR, DELAY_ON_DECLINE);
+		}
+
+		@Override
+		protected Future<RegistrationResponse> invokeRegistration(
+				TestRegistrationGateway gateway, UUID leaderId, long timeoutMillis) {
+			return gateway.registrationCall(leaderId, timeoutMillis);
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/registration/TestRegistrationGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/registration/TestRegistrationGateway.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/registration/TestRegistrationGateway.java
new file mode 100644
index 0000000..a049e48
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/registration/TestRegistrationGateway.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.registration;
+
+import akka.dispatch.Futures;
+
+import org.apache.flink.runtime.rpc.TestingGatewayBase;
+import org.apache.flink.util.Preconditions;
+
+import scala.concurrent.Future;
+
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
+public class TestRegistrationGateway extends TestingGatewayBase {
+
+	private final BlockingQueue<RegistrationCall> invocations;
+
+	private final RegistrationResponse[] responses;
+
+	private int pos;
+
+	public TestRegistrationGateway(RegistrationResponse... responses) {
+		Preconditions.checkArgument(responses != null && responses.length > 0);
+
+		this.invocations = new LinkedBlockingQueue<>();
+		this.responses = responses;
+		
+	}
+
+	// ------------------------------------------------------------------------
+
+	public Future<RegistrationResponse> registrationCall(UUID leaderId, long timeout) {
+		invocations.add(new RegistrationCall(leaderId, timeout));
+
+		RegistrationResponse response = responses[pos];
+		if (pos < responses.length - 1) {
+			pos++;
+		}
+
+		// return a completed future (for a proper value), or one that never completes and will time out (for null)
+		return response != null ? Futures.successful(response) : this.<RegistrationResponse>futureWithTimeout(timeout);
+	}
+
+	public BlockingQueue<RegistrationCall> getInvocations() {
+		return invocations;
+	}
+
+	// ------------------------------------------------------------------------
+
+	public static class RegistrationCall {
+		private final UUID leaderId;
+		private final long timeout;
+
+		public RegistrationCall(UUID leaderId, long timeout) {
+			this.leaderId = leaderId;
+			this.timeout = timeout;
+		}
+
+		public UUID leaderId() {
+			return leaderId;
+		}
+
+		public long timeout() {
+			return timeout;
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/23048b55/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
index 9f9bab3..b831ead 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
@@ -18,8 +18,98 @@
 
 package org.apache.flink.runtime.rpc.taskexecutor;
 
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.highavailability.NonHaServices;
+import org.apache.flink.runtime.highavailability.TestingHighAvailabilityServices;
+import org.apache.flink.runtime.leaderelection.TestingLeaderRetrievalService;
+import org.apache.flink.runtime.rpc.TestingRpcService;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
 import org.apache.flink.util.TestLogger;
 
+import org.junit.Test;
+
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.UUID;
+
+import static org.junit.Assert.*;
+import static org.mockito.Mockito.*;
+
 public class TaskExecutorTest extends TestLogger {
-	
+
+	@Test
+	public void testImmediatelyRegistersIfLeaderIsKnown() throws Exception {
+		final ResourceID resourceID = ResourceID.generate();
+		final String resourceManagerAddress = "/resource/manager/address/one";
+
+		final TestingRpcService rpc = new TestingRpcService();
+		try {
+			// register a mock resource manager gateway
+			ResourceManagerGateway rmGateway = mock(ResourceManagerGateway.class);
+			rpc.registerGateway(resourceManagerAddress, rmGateway);
+
+			NonHaServices haServices = new NonHaServices(resourceManagerAddress);
+			TaskExecutor taskManager = new TaskExecutor(rpc, haServices, resourceID);
+			String taskManagerAddress = taskManager.getAddress();
+
+			taskManager.start();
+
+			verify(rmGateway, timeout(5000)).registerTaskExecutor(
+					any(UUID.class), eq(taskManagerAddress), eq(resourceID), any(FiniteDuration.class));
+		}
+		finally {
+			rpc.stopService();
+		}
+	}
+
+	@Test
+	public void testTriggerRegistrationOnLeaderChange() throws Exception {
+		final ResourceID resourceID = ResourceID.generate();
+
+		final String address1 = "/resource/manager/address/one";
+		final String address2 = "/resource/manager/address/two";
+		final UUID leaderId1 = UUID.randomUUID();
+		final UUID leaderId2 = UUID.randomUUID();
+
+		final TestingRpcService rpc = new TestingRpcService();
+		try {
+			// register the mock resource manager gateways
+			ResourceManagerGateway rmGateway1 = mock(ResourceManagerGateway.class);
+			ResourceManagerGateway rmGateway2 = mock(ResourceManagerGateway.class);
+			rpc.registerGateway(address1, rmGateway1);
+			rpc.registerGateway(address2, rmGateway2);
+
+			TestingLeaderRetrievalService testLeaderService = new TestingLeaderRetrievalService();
+
+			TestingHighAvailabilityServices haServices = new TestingHighAvailabilityServices();
+			haServices.setResourceManagerLeaderRetriever(testLeaderService);
+
+			TaskExecutor taskManager = new TaskExecutor(rpc, haServices, resourceID);
+			String taskManagerAddress = taskManager.getAddress();
+			taskManager.start();
+
+			// no connection initially, since there is no leader
+			assertNull(taskManager.getResourceManagerConnection());
+
+			// define a leader and see that a registration happens
+			testLeaderService.notifyListener(address1, leaderId1);
+
+			verify(rmGateway1, timeout(5000)).registerTaskExecutor(
+					eq(leaderId1), eq(taskManagerAddress), eq(resourceID), any(FiniteDuration.class));
+			assertNotNull(taskManager.getResourceManagerConnection());
+
+			// cancel the leader 
+			testLeaderService.notifyListener(null, null);
+
+			// set a new leader, see that a registration happens 
+			testLeaderService.notifyListener(address2, leaderId2);
+
+			verify(rmGateway2, timeout(5000)).registerTaskExecutor(
+					eq(leaderId2), eq(taskManagerAddress), eq(resourceID), any(FiniteDuration.class));
+			assertNotNull(taskManager.getResourceManagerConnection());
+		}
+		finally {
+			rpc.stopService();
+		}
+	}
 }


[52/89] [abbrv] flink git commit: [FLINK-4454] always display JobManager address using LeaderRetrievalService

Posted by se...@apache.org.
[FLINK-4454] always display JobManager address using LeaderRetrievalService

This closes #2406


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/72064558
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/72064558
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/72064558

Branch: refs/heads/flip-6
Commit: 720645587bc58a22db6a8d948f91384da2ecb7b7
Parents: 844c874
Author: Maximilian Michels <mx...@apache.org>
Authored: Mon Aug 22 18:11:45 2016 +0200
Committer: Maximilian Michels <mx...@apache.org>
Committed: Wed Aug 24 11:29:30 2016 +0200

----------------------------------------------------------------------
 .../org/apache/flink/client/CliFrontend.java    |  4 ++--
 .../flink/client/program/ClusterClient.java     | 23 +++++---------------
 .../client/program/StandaloneClusterClient.java |  4 ++--
 3 files changed, 10 insertions(+), 21 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/72064558/flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java
----------------------------------------------------------------------
diff --git a/flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java b/flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java
index 15e1362..c90bc29 100644
--- a/flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java
+++ b/flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java
@@ -845,7 +845,7 @@ public class CliFrontend {
 		CustomCommandLine customCLI = getActiveCustomCommandLine(options.getCommandLine());
 		try {
 			ClusterClient client = customCLI.retrieveCluster(options.getCommandLine(), config);
-			logAndSysout("Using address " + client.getJobManagerAddressFromConfig() + " to connect to JobManager.");
+			logAndSysout("Using address " + client.getJobManagerAddress() + " to connect to JobManager.");
 			return client;
 		} catch (Exception e) {
 			LOG.error("Couldn't retrieve {} cluster.", customCLI.getId(), e);
@@ -896,7 +896,7 @@ public class CliFrontend {
 		}
 
 		// Avoid resolving the JobManager Gateway here to prevent blocking until we invoke the user's program.
-		final InetSocketAddress jobManagerAddress = client.getJobManagerAddressFromConfig();
+		final InetSocketAddress jobManagerAddress = client.getJobManagerAddress();
 		logAndSysout("Using address " + jobManagerAddress.getHostString() + ":" + jobManagerAddress.getPort() + " to connect to JobManager.");
 		logAndSysout("JobManager web interface address " + client.getWebInterfaceURL());
 		return client;

http://git-wip-us.apache.org/repos/asf/flink/blob/72064558/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
----------------------------------------------------------------------
diff --git a/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java b/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
index 2e6a9cc..c3c666b 100644
--- a/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
+++ b/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
@@ -27,7 +27,6 @@ import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 
-import akka.actor.ActorRef;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.JobID;
 import org.apache.flink.api.common.JobSubmissionResult;
@@ -57,6 +56,7 @@ import org.apache.flink.runtime.messages.accumulators.AccumulatorResultsFound;
 import org.apache.flink.runtime.messages.accumulators.RequestAccumulatorResults;
 import org.apache.flink.runtime.messages.JobManagerMessages;
 import org.apache.flink.runtime.net.ConnectionUtils;
+import org.apache.flink.runtime.util.LeaderConnectionInfo;
 import org.apache.flink.runtime.util.LeaderRetrievalUtils;
 import org.apache.flink.util.Preconditions;
 import org.apache.flink.util.SerializedValue;
@@ -232,27 +232,16 @@ public abstract class ClusterClient {
 	}
 
 	/**
-	 * Gets the current JobManager address from the Flink configuration (may change in case of a HA setup).
-	 * @return The address (host and port) of the leading JobManager when it was last retrieved (may be outdated)
-	 */
-	public InetSocketAddress getJobManagerAddressFromConfig() {
-		try {
-			String hostName = flinkConfig.getString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY, null);
-			int port = flinkConfig.getInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, -1);
-			return new InetSocketAddress(hostName, port);
-		} catch (Exception e) {
-			throw new RuntimeException("Failed to retrieve JobManager address", e);
-		}
-	}
-
-	/**
 	 * Gets the current JobManager address (may change in case of a HA setup).
 	 * @return The address (host and port) of the leading JobManager
 	 */
 	public InetSocketAddress getJobManagerAddress() {
 		try {
-			final ActorRef jmActor = getJobManagerGateway().actor();
-			return AkkaUtils.getInetSockeAddressFromAkkaURL(jmActor.path().toSerializationFormat());
+			LeaderConnectionInfo leaderConnectionInfo =
+				LeaderRetrievalUtils.retrieveLeaderConnectionInfo(
+					LeaderRetrievalUtils.createLeaderRetrievalService(flinkConfig), timeout);
+
+			return AkkaUtils.getInetSockeAddressFromAkkaURL(leaderConnectionInfo.getAddress());
 		} catch (Exception e) {
 			throw new RuntimeException("Failed to retrieve JobManager address", e);
 		}

http://git-wip-us.apache.org/repos/asf/flink/blob/72064558/flink-clients/src/main/java/org/apache/flink/client/program/StandaloneClusterClient.java
----------------------------------------------------------------------
diff --git a/flink-clients/src/main/java/org/apache/flink/client/program/StandaloneClusterClient.java b/flink-clients/src/main/java/org/apache/flink/client/program/StandaloneClusterClient.java
index 2c6e101..d25c9d1 100644
--- a/flink-clients/src/main/java/org/apache/flink/client/program/StandaloneClusterClient.java
+++ b/flink-clients/src/main/java/org/apache/flink/client/program/StandaloneClusterClient.java
@@ -44,7 +44,7 @@ public class StandaloneClusterClient extends ClusterClient {
 
 	@Override
 	public String getWebInterfaceURL() {
-		String host = this.getJobManagerAddressFromConfig().getHostString();
+		String host = this.getJobManagerAddress().getHostString();
 		int port = getFlinkConfiguration().getInteger(ConfigConstants.JOB_MANAGER_WEB_PORT_KEY,
 			ConfigConstants.DEFAULT_JOB_MANAGER_WEB_FRONTEND_PORT);
 		return "http://" +  host + ":" + port;
@@ -75,7 +75,7 @@ public class StandaloneClusterClient extends ClusterClient {
 	@Override
 	public String getClusterIdentifier() {
 		// Avoid blocking here by getting the address from the config without resolving the address
-		return "Standalone cluster with JobManager at " + this.getJobManagerAddressFromConfig();
+		return "Standalone cluster with JobManager at " + this.getJobManagerAddress();
 	}
 
 	@Override


[59/89] [abbrv] flink git commit: [FLINK-4264] [gelly] New GraphMetrics driver

Posted by se...@apache.org.
[FLINK-4264] [gelly] New GraphMetrics driver

Updates VertexMetrics analytic, adds directed and undirected
EdgeMetric analytics, and includes a new GraphMetrics driver.

This closes #2295


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/58850f29
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/58850f29
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/58850f29

Branch: refs/heads/flip-6
Commit: 58850f29243559c67bdc07c737bc5772c3f098db
Parents: ad8e665
Author: Greg Hogan <co...@greghogan.com>
Authored: Wed Jul 20 15:44:54 2016 -0400
Committer: Greg Hogan <co...@greghogan.com>
Committed: Wed Aug 24 10:05:12 2016 -0400

----------------------------------------------------------------------
 .../apache/flink/graph/driver/GraphMetrics.java | 232 +++++++++
 .../graph/examples/ClusteringCoefficient.java   |  14 +-
 .../org/apache/flink/graph/examples/HITS.java   |   2 +-
 .../flink/graph/examples/JaccardIndex.java      |   2 +-
 .../flink/graph/examples/TriangleListing.java   |   2 +-
 .../degree/annotate/directed/VertexDegrees.java |   2 +-
 .../directed/LocalClusteringCoefficient.java    |  32 +-
 .../undirected/LocalClusteringCoefficient.java  |  31 +-
 .../library/metric/directed/EdgeMetrics.java    | 507 +++++++++++++++++++
 .../library/metric/directed/VertexMetrics.java  | 102 +++-
 .../library/metric/undirected/EdgeMetrics.java  | 445 ++++++++++++++++
 .../metric/undirected/VertexMetrics.java        |  63 ++-
 .../metric/directed/EdgeMetricsTest.java        |  90 ++++
 .../metric/directed/VertexMetricsTest.java      |  19 +-
 .../metric/undirected/EdgeMetricsTest.java      |  89 ++++
 .../metric/undirected/VertexMetricsTest.java    |  14 +-
 16 files changed, 1600 insertions(+), 46 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/driver/GraphMetrics.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/driver/GraphMetrics.java b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/driver/GraphMetrics.java
new file mode 100644
index 0000000..cc265bb
--- /dev/null
+++ b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/driver/GraphMetrics.java
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.driver;
+
+import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.lang3.text.WordUtils;
+import org.apache.commons.math3.random.JDKRandomGenerator;
+import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.io.CsvOutputFormat;
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAnalytic;
+import org.apache.flink.graph.GraphCsvReader;
+import org.apache.flink.graph.asm.translate.LongValueToIntValue;
+import org.apache.flink.graph.asm.translate.TranslateGraphIds;
+import org.apache.flink.graph.generator.RMatGraph;
+import org.apache.flink.graph.generator.random.JDKRandomGeneratorFactory;
+import org.apache.flink.graph.generator.random.RandomGenerableFactory;
+import org.apache.flink.types.IntValue;
+import org.apache.flink.types.LongValue;
+import org.apache.flink.types.NullValue;
+import org.apache.flink.types.StringValue;
+
+import java.text.NumberFormat;
+
+/**
+ * Computes vertex and edge metrics on a directed or undirected graph.
+ *
+ * @see org.apache.flink.graph.library.metric.directed.EdgeMetrics
+ * @see org.apache.flink.graph.library.metric.directed.VertexMetrics
+ * @see org.apache.flink.graph.library.metric.undirected.EdgeMetrics
+ * @see org.apache.flink.graph.library.metric.undirected.VertexMetrics
+ */
+public class GraphMetrics {
+
+	public static final int DEFAULT_SCALE = 10;
+
+	public static final int DEFAULT_EDGE_FACTOR = 16;
+
+	public static final boolean DEFAULT_CLIP_AND_FLIP = true;
+
+	private static void printUsage() {
+		System.out.println(WordUtils.wrap("Computes vertex and edge metrics on a directed or undirected graph.", 80));
+		System.out.println();
+		System.out.println("usage: GraphMetrics --directed <true | false> --input <csv | rmat [options]>");
+		System.out.println();
+		System.out.println("options:");
+		System.out.println("  --input csv --type <integer | string> [--simplify <true | false>] --input_filename FILENAME [--input_line_delimiter LINE_DELIMITER] [--input_field_delimiter FIELD_DELIMITER]");
+		System.out.println("  --input rmat [--scale SCALE] [--edge_factor EDGE_FACTOR]");
+	}
+
+	public static void main(String[] args) throws Exception {
+		// Set up the execution environment
+		final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+		env.getConfig().enableObjectReuse();
+
+		ParameterTool parameters = ParameterTool.fromArgs(args);
+		if (! parameters.has("directed")) {
+			printUsage();
+			return;
+		}
+		boolean directedAlgorithm = parameters.getBoolean("directed");
+
+		GraphAnalytic vm;
+		GraphAnalytic em;
+
+		switch (parameters.get("input", "")) {
+			case "csv": {
+				String lineDelimiter = StringEscapeUtils.unescapeJava(
+					parameters.get("input_line_delimiter", CsvOutputFormat.DEFAULT_LINE_DELIMITER));
+
+				String fieldDelimiter = StringEscapeUtils.unescapeJava(
+					parameters.get("input_field_delimiter", CsvOutputFormat.DEFAULT_FIELD_DELIMITER));
+
+				GraphCsvReader reader = Graph
+					.fromCsvReader(parameters.get("input_filename"), env)
+						.ignoreCommentsEdges("#")
+						.lineDelimiterEdges(lineDelimiter)
+						.fieldDelimiterEdges(fieldDelimiter);
+
+				switch (parameters.get("type", "")) {
+					case "integer": {
+						Graph<LongValue, NullValue, NullValue> graph = reader
+							.keyType(LongValue.class);
+
+						if (directedAlgorithm) {
+							if (parameters.getBoolean("simplify", false)) {
+								graph = graph
+									.run(new org.apache.flink.graph.asm.simple.directed.Simplify<LongValue, NullValue, NullValue>());
+							}
+
+							vm = graph
+								.run(new org.apache.flink.graph.library.metric.directed.VertexMetrics<LongValue, NullValue, NullValue>());
+							em = graph
+								.run(new org.apache.flink.graph.library.metric.directed.EdgeMetrics<LongValue, NullValue, NullValue>());
+						} else {
+							if (parameters.getBoolean("simplify", false)) {
+								graph = graph
+									.run(new org.apache.flink.graph.asm.simple.undirected.Simplify<LongValue, NullValue, NullValue>(false));
+							}
+
+							vm = graph
+								.run(new org.apache.flink.graph.library.metric.undirected.VertexMetrics<LongValue, NullValue, NullValue>());
+							em = graph
+								.run(new org.apache.flink.graph.library.metric.undirected.EdgeMetrics<LongValue, NullValue, NullValue>());
+						}
+					} break;
+
+					case "string": {
+						Graph<StringValue, NullValue, NullValue> graph = reader
+							.keyType(StringValue.class);
+
+						if (directedAlgorithm) {
+							if (parameters.getBoolean("simplify", false)) {
+								graph = graph
+									.run(new org.apache.flink.graph.asm.simple.directed.Simplify<StringValue, NullValue, NullValue>());
+							}
+
+							vm = graph
+								.run(new org.apache.flink.graph.library.metric.directed.VertexMetrics<StringValue, NullValue, NullValue>());
+							em = graph
+								.run(new org.apache.flink.graph.library.metric.directed.EdgeMetrics<StringValue, NullValue, NullValue>());
+						} else {
+							if (parameters.getBoolean("simplify", false)) {
+								graph = graph
+									.run(new org.apache.flink.graph.asm.simple.undirected.Simplify<StringValue, NullValue, NullValue>(false));
+							}
+
+							vm = graph
+								.run(new org.apache.flink.graph.library.metric.undirected.VertexMetrics<StringValue, NullValue, NullValue>());
+							em = graph
+								.run(new org.apache.flink.graph.library.metric.undirected.EdgeMetrics<StringValue, NullValue, NullValue>());
+						}
+					} break;
+
+					default:
+						printUsage();
+						return;
+				}
+				} break;
+
+			case "rmat": {
+				int scale = parameters.getInt("scale", DEFAULT_SCALE);
+				int edgeFactor = parameters.getInt("edge_factor", DEFAULT_EDGE_FACTOR);
+
+				RandomGenerableFactory<JDKRandomGenerator> rnd = new JDKRandomGeneratorFactory();
+
+				long vertexCount = 1L << scale;
+				long edgeCount = vertexCount * edgeFactor;
+
+				Graph<LongValue, NullValue, NullValue> graph = new RMatGraph<>(env, rnd, vertexCount, edgeCount)
+					.generate();
+
+				if (directedAlgorithm) {
+					if (scale > 32) {
+						Graph<LongValue, NullValue, NullValue> newGraph = graph
+							.run(new org.apache.flink.graph.asm.simple.directed.Simplify<LongValue, NullValue, NullValue>());
+
+						vm = newGraph
+							.run(new org.apache.flink.graph.library.metric.directed.VertexMetrics<LongValue, NullValue, NullValue>());
+						em = newGraph
+							.run(new org.apache.flink.graph.library.metric.directed.EdgeMetrics<LongValue, NullValue, NullValue>());
+					} else {
+						Graph<IntValue, NullValue, NullValue> newGraph = graph
+							.run(new TranslateGraphIds<LongValue, IntValue, NullValue, NullValue>(new LongValueToIntValue()))
+							.run(new org.apache.flink.graph.asm.simple.directed.Simplify<IntValue, NullValue, NullValue>());
+
+						vm = newGraph
+							.run(new org.apache.flink.graph.library.metric.directed.VertexMetrics<IntValue, NullValue, NullValue>());
+						em = newGraph
+							.run(new org.apache.flink.graph.library.metric.directed.EdgeMetrics<IntValue, NullValue, NullValue>());
+					}
+				} else {
+					boolean clipAndFlip = parameters.getBoolean("clip_and_flip", DEFAULT_CLIP_AND_FLIP);
+
+					if (scale > 32) {
+						Graph<LongValue, NullValue, NullValue> newGraph = graph
+							.run(new org.apache.flink.graph.asm.simple.undirected.Simplify<LongValue, NullValue, NullValue>(clipAndFlip));
+
+						vm = newGraph
+							.run(new org.apache.flink.graph.library.metric.undirected.VertexMetrics<LongValue, NullValue, NullValue>());
+						em = newGraph
+							.run(new org.apache.flink.graph.library.metric.undirected.EdgeMetrics<LongValue, NullValue, NullValue>());
+					} else {
+						Graph<IntValue, NullValue, NullValue> newGraph = graph
+							.run(new TranslateGraphIds<LongValue, IntValue, NullValue, NullValue>(new LongValueToIntValue()))
+							.run(new org.apache.flink.graph.asm.simple.undirected.Simplify<IntValue, NullValue, NullValue>(clipAndFlip));
+
+						vm = newGraph
+							.run(new org.apache.flink.graph.library.metric.undirected.VertexMetrics<IntValue, NullValue, NullValue>());
+						em = newGraph
+							.run(new org.apache.flink.graph.library.metric.undirected.EdgeMetrics<IntValue, NullValue, NullValue>());
+					}
+				}
+				} break;
+
+			default:
+				printUsage();
+				return;
+		}
+
+		env.execute("Graph Metrics");
+
+		System.out.print("Vertex metrics:\n  ");
+		System.out.println(vm.getResult().toString().replace(";", "\n "));
+		System.out.print("\nEdge metrics:\n  ");
+		System.out.println(em.getResult().toString().replace(";", "\n "));
+
+		JobExecutionResult result = env.getLastJobExecutionResult();
+
+		NumberFormat nf = NumberFormat.getInstance();
+		System.out.println("\nExecution runtime: " + nf.format(result.getNetRuntime()) + " ms");
+	}
+}
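
For reference, a minimal sketch of driving these analytics directly from
Java, using only classes referenced in the driver above (the scale and edge
factor below are arbitrary illustration values, and imports follow the
driver's):

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    RandomGenerableFactory<JDKRandomGenerator> rnd = new JDKRandomGeneratorFactory();
    long vertexCount = 1L << 10;        // scale 10, for illustration only
    long edgeCount = vertexCount * 16;  // edge factor 16, for illustration only

    Graph<LongValue, NullValue, NullValue> graph =
        new RMatGraph<>(env, rnd, vertexCount, edgeCount).generate();

    GraphAnalytic vm = graph.run(
        new org.apache.flink.graph.library.metric.directed.VertexMetrics<LongValue, NullValue, NullValue>());
    GraphAnalytic em = graph.run(
        new org.apache.flink.graph.library.metric.directed.EdgeMetrics<LongValue, NullValue, NullValue>());

    env.execute("Graph Metrics");

    System.out.println(vm.getResult());
    System.out.println(em.getResult());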

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/ClusteringCoefficient.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/ClusteringCoefficient.java b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/ClusteringCoefficient.java
index 8641428..e099e2b 100644
--- a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/ClusteringCoefficient.java
+++ b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/ClusteringCoefficient.java
@@ -72,7 +72,7 @@ public class ClusteringCoefficient {
 		System.out.println(WordUtils.wrap("This algorithm returns tuples containing the vertex ID, the degree of" +
 			" the vertex, and the number of edges between vertex neighbors.", 80));
 		System.out.println();
-		System.out.println("usage: ClusteringCoefficient --directed <true | false> --input <csv | rmat [options]> --output <print | hash | csv [options]");
+		System.out.println("usage: ClusteringCoefficient --directed <true | false> --input <csv | rmat [options]> --output <print | hash | csv [options]>");
 		System.out.println();
 		System.out.println("options:");
 		System.out.println("  --input csv --type <integer | string> --input_filename FILENAME [--input_line_delimiter LINE_DELIMITER] [--input_field_delimiter FIELD_DELIMITER]");
@@ -174,7 +174,8 @@ public class ClusteringCoefficient {
 						gcc = newGraph
 							.run(new org.apache.flink.graph.library.clustering.directed.GlobalClusteringCoefficient<LongValue, NullValue, NullValue>());
 						lcc = newGraph
-							.run(new org.apache.flink.graph.library.clustering.directed.LocalClusteringCoefficient<LongValue, NullValue, NullValue>());
+							.run(new org.apache.flink.graph.library.clustering.directed.LocalClusteringCoefficient<LongValue, NullValue, NullValue>()
+								.setIncludeZeroDegreeVertices(false));
 					} else {
 						Graph<IntValue, NullValue, NullValue> newGraph = graph
 							.run(new TranslateGraphIds<LongValue, IntValue, NullValue, NullValue>(new LongValueToIntValue()))
@@ -183,7 +184,8 @@ public class ClusteringCoefficient {
 						gcc = newGraph
 							.run(new org.apache.flink.graph.library.clustering.directed.GlobalClusteringCoefficient<IntValue, NullValue, NullValue>());
 						lcc = newGraph
-							.run(new org.apache.flink.graph.library.clustering.directed.LocalClusteringCoefficient<IntValue, NullValue, NullValue>());
+							.run(new org.apache.flink.graph.library.clustering.directed.LocalClusteringCoefficient<IntValue, NullValue, NullValue>()
+								.setIncludeZeroDegreeVertices(false));
 					}
 				} else {
 					boolean clipAndFlip = parameters.getBoolean("clip_and_flip", DEFAULT_CLIP_AND_FLIP);
@@ -195,7 +197,8 @@ public class ClusteringCoefficient {
 						gcc = newGraph
 							.run(new org.apache.flink.graph.library.clustering.undirected.GlobalClusteringCoefficient<LongValue, NullValue, NullValue>());
 						lcc = newGraph
-							.run(new org.apache.flink.graph.library.clustering.undirected.LocalClusteringCoefficient<LongValue, NullValue, NullValue>());
+							.run(new org.apache.flink.graph.library.clustering.undirected.LocalClusteringCoefficient<LongValue, NullValue, NullValue>()
+								.setIncludeZeroDegreeVertices(false));
 					} else {
 						Graph<IntValue, NullValue, NullValue> newGraph = graph
 							.run(new TranslateGraphIds<LongValue, IntValue, NullValue, NullValue>(new LongValueToIntValue()))
@@ -204,7 +207,8 @@ public class ClusteringCoefficient {
 						gcc = newGraph
 							.run(new org.apache.flink.graph.library.clustering.undirected.GlobalClusteringCoefficient<IntValue, NullValue, NullValue>());
 						lcc = newGraph
-							.run(new org.apache.flink.graph.library.clustering.undirected.LocalClusteringCoefficient<IntValue, NullValue, NullValue>());
+							.run(new org.apache.flink.graph.library.clustering.undirected.LocalClusteringCoefficient<IntValue, NullValue, NullValue>()
+								.setIncludeZeroDegreeVertices(false));
 					}
 				}
 			} break;

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/HITS.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/HITS.java b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/HITS.java
index c772a3a..59612d9 100644
--- a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/HITS.java
+++ b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/HITS.java
@@ -65,7 +65,7 @@ public class HITS {
 			" scores for every vertex in a directed graph. A good \"hub\" links to good \"authorities\"" +
 			" and good \"authorities\" are linked from good \"hubs\".", 80));
 		System.out.println();
-		System.out.println("usage: HITS --input <csv | rmat [options]> --output <print | hash | csv [options]");
+		System.out.println("usage: HITS --input <csv | rmat [options]> --output <print | hash | csv [options]>");
 		System.out.println();
 		System.out.println("options:");
 		System.out.println("  --input csv --type <integer | string> --input_filename FILENAME [--input_line_delimiter LINE_DELIMITER] [--input_field_delimiter FIELD_DELIMITER]");

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/JaccardIndex.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/JaccardIndex.java b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/JaccardIndex.java
index 2158fa2..824aab7 100644
--- a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/JaccardIndex.java
+++ b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/JaccardIndex.java
@@ -69,7 +69,7 @@ public class JaccardIndex {
 		System.out.println(WordUtils.wrap("This algorithm returns 4-tuples containing two vertex IDs, the" +
 			" number of shared neighbors, and the number of distinct neighbors.", 80));
 		System.out.println();
-		System.out.println("usage: JaccardIndex --input <csv | rmat [options]> --output <print | hash | csv [options]");
+		System.out.println("usage: JaccardIndex --input <csv | rmat [options]> --output <print | hash | csv [options]>");
 		System.out.println();
 		System.out.println("options:");
 		System.out.println("  --input csv --type <integer | string> --input_filename FILENAME [--input_line_delimiter LINE_DELIMITER] [--input_field_delimiter FIELD_DELIMITER]");

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/TriangleListing.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/TriangleListing.java b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/TriangleListing.java
index cd06dde..f3ce708 100644
--- a/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/TriangleListing.java
+++ b/flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/TriangleListing.java
@@ -66,7 +66,7 @@ public class TriangleListing {
 		System.out.println(WordUtils.wrap("This algorithm returns tuples containing the vertex IDs for each triangle and" +
 			" for directed graphs a bitmask indicating the presence of the six potential connecting edges.", 80));
 		System.out.println();
-		System.out.println("usage: TriangleListing --directed <true | false> --input <csv | rmat [options]> --output <print | hash | csv [options]");
+		System.out.println("usage: TriangleListing --directed <true | false> --input <csv | rmat [options]> --output <print | hash | csv [options]>");
 		System.out.println();
 		System.out.println("options:");
 		System.out.println("  --input csv --type <integer | string> --input_filename FILENAME [--input_line_delimiter LINE_DELIMITER] [--input_field_delimiter FIELD_DELIMITER]");

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/asm/degree/annotate/directed/VertexDegrees.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/asm/degree/annotate/directed/VertexDegrees.java b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/asm/degree/annotate/directed/VertexDegrees.java
index 9fef221..84873bc 100644
--- a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/asm/degree/annotate/directed/VertexDegrees.java
+++ b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/asm/degree/annotate/directed/VertexDegrees.java
@@ -86,7 +86,7 @@ extends GraphAlgorithmDelegatingDataSet<K, VV, EV, Vertex<K, Degrees>> {
 
 	@Override
 	protected String getAlgorithmName() {
-		return VertexOutDegree.class.getName();
+		return VertexDegrees.class.getName();
 	}
 
 	@Override

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/directed/LocalClusteringCoefficient.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/directed/LocalClusteringCoefficient.java b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/directed/LocalClusteringCoefficient.java
index e0defcd..22c8b41 100644
--- a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/directed/LocalClusteringCoefficient.java
+++ b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/directed/LocalClusteringCoefficient.java
@@ -32,6 +32,7 @@ import org.apache.flink.graph.asm.degree.annotate.directed.VertexDegrees.Degrees
 import org.apache.flink.graph.library.clustering.directed.LocalClusteringCoefficient.Result;
 import org.apache.flink.graph.utils.Murmur3_32;
 import org.apache.flink.graph.utils.proxy.GraphAlgorithmDelegatingDataSet;
+import org.apache.flink.graph.utils.proxy.OptionalBoolean;
 import org.apache.flink.types.CopyableValue;
 import org.apache.flink.types.LongValue;
 import org.apache.flink.util.Collector;
@@ -59,9 +60,26 @@ public class LocalClusteringCoefficient<K extends Comparable<K> & CopyableValue<
 extends GraphAlgorithmDelegatingDataSet<K, VV, EV, Result<K>> {
 
 	// Optional configuration
+	private OptionalBoolean includeZeroDegreeVertices = new OptionalBoolean(true, true);
+
 	private int littleParallelism = PARALLELISM_DEFAULT;
 
 	/**
+	 * By default the vertex set is checked for zero degree vertices. When this
+	 * flag is disabled, only clustering coefficient scores for vertices with
+	 * a degree of at least one will be produced.
+	 *
+	 * @param includeZeroDegreeVertices whether to output scores for vertices
+	 *                                  with a degree of zero
+	 * @return this
+	 */
+	public LocalClusteringCoefficient<K, VV, EV> setIncludeZeroDegreeVertices(boolean includeZeroDegreeVertices) {
+		this.includeZeroDegreeVertices.set(includeZeroDegreeVertices);
+
+		return this;
+	}
+
+	/**
 	 * Override the parallelism of operators processing small amounts of data.
 	 *
 	 * @param littleParallelism operator parallelism
@@ -90,6 +108,16 @@ extends GraphAlgorithmDelegatingDataSet<K, VV, EV, Result<K>> {
 
 		LocalClusteringCoefficient rhs = (LocalClusteringCoefficient) other;
 
+		// verify that configurations can be merged
+
+		if (includeZeroDegreeVertices.conflictsWith(rhs.includeZeroDegreeVertices)) {
+			return false;
+		}
+
+		// merge configurations
+
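+		// OptionalBoolean distinguishes an unset option from an explicit
+		// setting: conflictsWith() above fails the merge only when both
+		// algorithms explicitly set the flag to different values.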
+		includeZeroDegreeVertices.mergeWith(rhs.includeZeroDegreeVertices);
+
 		littleParallelism = Math.min(littleParallelism, rhs.littleParallelism);
 
 		return true;
@@ -128,8 +156,8 @@ extends GraphAlgorithmDelegatingDataSet<K, VV, EV, Result<K>> {
 		// u, deg(u)
 		DataSet<Vertex<K, Degrees>> vertexDegree = input
 			.run(new VertexDegrees<K, VV, EV>()
-				.setParallelism(littleParallelism)
-				.setIncludeZeroDegreeVertices(true));
+				.setIncludeZeroDegreeVertices(includeZeroDegreeVertices.get())
+				.setParallelism(littleParallelism));
 
 		// u, deg(u), triangle count
 		return vertexDegree
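
A usage sketch for the new option on the directed variant shown here (type
parameters as in the drivers above; with the flag disabled, vertices with a
degree of zero receive no score):

    DataSet<LocalClusteringCoefficient.Result<LongValue>> scores = graph
        .run(new LocalClusteringCoefficient<LongValue, NullValue, NullValue>()
            .setIncludeZeroDegreeVertices(false));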

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/undirected/LocalClusteringCoefficient.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/undirected/LocalClusteringCoefficient.java b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/undirected/LocalClusteringCoefficient.java
index cd859d9..4b4bf07 100644
--- a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/undirected/LocalClusteringCoefficient.java
+++ b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/clustering/undirected/LocalClusteringCoefficient.java
@@ -32,6 +32,7 @@ import org.apache.flink.graph.asm.degree.annotate.undirected.VertexDegree;
 import org.apache.flink.graph.library.clustering.undirected.LocalClusteringCoefficient.Result;
 import org.apache.flink.graph.utils.Murmur3_32;
 import org.apache.flink.graph.utils.proxy.GraphAlgorithmDelegatingDataSet;
+import org.apache.flink.graph.utils.proxy.OptionalBoolean;
 import org.apache.flink.types.CopyableValue;
 import org.apache.flink.types.LongValue;
 import org.apache.flink.util.Collector;
@@ -59,9 +60,26 @@ public class LocalClusteringCoefficient<K extends Comparable<K> & CopyableValue<
 extends GraphAlgorithmDelegatingDataSet<K, VV, EV, Result<K>> {
 
 	// Optional configuration
+	private OptionalBoolean includeZeroDegreeVertices = new OptionalBoolean(true, true);
+
 	private int littleParallelism = PARALLELISM_DEFAULT;
 
 	/**
+	 * By default the vertex set is checked for zero degree vertices. When this
+	 * flag is disabled, only clustering coefficient scores for vertices with
+	 * a degree of at least one will be produced.
+	 *
+	 * @param includeZeroDegreeVertices whether to output scores for vertices
+	 *                                  with a degree of zero
+	 * @return this
+	 */
+	public LocalClusteringCoefficient<K, VV, EV> setIncludeZeroDegreeVertices(boolean includeZeroDegreeVertices) {
+		this.includeZeroDegreeVertices.set(includeZeroDegreeVertices);
+
+		return this;
+	}
+
+	/**
 	 * Override the parallelism of operators processing small amounts of data.
 	 *
 	 * @param littleParallelism operator parallelism
@@ -91,6 +109,15 @@ extends GraphAlgorithmDelegatingDataSet<K, VV, EV, Result<K>> {
 
 		LocalClusteringCoefficient rhs = (LocalClusteringCoefficient) other;
 
+		// verify that configurations can be merged
+
+		if (includeZeroDegreeVertices.conflictsWith(rhs.includeZeroDegreeVertices)) {
+			return false;
+		}
+
+		// merge configurations
+
+		includeZeroDegreeVertices.mergeWith(rhs.includeZeroDegreeVertices);
 		littleParallelism = Math.min(littleParallelism, rhs.littleParallelism);
 
 		return true;
@@ -129,8 +156,8 @@ extends GraphAlgorithmDelegatingDataSet<K, VV, EV, Result<K>> {
 		// u, deg(u)
 		DataSet<Vertex<K, LongValue>> vertexDegree = input
 			.run(new VertexDegree<K, VV, EV>()
-				.setParallelism(littleParallelism)
-				.setIncludeZeroDegreeVertices(true));
+				.setIncludeZeroDegreeVertices(includeZeroDegreeVertices.get())
+				.setParallelism(littleParallelism));
 
 		// u, deg(u), triangle count
 		return vertexDegree

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/EdgeMetrics.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/EdgeMetrics.java b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/EdgeMetrics.java
new file mode 100644
index 0000000..167e31c
--- /dev/null
+++ b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/EdgeMetrics.java
@@ -0,0 +1,507 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library.metric.directed;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.accumulators.LongCounter;
+import org.apache.flink.api.common.accumulators.LongMaximum;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.io.RichOutputFormat;
+import org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.functions.FunctionAnnotation.ForwardedFields;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.graph.AbstractGraphAnalytic;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.asm.degree.annotate.directed.EdgeDegreesPair;
+import org.apache.flink.graph.asm.degree.annotate.directed.VertexDegrees.Degrees;
+import org.apache.flink.graph.library.metric.directed.EdgeMetrics.Result;
+import org.apache.flink.types.CopyableValue;
+import org.apache.flink.types.LongValue;
+import org.apache.flink.util.AbstractID;
+import org.apache.flink.util.Collector;
+
+import java.io.IOException;
+import java.text.NumberFormat;
+
+import static org.apache.flink.api.common.ExecutionConfig.PARALLELISM_DEFAULT;
+
+/**
+ * Compute the following edge metrics in a directed graph:
+ *  - number of vertices
+ *  - number of edges
+ *  - number of triangle triplets
+ *  - number of rectangle triplets
+ *  - number of triplets
+ *  - maximum degree
+ *  - maximum out degree
+ *  - maximum in degree
+ *  - maximum number of triangle triplets
+ *  - maximum number of rectangle triplets
+ *  - maximum number of triplets
+ *
+ * @param <K> graph ID type
+ * @param <VV> vertex value type
+ * @param <EV> edge value type
+ */
+public class EdgeMetrics<K extends Comparable<K> & CopyableValue<K>, VV, EV>
+extends AbstractGraphAnalytic<K, VV, EV, Result> {
+
+	private String id = new AbstractID().toString();
+
+	private int parallelism = PARALLELISM_DEFAULT;
+
+	/**
+	 * Override the operator parallelism.
+	 *
+	 * @param parallelism operator parallelism
+	 * @return this
+	 */
+	public EdgeMetrics<K, VV, EV> setParallelism(int parallelism) {
+		this.parallelism = parallelism;
+
+		return this;
+	}
+
+	/*
+	 * Implementation notes:
+	 *
+	 * Use aggregator to replace SumEdgeStats when aggregators are rewritten to use
+	 *   a hash-combineable hashed-reduce.
+	 *
+	 * Use distinct to replace ReduceEdgeStats when the combiner can be disabled
+	 *   with a sorted-reduce forced.
+	 */
+
+	@Override
+	public EdgeMetrics<K, VV, EV> run(Graph<K, VV, EV> input)
+			throws Exception {
+		super.run(input);
+
+		// s, t, (d(s), d(t))
+		DataSet<Edge<K, Tuple3<EV, Degrees, Degrees>>> edgeDegreesPair = input
+			.run(new EdgeDegreesPair<K, VV, EV>()
+				.setParallelism(parallelism));
+
+		// s, d(s), count of (u, v) where deg(u) < deg(v) or (deg(u) == deg(v) and u < v)
+		DataSet<Tuple3<K, Degrees, LongValue>> edgeStats = edgeDegreesPair
+			.flatMap(new EdgeStats<K, EV>())
+				.setParallelism(parallelism)
+				.name("Edge stats")
+			.groupBy(0, 1)
+			.reduceGroup(new ReduceEdgeStats<K>())
+				.setParallelism(parallelism)
+				.name("Reduce edge stats")
+			.groupBy(0)
+			.reduce(new SumEdgeStats<K>())
+				.setCombineHint(CombineHint.HASH)
+				.setParallelism(parallelism)
+				.name("Sum edge stats");
+
+		edgeStats
+			.output(new EdgeMetricsHelper<K, EV>(id))
+				.setParallelism(parallelism)
+				.name("Edge metrics");
+
+		return this;
+	}
+
+	@Override
+	public Result getResult() {
+		JobExecutionResult res = env.getLastJobExecutionResult();
+
+		long vertexCount = res.getAccumulatorResult(id + "-0");
+		long edgeCount = res.getAccumulatorResult(id + "-1");
+		long triangleTripletCount = res.getAccumulatorResult(id + "-2");
+		long rectangleTripletCount = res.getAccumulatorResult(id + "-3");
+		long tripletCount = res.getAccumulatorResult(id + "-4");
+		long maximumDegree = res.getAccumulatorResult(id + "-5");
+		long maximumOutDegree = res.getAccumulatorResult(id + "-6");
+		long maximumInDegree = res.getAccumulatorResult(id + "-7");
+		long maximumTriangleTriplets = res.getAccumulatorResult(id + "-8");
+		long maximumRectangleTriplets = res.getAccumulatorResult(id + "-9");
+		long maximumTriplets = res.getAccumulatorResult(id + "-a");
+
+		return new Result(vertexCount, edgeCount, triangleTripletCount, rectangleTripletCount, tripletCount,
+			maximumDegree, maximumOutDegree, maximumInDegree,
+			maximumTriangleTriplets, maximumRectangleTriplets, maximumTriplets);
+	}
+
+	/**
+	 * Produces a pair of tuples. The first tuple contains the source vertex ID,
+	 * the target vertex ID, the source degrees, and the low-order count. The
+	 * second tuple is the same with the source and target roles reversed.
+	 *
+	 * The low-order count is one if the source vertex degree is less than the
+	 * target vertex degree or if the degrees are equal and the source vertex
+	 * ID compares lower than the target vertex ID; otherwise the low-order
+	 * count is zero.
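+	 *
+	 * For example, an edge (u, v) with deg(u) = 2 and deg(v) = 3 emits
+	 * (u, v, Degrees(u), 1) and (v, u, Degrees(v), 0), u being the
+	 * low-order endpoint since its degree is smaller.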
+	 *
+	 * @param <T> ID type
+	 * @param <ET> edge value type
+	 */
+	private static final class EdgeStats<T extends Comparable<T>, ET>
+	implements FlatMapFunction<Edge<T, Tuple3<ET, Degrees, Degrees>>, Tuple4<T, T, Degrees, LongValue>> {
+		private LongValue zero = new LongValue(0);
+
+		private LongValue one = new LongValue(1);
+
+		private Tuple4<T, T, Degrees, LongValue> output = new Tuple4<>();
+
+		@Override
+		public void flatMap(Edge<T, Tuple3<ET, Degrees, Degrees>> edge, Collector<Tuple4<T, T, Degrees, LongValue>> out)
+				throws Exception {
+			Tuple3<ET, Degrees, Degrees> degrees = edge.f2;
+			long sourceDegree = degrees.f1.getDegree().getValue();
+			long targetDegree = degrees.f2.getDegree().getValue();
+
+			boolean ordered = (sourceDegree < targetDegree
+				|| (sourceDegree == targetDegree && edge.f0.compareTo(edge.f1) < 0));
+
+			output.f0 = edge.f0;
+			output.f1 = edge.f1;
+			output.f2 = edge.f2.f1;
+			output.f3 = ordered ? one : zero;
+			out.collect(output);
+
+			output.f0 = edge.f1;
+			output.f1 = edge.f0;
+			output.f2 = edge.f2.f2;
+			output.f3 = ordered ? zero : one;
+			out.collect(output);
+		}
+	}
+
+	/**
+	 * Produces a distinct value for each edge.
+	 *
+	 * @param <T> ID type
+	 */
+	@ForwardedFields("0")
+	private static final class ReduceEdgeStats<T>
+	implements GroupReduceFunction<Tuple4<T, T, Degrees, LongValue>, Tuple3<T, Degrees, LongValue>> {
+		Tuple3<T, Degrees, LongValue> output = new Tuple3<>();
+
+		@Override
+		public void reduce(Iterable<Tuple4<T, T, Degrees, LongValue>> values, Collector<Tuple3<T, Degrees, LongValue>> out)
+				throws Exception {
+			Tuple4<T, T, Degrees, LongValue> value = values.iterator().next();
+
+			output.f0 = value.f0;
+			output.f1 = value.f2;
+			output.f2 = value.f3;
+
+			out.collect(output);
+		}
+	}
+
+	/**
+	 * Sums the low-order counts.
+	 *
+	 * @param <T> ID type
+	 */
+	private static class SumEdgeStats<T>
+	implements ReduceFunction<Tuple3<T, Degrees, LongValue>> {
+		@Override
+		public Tuple3<T, Degrees, LongValue> reduce(Tuple3<T, Degrees, LongValue> value1, Tuple3<T, Degrees, LongValue> value2)
+				throws Exception {
+			value1.f2.setValue(value1.f2.getValue() + value2.f2.getValue());
+			return value1;
+		}
+	}
+
+	/**
+	 * Helper class to collect edge metrics.
+	 *
+	 * @param <T> ID type
+	 */
+	private static class EdgeMetricsHelper<T extends Comparable<T>, ET>
+	extends RichOutputFormat<Tuple3<T, Degrees, LongValue>> {
+		private final String id;
+
+		private long vertexCount;
+		private long edgeCount;
+		private long triangleTripletCount;
+		private long rectangleTripletCount;
+		private long tripletCount;
+		private long maximumDegree;
+		private long maximumOutDegree;
+		private long maximumInDegree;
+		private long maximumTriangleTriplets;
+		private long maximumRectangleTriplets;
+		private long maximumTriplets;
+
+		/**
+		 * This helper class collects edge metrics by scanning over and
+		 * discarding elements from the given DataSet.
+		 *
+		 * The unique id is required because Flink's accumulator namespace is
+		 * shared among all operators.
+		 *
+		 * @param id unique string used for accumulator names
+		 */
+		public EdgeMetricsHelper(String id) {
+			this.id = id;
+		}
+
+		@Override
+		public void configure(Configuration parameters) {}
+
+		@Override
+		public void open(int taskNumber, int numTasks) throws IOException {}
+
+		@Override
+		public void writeRecord(Tuple3<T, Degrees, LongValue> record) throws IOException {
+			Degrees degrees = record.f1;
+			long degree = degrees.getDegree().getValue();
+			long outDegree = degrees.getOutDegree().getValue();
+			long inDegree = degrees.getInDegree().getValue();
+
+			long lowDegree = record.f2.getValue();
+			long highDegree = degree - lowDegree;
+
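+			// Per-vertex pair counts: C(lowDegree, 2) triangle triplets,
+			// C(lowDegree, 2) + lowDegree * highDegree rectangle triplets,
+			// and C(degree, 2) triplets overall.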
+			long triangleTriplets = lowDegree * (lowDegree - 1) / 2;
+			long rectangleTriplets = triangleTriplets + lowDegree * highDegree;
+			long triplets = degree * (degree - 1) / 2;
+
+			vertexCount++;
+			edgeCount += outDegree;
+			triangleTripletCount += triangleTriplets;
+			rectangleTripletCount += rectangleTriplets;
+			tripletCount += triplets;
+			maximumDegree = Math.max(maximumDegree, degree);
+			maximumOutDegree = Math.max(maximumOutDegree, outDegree);
+			maximumInDegree = Math.max(maximumInDegree, inDegree);
+			maximumTriangleTriplets = Math.max(maximumTriangleTriplets, triangleTriplets);
+			maximumRectangleTriplets = Math.max(maximumRectangleTriplets, rectangleTriplets);
+			maximumTriplets = Math.max(maximumTriplets, triplets);
+		}
+
+		@Override
+		public void close() throws IOException {
+			getRuntimeContext().addAccumulator(id + "-0", new LongCounter(vertexCount));
+			getRuntimeContext().addAccumulator(id + "-1", new LongCounter(edgeCount));
+			getRuntimeContext().addAccumulator(id + "-2", new LongCounter(triangleTripletCount));
+			getRuntimeContext().addAccumulator(id + "-3", new LongCounter(rectangleTripletCount));
+			getRuntimeContext().addAccumulator(id + "-4", new LongCounter(tripletCount));
+			getRuntimeContext().addAccumulator(id + "-5", new LongMaximum(maximumDegree));
+			getRuntimeContext().addAccumulator(id + "-6", new LongMaximum(maximumOutDegree));
+			getRuntimeContext().addAccumulator(id + "-7", new LongMaximum(maximumInDegree));
+			getRuntimeContext().addAccumulator(id + "-8", new LongMaximum(maximumTriangleTriplets));
+			getRuntimeContext().addAccumulator(id + "-9", new LongMaximum(maximumRectangleTriplets));
+			getRuntimeContext().addAccumulator(id + "-a", new LongMaximum(maximumTriplets));
+		}
+	}
+
+	/**
+	 * Wraps edge metrics.
+	 */
+	public static class Result {
+		private long vertexCount;
+		private long edgeCount;
+		private long triangleTripletCount;
+		private long rectangleTripletCount;
+		private long tripletCount;
+		private long maximumDegree;
+		private long maximumOutDegree;
+		private long maximumInDegree;
+		private long maximumTriangleTriplets;
+		private long maximumRectangleTriplets;
+		private long maximumTriplets;
+
+		public Result(long vertexCount, long edgeCount, long triangleTripletCount, long rectangleTripletCount, long tripletCount,
+				long maximumDegree, long maximumOutDegree, long maximumInDegree,
+				long maximumTriangleTriplets, long maximumRectangleTriplets, long maximumTriplets) {
+			this.vertexCount = vertexCount;
+			this.edgeCount = edgeCount;
+			this.triangleTripletCount = triangleTripletCount;
+			this.rectangleTripletCount = rectangleTripletCount;
+			this.tripletCount = tripletCount;
+			this.maximumDegree = maximumDegree;
+			this.maximumOutDegree = maximumOutDegree;
+			this.maximumInDegree = maximumInDegree;
+			this.maximumTriangleTriplets = maximumTriangleTriplets;
+			this.maximumRectangleTriplets = maximumRectangleTriplets;
+			this.maximumTriplets = maximumTriplets;
+		}
+
+		/**
+		 * Get the number of vertices.
+		 *
+		 * @return number of vertices
+		 */
+		public long getNumberOfVertices() {
+			return vertexCount;
+		}
+
+		/**
+		 * Get the number of edges.
+		 *
+		 * @return number of edges
+		 */
+		public long getNumberOfEdges() {
+			return edgeCount;
+		}
+
+		/**
+		 * Get the number of triangle triplets.
+		 *
+		 * @return number of triangle triplets
+		 */
+		public long getNumberOfTriangleTriplets() {
+			return triangleTripletCount;
+		}
+
+		/**
+		 * Get the number of rectangle triplets.
+		 *
+		 * @return number of rectangle triplets
+		 */
+		public long getNumberOfRectangleTriplets() {
+			return rectangleTripletCount;
+		}
+
+		/**
+		 * Get the number of triplets.
+		 *
+		 * @return number of triplets
+		 */
+		public long getNumberOfTriplets() {
+			return tripletCount;
+		}
+
+		/**
+		 * Get the maximum degree.
+		 *
+		 * @return maximum degree
+		 */
+		public long getMaximumDegree() {
+			return maximumDegree;
+		}
+
+		/**
+		 * Get the maximum out degree.
+		 *
+		 * @return maximum out degree
+		 */
+		public long getMaximumOutDegree() {
+			return maximumOutDegree;
+		}
+
+		/**
+		 * Get the maximum in degree.
+		 *
+		 * @return maximum in degree
+		 */
+		public long getMaximumInDegree() {
+			return maximumInDegree;
+		}
+
+		/**
+		 * Get the maximum triangle triplets.
+		 *
+		 * @return maximum triangle triplets
+		 */
+		public long getMaximumTriangleTriplets() {
+			return maximumTriangleTriplets;
+		}
+
+		/**
+		 * Get the maximum rectangle triplets.
+		 *
+		 * @return maximum rectangle triplets
+		 */
+		public long getMaximumRectangleTriplets() {
+			return maximumRectangleTriplets;
+		}
+
+		/**
+		 * Get the maximum triplets.
+		 *
+		 * @return maximum triplets
+		 */
+		public long getMaximumTriplets() {
+			return maximumTriplets;
+		}
+
+		@Override
+		public String toString() {
+			NumberFormat nf = NumberFormat.getInstance();
+
+			return "vertex count: " + nf.format(vertexCount)
+				+ "; edge count: " + nf.format(edgeCount)
+				+ "; triangle triplet count: " + nf.format(triangleTripletCount)
+				+ "; rectangle triplet count: " + nf.format(rectangleTripletCount)
+				+ "; triplet count: " + nf.format(tripletCount)
+				+ "; maximum degree: " + nf.format(maximumDegree)
+				+ "; maximum out degree: " + nf.format(maximumOutDegree)
+				+ "; maximum in degree: " + nf.format(maximumInDegree)
+				+ "; maximum triangle triplets: " + nf.format(maximumTriangleTriplets)
+				+ "; maximum rectangle triplets: " + nf.format(maximumRectangleTriplets)
+				+ "; maximum triplets: " + nf.format(maximumTriplets);
+		}
+
+		@Override
+		public int hashCode() {
+			return new HashCodeBuilder()
+				.append(vertexCount)
+				.append(edgeCount)
+				.append(triangleTripletCount)
+				.append(rectangleTripletCount)
+				.append(tripletCount)
+				.append(maximumDegree)
+				.append(maximumOutDegree)
+				.append(maximumInDegree)
+				.append(maximumTriangleTriplets)
+				.append(maximumRectangleTriplets)
+				.append(maximumTriplets)
+				.hashCode();
+		}
+
+		@Override
+		public boolean equals(Object obj) {
+			if (obj == null) { return false; }
+			if (obj == this) { return true; }
+			if (obj.getClass() != getClass()) { return false; }
+
+			Result rhs = (Result)obj;
+
+			return new EqualsBuilder()
+				.append(vertexCount, rhs.vertexCount)
+				.append(edgeCount, rhs.edgeCount)
+				.append(triangleTripletCount, rhs.triangleTripletCount)
+				.append(rectangleTripletCount, rhs.rectangleTripletCount)
+				.append(tripletCount, rhs.tripletCount)
+				.append(maximumDegree, rhs.maximumDegree)
+				.append(maximumOutDegree, rhs.maximumOutDegree)
+				.append(maximumInDegree, rhs.maximumInDegree)
+				.append(maximumTriangleTriplets, rhs.maximumTriangleTriplets)
+				.append(maximumRectangleTriplets, rhs.maximumRectangleTriplets)
+				.append(maximumTriplets, rhs.maximumTriplets)
+				.isEquals();
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/VertexMetrics.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/VertexMetrics.java b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/VertexMetrics.java
index 434bd28..22f7733 100644
--- a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/VertexMetrics.java
+++ b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/directed/VertexMetrics.java
@@ -22,6 +22,7 @@ import org.apache.commons.lang3.builder.EqualsBuilder;
 import org.apache.commons.lang3.builder.HashCodeBuilder;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.accumulators.LongCounter;
+import org.apache.flink.api.common.accumulators.LongMaximum;
 import org.apache.flink.api.common.io.RichOutputFormat;
 import org.apache.flink.api.java.DataSet;
 import org.apache.flink.configuration.Configuration;
@@ -35,12 +36,19 @@ import org.apache.flink.types.CopyableValue;
 import org.apache.flink.util.AbstractID;
 
 import java.io.IOException;
+import java.text.NumberFormat;
 
 import static org.apache.flink.api.common.ExecutionConfig.PARALLELISM_DEFAULT;
 
 /**
- * Compute the number of vertices, number of edges, and number of triplets in
- * a directed graph.
+ * Compute the following vertex metrics in a directed graph:
+ *  - number of vertices
+ *  - number of edges
+ *  - number of triplets
+ *  - maximum degree
+ *  - maximum out degree
+ *  - maximum in degree
+ *  - maximum number of triplets
  *
  * @param <K> graph ID type
  * @param <VV> vertex value type
@@ -107,8 +115,12 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 		long vertexCount = res.getAccumulatorResult(id + "-0");
 		long edgeCount = res.getAccumulatorResult(id + "-1");
 		long tripletCount = res.getAccumulatorResult(id + "-2");
+		long maximumDegree = res.getAccumulatorResult(id + "-3");
+		long maximumOutDegree = res.getAccumulatorResult(id + "-4");
+		long maximumInDegree = res.getAccumulatorResult(id + "-5");
+		long maximumTriplets = res.getAccumulatorResult(id + "-6");
 
-		return new Result(vertexCount, edgeCount / 2, tripletCount);
+		return new Result(vertexCount, edgeCount, tripletCount, maximumDegree, maximumOutDegree, maximumInDegree, maximumTriplets);
 	}
 
 	/**
@@ -123,13 +135,17 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 		private long vertexCount;
 		private long edgeCount;
 		private long tripletCount;
+		private long maximumDegree;
+		private long maximumOutDegree;
+		private long maximumInDegree;
+		private long maximumTriplets;
 
 		/**
 		 * This helper class collects vertex metrics by scanning over and
 		 * discarding elements from the given DataSet.
 		 *
 		 * The unique id is required because Flink's accumulator namespace is
-		 * among all operators.
+		 * shared among all operators.
 		 *
 		 * @param id unique string used for accumulator names
 		 */
@@ -147,10 +163,16 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 		public void writeRecord(Vertex<T, Degrees> record) throws IOException {
 			long degree = record.f1.getDegree().getValue();
 			long outDegree = record.f1.getOutDegree().getValue();
+			long inDegree = record.f1.getInDegree().getValue();
+			long triplets = degree * (degree - 1) / 2;
 
 			vertexCount++;
 			edgeCount += outDegree;
-			tripletCount += degree * (degree - 1) / 2;
+			tripletCount += triplets;
+			maximumDegree = Math.max(maximumDegree, degree);
+			maximumOutDegree = Math.max(maximumOutDegree, outDegree);
+			maximumInDegree = Math.max(maximumInDegree, inDegree);
+			maximumTriplets = Math.max(maximumTriplets, triplets);
 		}
 
 		@Override
@@ -158,6 +180,10 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 			getRuntimeContext().addAccumulator(id + "-0", new LongCounter(vertexCount));
 			getRuntimeContext().addAccumulator(id + "-1", new LongCounter(edgeCount));
 			getRuntimeContext().addAccumulator(id + "-2", new LongCounter(tripletCount));
+			getRuntimeContext().addAccumulator(id + "-3", new LongMaximum(maximumDegree));
+			getRuntimeContext().addAccumulator(id + "-4", new LongMaximum(maximumOutDegree));
+			getRuntimeContext().addAccumulator(id + "-5", new LongMaximum(maximumInDegree));
+			getRuntimeContext().addAccumulator(id + "-6", new LongMaximum(maximumTriplets));
 		}
 	}
 
@@ -168,11 +194,19 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 		private long vertexCount;
 		private long edgeCount;
 		private long tripletCount;
+		private long maximumDegree;
+		private long maximumOutDegree;
+		private long maximumInDegree;
+		private long maximumTriplets;
 
-		public Result(long vertexCount, long edgeCount, long tripletCount) {
+		public Result(long vertexCount, long edgeCount, long tripletCount, long maximumDegree, long maximumOutDegree, long maximumInDegree, long maximumTriplets) {
 			this.vertexCount = vertexCount;
 			this.edgeCount = edgeCount;
 			this.tripletCount = tripletCount;
+			this.maximumDegree = maximumDegree;
+			this.maximumOutDegree = maximumOutDegree;
+			this.maximumInDegree = maximumInDegree;
+			this.maximumTriplets = maximumTriplets;
 		}
 
 		/**
@@ -202,11 +236,53 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 			return tripletCount;
 		}
 
+		/**
+		 * Get the maximum degree.
+		 *
+		 * @return maximum degree
+		 */
+		public long getMaximumDegree() {
+			return maximumDegree;
+		}
+
+		/**
+		 * Get the maximum out degree.
+		 *
+		 * @return maximum out degree
+		 */
+		public long getMaximumOutDegree() {
+			return maximumOutDegree;
+		}
+
+		/**
+		 * Get the maximum in degree.
+		 *
+		 * @return maximum in degree
+		 */
+		public long getMaximumInDegree() {
+			return maximumInDegree;
+		}
+
+		/**
+		 * Get the maximum triplets.
+		 *
+		 * @return maximum triplets
+		 */
+		public long getMaximumTriplets() {
+			return maximumTriplets;
+		}
+
 		@Override
 		public String toString() {
-			return "vertex count: " + vertexCount
-				+ ", edge count:" + edgeCount
-				+ ", triplet count: " + tripletCount;
+			NumberFormat nf = NumberFormat.getInstance();
+
+			return "vertex count: " + nf.format(vertexCount)
+				+ "; edge count: " + nf.format(edgeCount)
+				+ "; triplet count: " + nf.format(tripletCount)
+				+ "; maximum degree: " + nf.format(maximumDegree)
+				+ "; maximum out degree: " + nf.format(maximumOutDegree)
+				+ "; maximum in degree: " + nf.format(maximumInDegree)
+				+ "; maximum triplets: " + nf.format(maximumTriplets);
 		}
 
 		@Override
@@ -215,6 +291,10 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 				.append(vertexCount)
 				.append(edgeCount)
 				.append(tripletCount)
+				.append(maximumDegree)
+				.append(maximumOutDegree)
+				.append(maximumInDegree)
+				.append(maximumTriplets)
 				.hashCode();
 		}
 
@@ -230,6 +310,10 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 				.append(vertexCount, rhs.vertexCount)
 				.append(edgeCount, rhs.edgeCount)
 				.append(tripletCount, rhs.tripletCount)
+				.append(maximumDegree, rhs.maximumDegree)
+				.append(maximumOutDegree, rhs.maximumOutDegree)
+				.append(maximumInDegree, rhs.maximumInDegree)
+				.append(maximumTriplets, rhs.maximumTriplets)
 				.isEquals();
 		}
 	}
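
A sketch of consuming the expanded result (getters as added above; the graph
and environment are assumed to be constructed as in the examples module):

    VertexMetrics<LongValue, NullValue, NullValue> vertexMetrics =
        new VertexMetrics<>();
    graph.run(vertexMetrics);

    env.execute("Vertex Metrics");

    VertexMetrics.Result result = vertexMetrics.getResult();
    System.out.println("maximum degree: " + result.getMaximumDegree());
    System.out.println("maximum triplets: " + result.getMaximumTriplets());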

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/EdgeMetrics.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/EdgeMetrics.java b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/EdgeMetrics.java
new file mode 100644
index 0000000..1d5b664
--- /dev/null
+++ b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/EdgeMetrics.java
@@ -0,0 +1,445 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library.metric.undirected;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.accumulators.LongCounter;
+import org.apache.flink.api.common.accumulators.LongMaximum;
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.io.RichOutputFormat;
+import org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.graph.AbstractGraphAnalytic;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.asm.degree.annotate.undirected.EdgeDegreePair;
+import org.apache.flink.graph.library.metric.undirected.EdgeMetrics.Result;
+import org.apache.flink.types.CopyableValue;
+import org.apache.flink.types.LongValue;
+import org.apache.flink.util.AbstractID;
+
+import java.io.IOException;
+import java.text.NumberFormat;
+
+import static org.apache.flink.api.common.ExecutionConfig.PARALLELISM_DEFAULT;
+
+/**
+ * Compute the following edge metrics in an undirected graph:
+ *  - number of vertices
+ *  - number of edges
+ *  - number of triangle triplets
+ *  - number of rectangle triplets
+ *  - number of triplets
+ *  - maximum degree
+ *  - maximum number of triangle triplets
+ *  - maximum number of rectangle triplets
+ *  - maximum number of triplets
+ *
+ * @param <K> graph ID type
+ * @param <VV> vertex value type
+ * @param <EV> edge value type
+ */
+public class EdgeMetrics<K extends Comparable<K> & CopyableValue<K>, VV, EV>
+extends AbstractGraphAnalytic<K, VV, EV, Result> {
+
+	private String id = new AbstractID().toString();
+
+	// Optional configuration
+	private boolean reduceOnTargetId = false;
+
+	private int parallelism = PARALLELISM_DEFAULT;
+
+	/**
+	 * The degree can be counted from either the edge source or target IDs.
+	 * By default the source IDs are counted. Reducing on target IDs may
+	 * optimize the algorithm if the input edge list is sorted by target ID.
+	 *
+	 * @param reduceOnTargetId set to {@code true} if the input edge list
+	 *                         is sorted by target ID
+	 * @return this
+	 */
+	public EdgeMetrics<K, VV, EV> setReduceOnTargetId(boolean reduceOnTargetId) {
+		this.reduceOnTargetId = reduceOnTargetId;
+
+		return this;
+	}
+
+	/**
+	 * Override the operator parallelism.
+	 *
+	 * @param parallelism operator parallelism
+	 * @return this
+	 */
+	public EdgeMetrics<K, VV, EV> setParallelism(int parallelism) {
+		this.parallelism = parallelism;
+
+		return this;
+	}
+
+	/*
+	 * Implementation notes:
+	 *
+	 * Use aggregator to replace SumEdgeStats when aggregators are rewritten to use
+	 *   a hash-combineable hashed-reduce.
+	 */
+
+	@Override
+	public EdgeMetrics<K, VV, EV> run(Graph<K, VV, EV> input)
+			throws Exception {
+		super.run(input);
+
+		// s, t, (d(s), d(t))
+		DataSet<Edge<K, Tuple3<EV, LongValue, LongValue>>> edgeDegreePair = input
+			.run(new EdgeDegreePair<K, VV, EV>()
+				.setReduceOnTargetId(reduceOnTargetId)
+				.setParallelism(parallelism));
+
+		// s, d(s), count of (u, v) where deg(u) < deg(v) or (deg(u) == deg(v) and u < v)
+		DataSet<Tuple3<K, LongValue, LongValue>> edgeStats = edgeDegreePair
+			.map(new EdgeStats<K, EV>())
+				.setParallelism(parallelism)
+				.name("Edge stats")
+			.groupBy(0)
+			.reduce(new SumEdgeStats<K>())
+				.setCombineHint(CombineHint.HASH)
+				.setParallelism(parallelism)
+				.name("Sum edge stats");
+
+		edgeStats
+			.output(new EdgeMetricsHelper<K, EV>(id))
+				.setParallelism(parallelism)
+				.name("Edge metrics");
+
+		return this;
+	}
+
+	@Override
+	public Result getResult() {
+		JobExecutionResult res = env.getLastJobExecutionResult();
+
+		long vertexCount = res.getAccumulatorResult(id + "-0");
+		long edgeCount = res.getAccumulatorResult(id + "-1");
+		long triangleTripletCount = res.getAccumulatorResult(id + "-2");
+		long rectangleTripletCount = res.getAccumulatorResult(id + "-3");
+		long tripletCount = res.getAccumulatorResult(id + "-4");
+		long maximumDegree = res.getAccumulatorResult(id + "-5");
+		long maximumTriangleTriplets = res.getAccumulatorResult(id + "-6");
+		long maximumRectangleTriplets = res.getAccumulatorResult(id + "-7");
+		long maximumTriplets = res.getAccumulatorResult(id + "-8");
+
+		return new Result(vertexCount, edgeCount / 2, triangleTripletCount, rectangleTripletCount, tripletCount,
+			maximumDegree, maximumTriangleTriplets, maximumRectangleTriplets, maximumTriplets);
+	}
+
+	/**
+	 * Evaluates each edge and emits a tuple containing the source vertex ID,
+	 * the source vertex degree, and a value of zero or one indicating the
+	 * low-order count. The low-order count is one if the source vertex degree
+	 * is less than the target vertex degree or if the degrees are equal and
+	 * the source vertex ID compares lower than the target vertex ID; otherwise
+	 * the low-order count is zero.
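+	 *
+	 * For example, an edge (u, v) with deg(u) = deg(v) = 2 and u < v is
+	 * mapped to (u, deg(u), 1); ties on degree are broken by vertex ID.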
+	 *
+	 * @param <T> ID type
+	 * @param <ET> edge value type
+	 */
+	@FunctionAnnotation.ForwardedFields("0; 2.1->1")
+	private static class EdgeStats<T extends Comparable<T>, ET>
+	implements MapFunction<Edge<T, Tuple3<ET, LongValue, LongValue>>, Tuple3<T, LongValue, LongValue>> {
+		private LongValue zero = new LongValue(0);
+
+		private LongValue one = new LongValue(1);
+
+		private Tuple3<T, LongValue, LongValue> output = new Tuple3<>();
+
+		@Override
+		public Tuple3<T, LongValue, LongValue> map(Edge<T, Tuple3<ET, LongValue, LongValue>> edge)
+				throws Exception {
+			Tuple3<ET, LongValue, LongValue> degrees = edge.f2;
+
+			output.f0 = edge.f0;
+			output.f1 = degrees.f1;
+
+			long sourceDegree = degrees.f1.getValue();
+			long targetDegree = degrees.f2.getValue();
+
+			if (sourceDegree < targetDegree ||
+					(sourceDegree == targetDegree && edge.f0.compareTo(edge.f1) < 0)) {
+				output.f2 = one;
+			} else {
+				output.f2 = zero;
+			}
+
+			return output;
+		}
+	}
+
+	/**
+	 * Sums the low-order counts.
+	 *
+	 * @param <T> ID type
+	 */
+	private static class SumEdgeStats<T>
+	implements ReduceFunction<Tuple3<T, LongValue, LongValue>> {
+		@Override
+		public Tuple3<T, LongValue, LongValue> reduce(Tuple3<T, LongValue, LongValue> value1, Tuple3<T, LongValue, LongValue> value2)
+				throws Exception {
+			value1.f2.setValue(value1.f2.getValue() + value2.f2.getValue());
+			return value1;
+		}
+	}
+
+	/**
+	 * Helper class to collect edge metrics.
+	 *
+	 * @param <T> ID type
+	 */
+	private static class EdgeMetricsHelper<T extends Comparable<T>, ET>
+	extends RichOutputFormat<Tuple3<T, LongValue, LongValue>> {
+		private final String id;
+
+		private long vertexCount;
+		private long edgeCount;
+		private long triangleTripletCount;
+		private long rectangleTripletCount;
+		private long tripletCount;
+		private long maximumDegree;
+		private long maximumTriangleTriplets;
+		private long maximumRectangleTriplets;
+		private long maximumTriplets;
+
+		/**
+		 * This helper class collects edge metrics by scanning over and
+		 * discarding elements from the given DataSet.
+		 *
+		 * The unique id is required because Flink's accumulator namespace is
+		 * shared among all operators.
+		 *
+		 * @param id unique string used for accumulator names
+		 */
+		public EdgeMetricsHelper(String id) {
+			this.id = id;
+		}
+
+		@Override
+		public void configure(Configuration parameters) {}
+
+		@Override
+		public void open(int taskNumber, int numTasks) throws IOException {}
+
+		@Override
+		public void writeRecord(Tuple3<T, LongValue, LongValue> record) throws IOException {
+			long degree = record.f1.getValue();
+			long lowDegree = record.f2.getValue();
+			long highDegree = degree - lowDegree;
+
+			long triangleTriplets = lowDegree * (lowDegree - 1) / 2;
+			long rectangleTriplets = triangleTriplets + lowDegree * highDegree;
+			long triplets = degree * (degree - 1) / 2;
+
+			vertexCount++;
+		includeZeroDegreeVertices.mergeWith(rhs.includeZeroDegreeVertices);
+
+			triangleTripletCount += triangleTriplets;
+			rectangleTripletCount += rectangleTriplets;
+			tripletCount += triplets;
+			maximumDegree = Math.max(maximumDegree, degree);
+			maximumTriangleTriplets = Math.max(maximumTriangleTriplets, triangleTriplets);
+			maximumRectangleTriplets = Math.max(maximumRectangleTriplets, rectangleTriplets);
+			maximumTriplets = Math.max(maximumTriplets, triplets);
+		}
+
+		@Override
+		public void close() throws IOException {
+			getRuntimeContext().addAccumulator(id + "-0", new LongCounter(vertexCount));
+			getRuntimeContext().addAccumulator(id + "-1", new LongCounter(edgeCount));
+			getRuntimeContext().addAccumulator(id + "-2", new LongCounter(triangleTripletCount));
+			getRuntimeContext().addAccumulator(id + "-3", new LongCounter(rectangleTripletCount));
+			getRuntimeContext().addAccumulator(id + "-4", new LongCounter(tripletCount));
+			getRuntimeContext().addAccumulator(id + "-5", new LongMaximum(maximumDegree));
+			getRuntimeContext().addAccumulator(id + "-6", new LongMaximum(maximumTriangleTriplets));
+			getRuntimeContext().addAccumulator(id + "-7", new LongMaximum(maximumRectangleTriplets));
+			getRuntimeContext().addAccumulator(id + "-8", new LongMaximum(maximumTriplets));
+		}
+	}
+
+	/**
+	 * Wraps edge metrics.
+	 */
+	public static class Result {
+		private long vertexCount;
+		private long edgeCount;
+		private long triangleTripletCount;
+		private long rectangleTripletCount;
+		private long tripletCount;
+		private long maximumDegree;
+		private long maximumTriangleTriplets;
+		private long maximumRectangleTriplets;
+		private long maximumTriplets;
+
+		public Result(long vertexCount, long edgeCount, long triangleTripletCount, long rectangleTripletCount, long tripletCount,
+				long maximumDegree, long maximumTriangleTriplets, long maximumRectangleTriplets, long maximumTriplets) {
+			this.vertexCount = vertexCount;
+			this.edgeCount = edgeCount;
+			this.triangleTripletCount = triangleTripletCount;
+			this.rectangleTripletCount = rectangleTripletCount;
+			this.tripletCount = tripletCount;
+			this.maximumDegree = maximumDegree;
+			this.maximumTriangleTriplets = maximumTriangleTriplets;
+			this.maximumRectangleTriplets = maximumRectangleTriplets;
+			this.maximumTriplets = maximumTriplets;
+		}
+
+		/**
+		 * Get the number of vertices.
+		 *
+		 * @return number of vertices
+		 */
+		public long getNumberOfVertices() {
+			return vertexCount;
+		}
+
+		/**
+		 * Get the number of edges.
+		 *
+		 * @return number of edges
+		 */
+		public long getNumberOfEdges() {
+			return edgeCount;
+		}
+
+		/**
+		 * Get the number of triangle triplets.
+		 *
+		 * @return number of triangle triplets
+		 */
+		public long getNumberOfTriangleTriplets() {
+			return triangleTripletCount;
+		}
+
+		/**
+		 * Get the number of rectangle triplets.
+		 *
+		 * @return number of rectangle triplets
+		 */
+		public long getNumberOfRectangleTriplets() {
+			return rectangleTripletCount;
+		}
+
+		/**
+		 * Get the number of triplets.
+		 *
+		 * @return number of triplets
+		 */
+		public long getNumberOfTriplets() {
+			return tripletCount;
+		}
+
+		/**
+		 * Get the maximum degree.
+		 *
+		 * @return maximum degree
+		 */
+		public long getMaximumDegree() {
+			return maximumDegree;
+		}
+
+		/**
+		 * Get the maximum triangle triplets.
+		 *
+		 * @return maximum triangle triplets
+		 */
+		public long getMaximumTriangleTriplets() {
+			return maximumTriangleTriplets;
+		}
+
+		/**
+		 * Get the maximum rectangle triplets.
+		 *
+		 * @return maximum rectangle triplets
+		 */
+		public long getMaximumRectangleTriplets() {
+			return maximumRectangleTriplets;
+		}
+
+		/**
+		 * Get the maximum triplets.
+		 *
+		 * @return maximum triplets
+		 */
+		public long getMaximumTriplets() {
+			return maximumTriplets;
+		}
+
+		@Override
+		public String toString() {
+			NumberFormat nf = NumberFormat.getInstance();
+
+			return "vertex count: " + nf.format(vertexCount)
+				+ "; edge count: " + nf.format(edgeCount)
+				+ "; triangle triplet count: " + nf.format(triangleTripletCount)
+				+ "; rectangle triplet count: " + nf.format(rectangleTripletCount)
+				+ "; triplet count: " + nf.format(tripletCount)
+				+ "; maximum degree: " + nf.format(maximumDegree)
+				+ "; maximum triangle triplets: " + nf.format(maximumTriangleTriplets)
+				+ "; maximum rectangle triplets: " + nf.format(maximumRectangleTriplets)
+				+ "; maximum triplets: " + nf.format(maximumTriplets);
+		}
+
+		@Override
+		public int hashCode() {
+			return new HashCodeBuilder()
+				.append(vertexCount)
+				.append(edgeCount)
+				.append(triangleTripletCount)
+				.append(rectangleTripletCount)
+				.append(tripletCount)
+				.append(maximumDegree)
+				.append(maximumTriangleTriplets)
+				.append(maximumRectangleTriplets)
+				.append(maximumTriplets)
+				.hashCode();
+		}
+
+		@Override
+		public boolean equals(Object obj) {
+			if (obj == null) { return false; }
+			if (obj == this) { return true; }
+			if (obj.getClass() != getClass()) { return false; }
+
+			Result rhs = (Result)obj;
+
+			return new EqualsBuilder()
+				.append(vertexCount, rhs.vertexCount)
+				.append(edgeCount, rhs.edgeCount)
+				.append(triangleTripletCount, rhs.triangleTripletCount)
+				.append(rectangleTripletCount, rhs.rectangleTripletCount)
+				.append(tripletCount, rhs.tripletCount)
+				.append(maximumDegree, rhs.maximumDegree)
+				.append(maximumTriangleTriplets, rhs.maximumTriangleTriplets)
+				.append(maximumRectangleTriplets, rhs.maximumRectangleTriplets)
+				.append(maximumTriplets, rhs.maximumTriplets)
+				.isEquals();
+		}
+	}
+}

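For reference, the triplet bookkeeping in writeRecord above is plain combinatorics
on the split of each vertex's degree into low-order and high-order neighbors. The
following standalone sketch (values chosen arbitrarily; not part of the Flink
sources) illustrates the formulas:

// Standalone sketch of the triplet arithmetic used in writeRecord above.
public class TripletArithmetic {
	public static void main(String[] args) {
		long degree = 5;     // total number of neighbors
		long lowDegree = 2;  // neighbors ordered before this vertex

		long highDegree = degree - lowDegree;
		long triangleTriplets = lowDegree * (lowDegree - 1) / 2;             // C(lowDegree, 2)
		long rectangleTriplets = triangleTriplets + lowDegree * highDegree;
		long triplets = degree * (degree - 1) / 2;                           // C(degree, 2)

		System.out.println(triangleTriplets);   // 1
		System.out.println(rectangleTriplets);  // 7
		System.out.println(triplets);           // 10
	}
}
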
http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/VertexMetrics.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/VertexMetrics.java b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/VertexMetrics.java
index 3c26e43..d04fa7b 100644
--- a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/VertexMetrics.java
+++ b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/metric/undirected/VertexMetrics.java
@@ -22,6 +22,7 @@ import org.apache.commons.lang3.builder.EqualsBuilder;
 import org.apache.commons.lang3.builder.HashCodeBuilder;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.accumulators.LongCounter;
+import org.apache.flink.api.common.accumulators.LongMaximum;
 import org.apache.flink.api.common.io.RichOutputFormat;
 import org.apache.flink.api.java.DataSet;
 import org.apache.flink.configuration.Configuration;
@@ -35,12 +36,17 @@ import org.apache.flink.types.LongValue;
 import org.apache.flink.util.AbstractID;
 
 import java.io.IOException;
+import java.text.NumberFormat;
 
 import static org.apache.flink.api.common.ExecutionConfig.PARALLELISM_DEFAULT;
 
 /**
- * Compute the number of vertices, number of edges, and number of triplets in
- * an undirected graph.
+ * Compute the following vertex metrics in an undirected graph:
+ *  - number of vertices
+ *  - number of edges
+ *  - number of triplets
+ *  - maximum degree
+ *  - maximum number of triplets
  *
  * @param <K> graph ID type
  * @param <VV> vertex value type
@@ -125,8 +131,10 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 		long vertexCount = res.getAccumulatorResult(id + "-0");
 		long edgeCount = res.getAccumulatorResult(id + "-1");
 		long tripletCount = res.getAccumulatorResult(id + "-2");
+		long maximumDegree = res.getAccumulatorResult(id + "-3");
+		long maximumTriplets = res.getAccumulatorResult(id + "-4");
 
-		return new Result(vertexCount, edgeCount / 2, tripletCount);
+		return new Result(vertexCount, edgeCount / 2, tripletCount, maximumDegree, maximumTriplets);
 	}
 
 	/**
@@ -141,13 +149,15 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 		private long vertexCount;
 		private long edgeCount;
 		private long tripletCount;
+		private long maximumDegree;
+		private long maximumTriplets;
 
 		/**
 		 * This helper class collects vertex metrics by scanning over and
 		 * discarding elements from the given DataSet.
 		 *
 		 * The unique id is required because Flink's accumulator namespace is
-		 * among all operators.
+		 * shared among all operators.
 		 *
 		 * @param id unique string used for accumulator names
 		 */
@@ -164,10 +174,13 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 		@Override
 		public void writeRecord(Vertex<T, LongValue> record) throws IOException {
 			long degree = record.f1.getValue();
+			long triplets = degree * (degree - 1) / 2;
 
 			vertexCount++;
 			edgeCount += degree;
-			tripletCount += degree * (degree - 1) / 2;
+			tripletCount += triplets;
+			maximumDegree = Math.max(maximumDegree, degree);
+			maximumTriplets = Math.max(maximumTriplets, triplets);
 		}
 
 		@Override
@@ -175,6 +188,8 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 			getRuntimeContext().addAccumulator(id + "-0", new LongCounter(vertexCount));
 			getRuntimeContext().addAccumulator(id + "-1", new LongCounter(edgeCount));
 			getRuntimeContext().addAccumulator(id + "-2", new LongCounter(tripletCount));
+			getRuntimeContext().addAccumulator(id + "-3", new LongMaximum(maximumDegree));
+			getRuntimeContext().addAccumulator(id + "-4", new LongMaximum(maximumTriplets));
 		}
 	}
 
@@ -185,11 +200,15 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 		private long vertexCount;
 		private long edgeCount;
 		private long tripletCount;
+		private long maximumDegree;
+		private long maximumTriplets;
 
-		public Result(long vertexCount, long edgeCount, long tripletCount) {
+		public Result(long vertexCount, long edgeCount, long tripletCount, long maximumDegree, long maximumTriplets) {
 			this.vertexCount = vertexCount;
 			this.edgeCount = edgeCount;
 			this.tripletCount = tripletCount;
+			this.maximumDegree = maximumDegree;
+			this.maximumTriplets = maximumTriplets;
 		}
 
 		/**
@@ -219,11 +238,33 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 			return tripletCount;
 		}
 
+		/**
+		 * Get the maximum degree.
+		 *
+		 * @return maximum degree
+		 */
+		public long getMaximumDegree() {
+			return maximumDegree;
+		}
+
+		/**
+		 * Get the maximum triplets.
+		 *
+		 * @return maximum triplets
+		 */
+		public long getMaximumTriplets() {
+			return maximumTriplets;
+		}
+
 		@Override
 		public String toString() {
-			return "vertex count: " + vertexCount
-					+ ", edge count:" + edgeCount
-					+ ", triplet count: " + tripletCount;
+			NumberFormat nf = NumberFormat.getInstance();
+
+			return "vertex count: " + nf.format(vertexCount)
+				+ "; edge count: " + nf.format(edgeCount)
+				+ "; triplet count: " + nf.format(tripletCount)
+				+ "; maximum degree: " + nf.format(maximumDegree)
+				+ "; maximum triplets: " + nf.format(maximumTriplets);
 		}
 
 		@Override
@@ -232,6 +273,8 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 				.append(vertexCount)
 				.append(edgeCount)
 				.append(tripletCount)
+				.append(maximumDegree)
+				.append(maximumTriplets)
 				.hashCode();
 		}
 
@@ -247,6 +290,8 @@ extends AbstractGraphAnalytic<K, VV, EV, Result> {
 				.append(vertexCount, rhs.vertexCount)
 				.append(edgeCount, rhs.edgeCount)
 				.append(tripletCount, rhs.tripletCount)
+				.append(maximumDegree, rhs.maximumDegree)
+				.append(maximumTriplets, rhs.maximumTriplets)
 				.isEquals();
 		}
 	}

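The tests that follow exercise these analytics through the same run()/execute()
chain. As a usage sketch (the graph variable and its setup are assumed here, not
taken from the Flink sources):

// Usage sketch: run the analytic on an existing Gelly graph and read the result.
// "graph" is an assumed Graph<LongValue, NullValue, NullValue> built elsewhere.
VertexMetrics.Result result = new VertexMetrics<LongValue, NullValue, NullValue>()
	.run(graph)     // attach the analytic's data flow to the graph
	.execute();     // trigger the job and read the accumulator values

System.out.println(result.getMaximumDegree());
System.out.println(result);  // "vertex count: ...; edge count: ...; ..."
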
http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/EdgeMetricsTest.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/EdgeMetricsTest.java b/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/EdgeMetricsTest.java
new file mode 100644
index 0000000..af5a154
--- /dev/null
+++ b/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/EdgeMetricsTest.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library.metric.directed;
+
+import org.apache.commons.math3.util.CombinatoricsUtils;
+import org.apache.flink.graph.asm.AsmTestBase;
+import org.apache.flink.graph.library.metric.directed.EdgeMetrics.Result;
+import org.apache.flink.types.IntValue;
+import org.apache.flink.types.LongValue;
+import org.apache.flink.types.NullValue;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+public class EdgeMetricsTest
+extends AsmTestBase {
+
+	@Test
+	public void testWithSimpleGraph()
+			throws Exception {
+		Result expectedResult = new Result(6, 7, 2, 6, 13, 4, 2, 3, 1, 3, 6);
+
+		Result edgeMetrics = new EdgeMetrics<IntValue, NullValue, NullValue>()
+			.run(directedSimpleGraph)
+			.execute();
+
+		assertEquals(expectedResult, edgeMetrics);
+	}
+
+	@Test
+	public void testWithCompleteGraph()
+			throws Exception {
+		long expectedDegree = completeGraphVertexCount - 1;
+		long expectedEdges = completeGraphVertexCount * expectedDegree;
+		long expectedMaximumTriplets = CombinatoricsUtils.binomialCoefficient((int)expectedDegree, 2);
+		long expectedTriplets = completeGraphVertexCount * expectedMaximumTriplets;
+
+		Result expectedResult = new Result(completeGraphVertexCount, expectedEdges, expectedTriplets / 3, 2 * expectedTriplets / 3, expectedTriplets,
+			expectedDegree, expectedDegree, expectedDegree,
+			expectedMaximumTriplets, expectedMaximumTriplets, expectedMaximumTriplets);
+
+		Result edgeMetrics = new EdgeMetrics<LongValue, NullValue, NullValue>()
+			.run(completeGraph)
+			.execute();
+
+		assertEquals(expectedResult, edgeMetrics);
+	}
+
+	@Test
+	public void testWithEmptyGraph()
+			throws Exception {
+		Result expectedResult;
+
+		expectedResult = new Result(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
+
+		Result withoutZeroDegreeVertices = new EdgeMetrics<LongValue, NullValue, NullValue>()
+			.run(emptyGraph)
+			.execute();
+
+		assertEquals(expectedResult, withoutZeroDegreeVertices);
+	}
+
+	@Test
+	public void testWithRMatGraph()
+			throws Exception {
+		Result expectedResult = new Result(902, 12009, 107817, 315537, 1003442, 463, 334, 342, 820, 3822, 106953);
+
+		Result withoutZeroDegreeVertices = new EdgeMetrics<LongValue, NullValue, NullValue>()
+			.run(directedRMatGraph)
+			.execute();
+
+		assertEquals(expectedResult, withoutZeroDegreeVertices);
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/VertexMetricsTest.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/VertexMetricsTest.java b/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/VertexMetricsTest.java
index 8145abd..e4362c0 100644
--- a/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/VertexMetricsTest.java
+++ b/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/directed/VertexMetricsTest.java
@@ -34,10 +34,10 @@ extends AsmTestBase {
 	@Test
 	public void testWithSimpleGraph()
 			throws Exception {
-		Result expectedResult = new Result(6, 7, 13);
+		Result expectedResult = new Result(6, 7, 13, 4, 2, 3, 6);
 
 		Result vertexMetrics = new VertexMetrics<IntValue, NullValue, NullValue>()
-			.run(undirectedSimpleGraph)
+			.run(directedSimpleGraph)
 			.execute();
 
 		assertEquals(expectedResult, vertexMetrics);
@@ -47,10 +47,11 @@ extends AsmTestBase {
 	public void testWithCompleteGraph()
 			throws Exception {
 		long expectedDegree = completeGraphVertexCount - 1;
-		long expectedEdges = completeGraphVertexCount * expectedDegree / 2;
-		long expectedTriplets = completeGraphVertexCount * CombinatoricsUtils.binomialCoefficient((int)expectedDegree, 2);
+		long expectedEdges = completeGraphVertexCount * expectedDegree;
+		long expectedMaximumTriplets = CombinatoricsUtils.binomialCoefficient((int)expectedDegree, 2);
+		long expectedTriplets = completeGraphVertexCount * expectedMaximumTriplets;
 
-		Result expectedResult = new Result(completeGraphVertexCount, expectedEdges, expectedTriplets);
+		Result expectedResult = new Result(completeGraphVertexCount, expectedEdges, expectedTriplets, expectedDegree, expectedDegree, expectedDegree, expectedMaximumTriplets);
 
 		Result vertexMetrics = new VertexMetrics<LongValue, NullValue, NullValue>()
 			.run(completeGraph)
@@ -64,7 +65,7 @@ extends AsmTestBase {
 			throws Exception {
 		Result expectedResult;
 
-		expectedResult = new Result(0, 0, 0);
+		expectedResult = new Result(0, 0, 0, 0, 0, 0, 0);
 
 		Result withoutZeroDegreeVertices = new VertexMetrics<LongValue, NullValue, NullValue>()
 			.setIncludeZeroDegreeVertices(false)
@@ -73,7 +74,7 @@ extends AsmTestBase {
 
 		assertEquals(withoutZeroDegreeVertices, expectedResult);
 
-		expectedResult = new Result(3, 0, 0);
+		expectedResult = new Result(3, 0, 0, 0, 0, 0, 0);
 
 		Result withZeroDegreeVertices = new VertexMetrics<LongValue, NullValue, NullValue>()
 			.setIncludeZeroDegreeVertices(true)
@@ -86,10 +87,10 @@ extends AsmTestBase {
 	@Test
 	public void testWithRMatGraph()
 			throws Exception {
-		Result expectedResult = new Result(902, 10442, 1003442);
+		Result expectedResult = new Result(902, 12009, 1003442, 463, 334, 342, 106953);
 
 		Result withoutZeroDegreeVertices = new VertexMetrics<LongValue, NullValue, NullValue>()
-			.run(undirectedRMatGraph)
+			.run(directedRMatGraph)
 			.execute();
 
 		assertEquals(expectedResult, withoutZeroDegreeVertices);

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/EdgeMetricsTest.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/EdgeMetricsTest.java b/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/EdgeMetricsTest.java
new file mode 100644
index 0000000..b300d66
--- /dev/null
+++ b/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/EdgeMetricsTest.java
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library.metric.undirected;
+
+import org.apache.commons.math3.util.CombinatoricsUtils;
+import org.apache.flink.graph.asm.AsmTestBase;
+import org.apache.flink.graph.library.metric.undirected.EdgeMetrics.Result;
+import org.apache.flink.types.IntValue;
+import org.apache.flink.types.LongValue;
+import org.apache.flink.types.NullValue;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+public class EdgeMetricsTest
+extends AsmTestBase {
+
+	@Test
+	public void testWithSimpleGraph()
+			throws Exception {
+		Result expectedResult = new Result(6, 7, 2, 6, 13, 4, 1, 3, 6);
+
+		Result edgeMetrics = new EdgeMetrics<IntValue, NullValue, NullValue>()
+			.run(undirectedSimpleGraph)
+			.execute();
+
+		assertEquals(expectedResult, edgeMetrics);
+	}
+
+	@Test
+	public void testWithCompleteGraph()
+			throws Exception {
+		long expectedDegree = completeGraphVertexCount - 1;
+		long expectedEdges = completeGraphVertexCount * expectedDegree / 2;
+		long expectedMaximumTriplets = CombinatoricsUtils.binomialCoefficient((int)expectedDegree, 2);
+		long expectedTriplets = completeGraphVertexCount * expectedMaximumTriplets;
+
+		Result expectedResult = new Result(completeGraphVertexCount, expectedEdges, expectedTriplets / 3, 2 * expectedTriplets / 3, expectedTriplets,
+			expectedDegree, expectedMaximumTriplets, expectedMaximumTriplets, expectedMaximumTriplets);
+
+		Result edgeMetrics = new EdgeMetrics<LongValue, NullValue, NullValue>()
+			.run(completeGraph)
+			.execute();
+
+		assertEquals(expectedResult, edgeMetrics);
+	}
+
+	@Test
+	public void testWithEmptyGraph()
+			throws Exception {
+		Result expectedResult;
+
+		expectedResult = new Result(0, 0, 0, 0, 0, 0, 0, 0, 0);
+
+		Result withoutZeroDegreeVertices = new EdgeMetrics<LongValue, NullValue, NullValue>()
+			.run(emptyGraph)
+			.execute();
+
+		assertEquals(expectedResult, withoutZeroDegreeVertices);
+	}
+
+	@Test
+	public void testWithRMatGraph()
+			throws Exception {
+		Result expectedResult = new Result(902, 10442, 107817, 315537, 1003442, 463, 820, 3822, 106953);
+
+		Result withoutZeroDegreeVertices = new EdgeMetrics<LongValue, NullValue, NullValue>()
+			.run(undirectedRMatGraph)
+			.execute();
+
+		assertEquals(expectedResult, withoutZeroDegreeVertices);
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/58850f29/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/VertexMetricsTest.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/VertexMetricsTest.java b/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/VertexMetricsTest.java
index a36ca94..8f7e1da 100644
--- a/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/VertexMetricsTest.java
+++ b/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/metric/undirected/VertexMetricsTest.java
@@ -34,7 +34,7 @@ extends AsmTestBase {
 	@Test
 	public void testWithSimpleGraph()
 			throws Exception {
-		Result expectedResult = new Result(6, 7, 13);
+		Result expectedResult = new Result(6, 7, 13, 4, 6);
 
 		Result vertexMetrics = new VertexMetrics<IntValue, NullValue, NullValue>()
 			.run(undirectedSimpleGraph)
@@ -48,9 +48,11 @@ extends AsmTestBase {
 			throws Exception {
 		long expectedDegree = completeGraphVertexCount - 1;
 		long expectedEdges = completeGraphVertexCount * expectedDegree / 2;
-		long expectedTriplets = completeGraphVertexCount * CombinatoricsUtils.binomialCoefficient((int)expectedDegree, 2);
+		long expectedMaximumTriplets = CombinatoricsUtils.binomialCoefficient((int)expectedDegree, 2);
+		long expectedTriplets = completeGraphVertexCount * expectedMaximumTriplets;
 
-		Result expectedResult = new Result(completeGraphVertexCount, expectedEdges, expectedTriplets);
+		Result expectedResult = new Result(completeGraphVertexCount, expectedEdges, expectedTriplets,
+			expectedDegree, expectedMaximumTriplets);
 
 		Result vertexMetrics = new VertexMetrics<LongValue, NullValue, NullValue>()
 			.run(completeGraph)
@@ -64,7 +66,7 @@ extends AsmTestBase {
 			throws Exception {
 		Result expectedResult;
 
-		expectedResult = new Result(0, 0, 0);
+		expectedResult = new Result(0, 0, 0, 0, 0);
 
 		Result withoutZeroDegreeVertices = new VertexMetrics<LongValue, NullValue, NullValue>()
 			.setIncludeZeroDegreeVertices(false)
@@ -73,7 +75,7 @@ extends AsmTestBase {
 
 		assertEquals(withoutZeroDegreeVertices, expectedResult);
 
-		expectedResult = new Result(3, 0, 0);
+		expectedResult = new Result(3, 0, 0, 0, 0);
 
 		Result withZeroDegreeVertices = new VertexMetrics<LongValue, NullValue, NullValue>()
 			.setIncludeZeroDegreeVertices(true)
@@ -86,7 +88,7 @@ extends AsmTestBase {
 	@Test
 	public void testWithRMatGraph()
 			throws Exception {
-		Result expectedResult = new Result(902, 10442, 1003442);
+		Result expectedResult = new Result(902, 10442, 1003442, 463, 106953);
 
 		Result withoutZeroDegreeVertices = new VertexMetrics<LongValue, NullValue, NullValue>()
 			.run(undirectedRMatGraph)


[80/89] [abbrv] flink git commit: [FLINK-4383] [rpc] Eagerly serialize remote rpc invocation messages

Posted by se...@apache.org.
[FLINK-4383] [rpc] Eagerly serialize remote rpc invocation messages

This PR introduces eager serialization for remote rpc invocation messages.
That way it is possible to check whether the message is serializable and
whether it exceeds the maximum allowed akka frame size. If either of these
constraints is violated, a proper exception is thrown instead of the
exception being silently swallowed, as Akka would do.

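The fail-fast idea itself is independent of Akka. A self-contained sketch (the
class and method names below are illustrative, not the committed API):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative sketch: serialize a message eagerly and reject it with a
// descriptive exception before handing it to the transport layer.
public final class EagerSerializationCheck {

	static byte[] serializeOrFail(Serializable message, long maxFrameSize) throws IOException {
		ByteArrayOutputStream bos = new ByteArrayOutputStream();
		try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
			oos.writeObject(message);  // throws NotSerializableException for bad fields
		}
		byte[] bytes = bos.toByteArray();
		if (bytes.length > maxFrameSize) {
			throw new IOException("Message of " + bytes.length +
				" bytes exceeds the maximum frame size of " + maxFrameSize + " bytes.");
		}
		return bytes;
	}

	public static void main(String[] args) throws IOException {
		System.out.println(serializeOrFail("hello", 32000).length + " bytes, ok");
	}
}
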
Address PR comments

This closes #2365.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/946ea09d
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/946ea09d
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/946ea09d

Branch: refs/heads/flip-6
Commit: 946ea09d9e54cbcac1294e5221629412b196310f
Parents: a2f3f31
Author: Till Rohrmann <tr...@apache.org>
Authored: Fri Aug 12 10:32:30 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:03 2016 +0200

----------------------------------------------------------------------
 .../flink/runtime/rpc/akka/AkkaGateway.java     |   2 +-
 .../runtime/rpc/akka/AkkaInvocationHandler.java |  83 ++++++--
 .../flink/runtime/rpc/akka/AkkaRpcActor.java    |  26 ++-
 .../flink/runtime/rpc/akka/AkkaRpcService.java  |  20 +-
 .../rpc/akka/messages/LocalRpcInvocation.java   |  54 +++++
 .../rpc/akka/messages/RemoteRpcInvocation.java  | 206 ++++++++++++++++++
 .../rpc/akka/messages/RpcInvocation.java        | 106 +++-------
 .../runtime/rpc/akka/AkkaRpcServiceTest.java    |   2 +-
 .../rpc/akka/MessageSerializationTest.java      | 210 +++++++++++++++++++
 9 files changed, 597 insertions(+), 112 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
index ec3091c..f6125dc 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaGateway.java
@@ -26,5 +26,5 @@ import org.apache.flink.runtime.rpc.RpcGateway;
  */
 interface AkkaGateway extends RpcGateway {
 
-	ActorRef getRpcServer();
+	ActorRef getRpcEndpoint();
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
index 580b161..297104b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
@@ -25,13 +25,17 @@ import org.apache.flink.api.java.tuple.Tuple2;
 import org.apache.flink.runtime.rpc.MainThreadExecutor;
 import org.apache.flink.runtime.rpc.RpcTimeout;
 import org.apache.flink.runtime.rpc.akka.messages.CallAsync;
+import org.apache.flink.runtime.rpc.akka.messages.LocalRpcInvocation;
+import org.apache.flink.runtime.rpc.akka.messages.RemoteRpcInvocation;
 import org.apache.flink.runtime.rpc.akka.messages.RpcInvocation;
 import org.apache.flink.runtime.rpc.akka.messages.RunAsync;
 import org.apache.flink.util.Preconditions;
+import org.apache.log4j.Logger;
 import scala.concurrent.Await;
 import scala.concurrent.Future;
 import scala.concurrent.duration.FiniteDuration;
 
+import java.io.IOException;
 import java.lang.annotation.Annotation;
 import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.Method;
@@ -42,19 +46,28 @@ import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.apache.flink.util.Preconditions.checkArgument;
 
 /**
- * Invocation handler to be used with a {@link AkkaRpcActor}. The invocation handler wraps the
- * rpc in a {@link RpcInvocation} message and then sends it to the {@link AkkaRpcActor} where it is
+ * Invocation handler to be used with an {@link AkkaRpcActor}. The invocation handler wraps the
+ * rpc in a {@link LocalRpcInvocation} message and then sends it to the {@link AkkaRpcActor} where it is
  * executed.
  */
 class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThreadExecutor {
-	private final ActorRef rpcServer;
+	private static final Logger LOG = Logger.getLogger(AkkaInvocationHandler.class);
+
+	private final ActorRef rpcEndpoint;
+
+	// whether the actor ref is local and thus no message serialization is needed
+	private final boolean isLocal;
 
 	// default timeout for asks
 	private final Timeout timeout;
 
-	AkkaInvocationHandler(ActorRef rpcServer, Timeout timeout) {
-		this.rpcServer = Preconditions.checkNotNull(rpcServer);
+	private final long maximumFramesize;
+
+	AkkaInvocationHandler(ActorRef rpcEndpoint, Timeout timeout, long maximumFramesize) {
+		this.rpcEndpoint = Preconditions.checkNotNull(rpcEndpoint);
+		this.isLocal = this.rpcEndpoint.path().address().hasLocalScope();
 		this.timeout = Preconditions.checkNotNull(timeout);
+		this.maximumFramesize = maximumFramesize;
 	}
 
 	@Override
@@ -76,23 +89,43 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 				parameterAnnotations,
 				args);
 
-			RpcInvocation rpcInvocation = new RpcInvocation(
-				methodName,
-				filteredArguments.f0,
-				filteredArguments.f1);
+			RpcInvocation rpcInvocation;
+
+			if (isLocal) {
+				rpcInvocation = new LocalRpcInvocation(
+					methodName,
+					filteredArguments.f0,
+					filteredArguments.f1);
+			} else {
+				try {
+					RemoteRpcInvocation remoteRpcInvocation = new RemoteRpcInvocation(
+						methodName,
+						filteredArguments.f0,
+						filteredArguments.f1);
+
+					if (remoteRpcInvocation.getSize() > maximumFramesize) {
+						throw new IOException("The rpc invocation size exceeds the maximum akka framesize.");
+					} else {
+						rpcInvocation = remoteRpcInvocation;
+					}
+				} catch (IOException e) {
+					LOG.warn("Could not create remote rpc invocation message. Failing rpc invocation because...", e);
+					throw e;
+				}
+			}
 
 			Class<?> returnType = method.getReturnType();
 
 			if (returnType.equals(Void.TYPE)) {
-				rpcServer.tell(rpcInvocation, ActorRef.noSender());
+				rpcEndpoint.tell(rpcInvocation, ActorRef.noSender());
 
 				result = null;
 			} else if (returnType.equals(Future.class)) {
 				// execute an asynchronous call
-				result = Patterns.ask(rpcServer, rpcInvocation, futureTimeout);
+				result = Patterns.ask(rpcEndpoint, rpcInvocation, futureTimeout);
 			} else {
 				// execute a synchronous call
-				Future<?> futureResult = Patterns.ask(rpcServer, rpcInvocation, futureTimeout);
+				Future<?> futureResult = Patterns.ask(rpcEndpoint, rpcInvocation, futureTimeout);
 				FiniteDuration duration = timeout.duration();
 
 				result = Await.result(futureResult, duration);
@@ -103,8 +136,8 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 	}
 
 	@Override
-	public ActorRef getRpcServer() {
-		return rpcServer;
+	public ActorRef getRpcEndpoint() {
+		return rpcEndpoint;
 	}
 
 	@Override
@@ -117,19 +150,25 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 		checkNotNull(runnable, "runnable");
 		checkArgument(delay >= 0, "delay must be zero or greater");
 		
-		// Unfortunately I couldn't find a way to allow only local communication. Therefore, the
-		// runnable field is transient transient
-		rpcServer.tell(new RunAsync(runnable, delay), ActorRef.noSender());
+		if (isLocal) {
+			rpcEndpoint.tell(new RunAsync(runnable, delay), ActorRef.noSender());
+		} else {
+			throw new RuntimeException("Trying to send a Runnable to a remote actor at " +
+				rpcEndpoint.path() + ". This is not supported.");
+		}
 	}
 
 	@Override
 	public <V> Future<V> callAsync(Callable<V> callable, Timeout callTimeout) {
-		// Unfortunately I couldn't find a way to allow only local communication. Therefore, the
-		// callable field is declared transient
-		@SuppressWarnings("unchecked")
-		Future<V> result = (Future<V>) Patterns.ask(rpcServer, new CallAsync(callable), callTimeout);
+		if (isLocal) {
+			@SuppressWarnings("unchecked")
+			Future<V> result = (Future<V>) Patterns.ask(rpcEndpoint, new CallAsync(callable), callTimeout);
 
-		return result;
+			return result;
+		} else {
+			throw new RuntimeException("Trying to send a Callable to a remote actor at " +
+				rpcEndpoint.path() + ". This is not supported.");
+		}
 	}
 
 	/**

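For context on the gateway mechanism: the handler above sits behind a JDK dynamic
proxy, so every call on the gateway interface funnels through invoke(), which is
where the call is packed into an rpc invocation message. A minimal sketch of that
pattern (illustrative names, not the Flink types):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Minimal dynamic-proxy sketch: interface calls are routed to an InvocationHandler.
public class ProxySketch {

	interface Gateway {
		String greet(String name);
	}

	public static void main(String[] args) {
		InvocationHandler handler = new InvocationHandler() {
			@Override
			public Object invoke(Object proxy, Method method, Object[] arguments) {
				// Here the method name, parameter types and arguments would be
				// packed into a Local- or RemoteRpcInvocation message.
				return method.getName() + "(" + arguments[0] + ")";
			}
		};

		Gateway gateway = (Gateway) Proxy.newProxyInstance(
			Gateway.class.getClassLoader(), new Class<?>[]{Gateway.class}, handler);

		System.out.println(gateway.greet("flink"));  // prints: greet(flink)
	}
}
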
http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
index 5e0a7da..dfcbcc3 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
@@ -26,6 +26,7 @@ import org.apache.flink.runtime.rpc.MainThreadValidatorUtil;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.akka.messages.CallAsync;
+import org.apache.flink.runtime.rpc.akka.messages.LocalRpcInvocation;
 import org.apache.flink.runtime.rpc.akka.messages.RpcInvocation;
 import org.apache.flink.runtime.rpc.akka.messages.RunAsync;
 
@@ -35,6 +36,7 @@ import org.slf4j.LoggerFactory;
 import scala.concurrent.Future;
 import scala.concurrent.duration.FiniteDuration;
 
+import java.io.IOException;
 import java.lang.reflect.Method;
 import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
@@ -42,10 +44,10 @@ import java.util.concurrent.TimeUnit;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * Akka rpc actor which receives {@link RpcInvocation}, {@link RunAsync} and {@link CallAsync}
+ * Akka rpc actor which receives {@link LocalRpcInvocation}, {@link RunAsync} and {@link CallAsync}
  * messages.
  * <p>
- * The {@link RpcInvocation} designates a rpc and is dispatched to the given {@link RpcEndpoint}
+ * The {@link LocalRpcInvocation} designates a rpc and is dispatched to the given {@link RpcEndpoint}
  * instance.
  * <p>
  * The {@link RunAsync} and {@link CallAsync} messages contain executable code which is executed
@@ -95,15 +97,12 @@ class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends Untyp
 	 * @param rpcInvocation Rpc invocation message
 	 */
 	private void handleRpcInvocation(RpcInvocation rpcInvocation) {
-		Method rpcMethod = null;
-
 		try {
-			rpcMethod = lookupRpcMethod(rpcInvocation.getMethodName(), rpcInvocation.getParameterTypes());
-		} catch (final NoSuchMethodException e) {
-			LOG.error("Could not find rpc method for rpc invocation: {}.", rpcInvocation, e);
-		}
+			String methodName = rpcInvocation.getMethodName();
+			Class<?>[] parameterTypes = rpcInvocation.getParameterTypes();
+
+			Method rpcMethod = lookupRpcMethod(methodName, parameterTypes);
 
-		if (rpcMethod != null) {
 			if (rpcMethod.getReturnType().equals(Void.TYPE)) {
 				// No return value to send back
 				try {
@@ -127,6 +126,12 @@ class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends Untyp
 					getSender().tell(new Status.Failure(e), getSelf());
 				}
 			}
+		} catch (ClassNotFoundException e) {
+			LOG.error("Could not load method arguments.", e);
+		} catch (IOException e) {
+			LOG.error("Could not deserialize rpc invocation message.", e);
+		} catch (final NoSuchMethodException e) {
+			LOG.error("Could not find rpc method for rpc invocation: {}.", rpcInvocation, e);
 		}
 	}
 
@@ -195,7 +200,8 @@ class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends Untyp
 	 * @param methodName Name of the method
 	 * @param parameterTypes Parameter types of the method
 	 * @return Method of the rpc endpoint
-	 * @throws NoSuchMethodException
+	 * @throws NoSuchMethodException Thrown if the method with the given name and parameter types
+	 * 									cannot be found at the rpc endpoint
 	 */
 	private Method lookupRpcMethod(final String methodName, final Class<?>[] parameterTypes) throws NoSuchMethodException {
 		return rpcEndpoint.getClass().getMethod(methodName, parameterTypes);

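The dispatch in handleRpcInvocation is plain Java reflection: resolve the method by
name and parameter types, then invoke it on the endpoint. A self-contained sketch
(the target class is illustrative, not the Flink endpoint):

import java.lang.reflect.Method;

// Reflection-dispatch sketch mirroring lookupRpcMethod/handleRpcInvocation.
public class ReflectiveDispatch {

	public static class Endpoint {
		public int add(int a, int b) {
			return a + b;
		}
	}

	public static void main(String[] args) throws Exception {
		Endpoint endpoint = new Endpoint();
		String methodName = "add";
		Class<?>[] parameterTypes = {int.class, int.class};

		Method rpcMethod = endpoint.getClass().getMethod(methodName, parameterTypes);
		Object result = rpcMethod.invoke(endpoint, 40, 2);  // autoboxed varargs call

		System.out.println(result);  // 42
	}
}
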
http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
index db40f10..b963c53 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -58,17 +58,27 @@ public class AkkaRpcService implements RpcService {
 
 	private static final Logger LOG = LoggerFactory.getLogger(AkkaRpcService.class);
 
+	static final String MAXIMUM_FRAME_SIZE_PATH = "akka.remote.netty.tcp.maximum-frame-size";
+
 	private final Object lock = new Object();
 
 	private final ActorSystem actorSystem;
 	private final Timeout timeout;
 	private final Set<ActorRef> actors = new HashSet<>(4);
+	private final long maximumFramesize;
 
 	private volatile boolean stopped;
 
 	public AkkaRpcService(final ActorSystem actorSystem, final Timeout timeout) {
 		this.actorSystem = checkNotNull(actorSystem, "actor system");
 		this.timeout = checkNotNull(timeout, "timeout");
+
+		if (actorSystem.settings().config().hasPath(MAXIMUM_FRAME_SIZE_PATH)) {
+			maximumFramesize = actorSystem.settings().config().getBytes(MAXIMUM_FRAME_SIZE_PATH);
+		} else {
+			// only local communication
+			maximumFramesize = Long.MAX_VALUE;
+		}
 	}
 
 	// this method does not mutate state and is thus thread-safe
@@ -88,7 +98,7 @@ public class AkkaRpcService implements RpcService {
 			public C apply(Object obj) {
 				ActorRef actorRef = ((ActorIdentity) obj).getRef();
 
-				InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout);
+				InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout, maximumFramesize);
 
 				@SuppressWarnings("unchecked")
 				C proxy = (C) Proxy.newProxyInstance(
@@ -116,7 +126,7 @@ public class AkkaRpcService implements RpcService {
 
 		LOG.info("Starting RPC endpoint for {} at {} .", rpcEndpoint.getClass().getName(), actorRef.path());
 
-		InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout);
+		InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout, maximumFramesize);
 
 		// Rather than using the System ClassLoader directly, we derive the ClassLoader
 		// from this class . That works better in cases where Flink runs embedded and all Flink
@@ -142,12 +152,12 @@ public class AkkaRpcService implements RpcService {
 				if (stopped) {
 					return;
 				} else {
-					fromThisService = actors.remove(akkaClient.getRpcServer());
+					fromThisService = actors.remove(akkaClient.getRpcEndpoint());
 				}
 			}
 
 			if (fromThisService) {
-				ActorRef selfActorRef = akkaClient.getRpcServer();
+				ActorRef selfActorRef = akkaClient.getRpcEndpoint();
 				LOG.info("Stopping RPC endpoint {}.", selfActorRef.path());
 				selfActorRef.tell(PoisonPill.getInstance(), ActorRef.noSender());
 			} else {
@@ -178,7 +188,7 @@ public class AkkaRpcService implements RpcService {
 		checkState(!stopped, "RpcService is stopped");
 
 		if (selfGateway instanceof AkkaGateway) {
-			ActorRef actorRef = ((AkkaGateway) selfGateway).getRpcServer();
+			ActorRef actorRef = ((AkkaGateway) selfGateway).getRpcEndpoint();
 			return AkkaUtils.getAkkaURL(actorSystem, actorRef);
 		} else {
 			String className = AkkaGateway.class.getName();

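The frame-size lookup above relies on Typesafe Config's byte-unit parsing. A tiny
standalone demo (the config string is hypothetical; the fallback mirrors the
local-communication case):

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Demo of Typesafe Config byte parsing as used for the Akka frame size.
public class FrameSizeDemo {
	public static void main(String[] args) {
		String path = "akka.remote.netty.tcp.maximum-frame-size";
		Config config = ConfigFactory.parseString(path + " = 32000b");

		long maximumFramesize = config.hasPath(path)
			? config.getBytes(path)  // parses "32000b" into a byte count
			: Long.MAX_VALUE;        // no remoting configured: only local communication

		System.out.println(maximumFramesize);  // 32000
	}
}
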
http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/LocalRpcInvocation.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/LocalRpcInvocation.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/LocalRpcInvocation.java
new file mode 100644
index 0000000..97c10d7
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/LocalRpcInvocation.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.util.Preconditions;
+
+/**
+ * Local rpc invocation message containing the remote procedure name, its parameter types and the
+ * corresponding call arguments. This message will only be sent if the communication is local and,
+ * thus, the message does not have to be serialized.
+ */
+public final class LocalRpcInvocation implements RpcInvocation {
+
+	private final String methodName;
+	private final Class<?>[] parameterTypes;
+	private final Object[] args;
+
+	public LocalRpcInvocation(String methodName, Class<?>[] parameterTypes, Object[] args) {
+		this.methodName = Preconditions.checkNotNull(methodName);
+		this.parameterTypes = Preconditions.checkNotNull(parameterTypes);
+		this.args = args;
+	}
+
+	@Override
+	public String getMethodName() {
+		return methodName;
+	}
+
+	@Override
+	public Class<?>[] getParameterTypes() {
+		return parameterTypes;
+	}
+
+	@Override
+	public Object[] getArgs() {
+		return args;
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RemoteRpcInvocation.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RemoteRpcInvocation.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RemoteRpcInvocation.java
new file mode 100644
index 0000000..bc26a29
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RemoteRpcInvocation.java
@@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+import org.apache.flink.util.Preconditions;
+import org.apache.flink.util.SerializedValue;
+
+import java.io.IOException;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutputStream;
+import java.io.Serializable;
+
+/**
+ * Remote rpc invocation message which is used when the actor communication is remote and, thus, the
+ * message has to be serialized.
+ * <p>
+ * In order to fail fast and report an appropriate error message to the user, the method name, the
+ * parameter types and the arguments are eagerly serialized. If the invocation call
+ * contains a non-serializable object, an {@link IOException} is thrown.
+ */
+public class RemoteRpcInvocation implements RpcInvocation, Serializable {
+	private static final long serialVersionUID = 6179354390913843809L;
+
+	// Serialized invocation data
+	private SerializedValue<RemoteRpcInvocation.MethodInvocation> serializedMethodInvocation;
+
+	// Transient field which is lazily initialized upon first access to the invocation data
+	private transient RemoteRpcInvocation.MethodInvocation methodInvocation;
+
+	public RemoteRpcInvocation(
+		final String methodName,
+		final Class<?>[] parameterTypes,
+		final Object[] args) throws IOException {
+
+		serializedMethodInvocation = new SerializedValue<>(new RemoteRpcInvocation.MethodInvocation(methodName, parameterTypes, args));
+		methodInvocation = null;
+	}
+
+	@Override
+	public String getMethodName() throws IOException, ClassNotFoundException {
+		deserializeMethodInvocation();
+
+		return methodInvocation.getMethodName();
+	}
+
+	@Override
+	public Class<?>[] getParameterTypes() throws IOException, ClassNotFoundException {
+		deserializeMethodInvocation();
+
+		return methodInvocation.getParameterTypes();
+	}
+
+	@Override
+	public Object[] getArgs() throws IOException, ClassNotFoundException {
+		deserializeMethodInvocation();
+
+		return methodInvocation.getArgs();
+	}
+
+	/**
+	 * Size (#bytes of the serialized data) of the rpc invocation message.
+	 *
+	 * @return Size of the remote rpc invocation message
+	 */
+	public long getSize() {
+		return serializedMethodInvocation.getByteArray().length;
+	}
+
+	private void deserializeMethodInvocation() throws IOException, ClassNotFoundException {
+		if (methodInvocation == null) {
+			methodInvocation = serializedMethodInvocation.deserializeValue(ClassLoader.getSystemClassLoader());
+		}
+	}
+
+	// -------------------------------------------------------------------
+	// Serialization methods
+	// -------------------------------------------------------------------
+
+	private void writeObject(ObjectOutputStream oos) throws IOException {
+		oos.writeObject(serializedMethodInvocation);
+	}
+
+	@SuppressWarnings("unchecked")
+	private void readObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
+		serializedMethodInvocation = (SerializedValue<RemoteRpcInvocation.MethodInvocation>) ois.readObject();
+		methodInvocation = null;
+	}
+
+	// -------------------------------------------------------------------
+	// Utility classes
+	// -------------------------------------------------------------------
+
+	/**
+	 * Wrapper class for the method invocation information.
+	 */
+	private static final class MethodInvocation implements Serializable {
+		private static final long serialVersionUID = 9187962608946082519L;
+
+		private String methodName;
+		private Class<?>[] parameterTypes;
+		private Object[] args;
+
+		private MethodInvocation(final String methodName, final Class<?>[] parameterTypes, final Object[] args) {
+			this.methodName = methodName;
+			this.parameterTypes = Preconditions.checkNotNull(parameterTypes);
+			this.args = args;
+		}
+
+		String getMethodName() {
+			return methodName;
+		}
+
+		Class<?>[] getParameterTypes() {
+			return parameterTypes;
+		}
+
+		Object[] getArgs() {
+			return args;
+		}
+
+		private void writeObject(ObjectOutputStream oos) throws IOException {
+			oos.writeUTF(methodName);
+
+			oos.writeInt(parameterTypes.length);
+
+			for (Class<?> parameterType : parameterTypes) {
+				oos.writeObject(parameterType);
+			}
+
+			if (args != null) {
+				oos.writeBoolean(true);
+
+				for (int i = 0; i < args.length; i++) {
+					try {
+						oos.writeObject(args[i]);
+					} catch (IOException e) {
+						throw new IOException("Could not serialize " + i + "th argument of method " +
+							methodName + ". This indicates that the argument type " +
+							args[i].getClass().getName() + " is not serializable. Arguments have to " +
+							"be serializable for remote rpc calls.", e);
+					}
+				}
+			} else {
+				oos.writeBoolean(false);
+			}
+		}
+
+		private void readObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
+			methodName = ois.readUTF();
+
+			int length = ois.readInt();
+
+			parameterTypes = new Class<?>[length];
+
+			for (int i = 0; i < length; i++) {
+				try {
+					parameterTypes[i] = (Class<?>) ois.readObject();
+				} catch (IOException e) {
+					throw new IOException("Could not deserialize " + i + "th parameter type of method " +
+						methodName + '.', e);
+				} catch (ClassNotFoundException e) {
+					throw new ClassNotFoundException("Could not deserialize " + i + "th " +
+						"parameter type of method " + methodName + ". This indicates that the parameter " +
+						"type is not part of the system class loader.", e);
+				}
+			}
+
+			boolean hasArgs = ois.readBoolean();
+
+			if (hasArgs) {
+				args = new Object[length];
+
+				for (int i = 0; i < length; i++) {
+					try {
+						args[i] = ois.readObject();
+					} catch (IOException e) {
+						throw new IOException("Could not deserialize " + i + "th argument of method " +
+							methodName + '.', e);
+					} catch (ClassNotFoundException e) {
+						throw new ClassNotFoundException("Could not deserialize " + i + "th " +
+							"argument of method " + methodName + ". This indicates that the argument " +
+							"type is not part of the system class loader.", e);
+					}
+				}
+			} else {
+				args = null;
+			}
+		}
+	}
+}

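The serialize-once, deserialize-on-first-access pattern above is independent of the
rpc machinery. A generic sketch using plain JDK serialization in place of Flink's
SerializedValue (all names here are illustrative):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Generic sketch of lazy deserialization: ship the payload as bytes and
// materialize it only when it is first accessed on the receiving side.
public class LazyValue<T extends Serializable> implements Serializable {
	private static final long serialVersionUID = 1L;

	private final byte[] serialized;  // always transferred over the wire
	private transient T value;        // rebuilt lazily on first access

	public LazyValue(T value) throws IOException {
		ByteArrayOutputStream bos = new ByteArrayOutputStream();
		try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
			oos.writeObject(value);
		}
		this.serialized = bos.toByteArray();
	}

	@SuppressWarnings("unchecked")
	public T get() throws IOException, ClassNotFoundException {
		if (value == null) {
			try (ObjectInputStream ois =
					new ObjectInputStream(new ByteArrayInputStream(serialized))) {
				value = (T) ois.readObject();
			}
		}
		return value;
	}

	public long size() {
		return serialized.length;
	}

	public static void main(String[] args) throws Exception {
		LazyValue<String> lazy = new LazyValue<>("payload");
		System.out.println(lazy.size() + " bytes, value = " + lazy.get());
	}
}
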
http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java
index 5d52ef1..b174c99 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RpcInvocation.java
@@ -18,81 +18,41 @@
 
 package org.apache.flink.runtime.rpc.akka.messages;
 
-import org.apache.flink.util.Preconditions;
-
 import java.io.IOException;
-import java.io.ObjectInputStream;
-import java.io.ObjectOutputStream;
-import java.io.Serializable;
 
 /**
- * Rpc invocation message containing the remote procedure name, its parameter types and the
- * corresponding call arguments.
+ * Interface for rpc invocation messages. The interface allows requesting all information
+ * necessary to look up a method and call it with the corresponding arguments.
  */
-public final class RpcInvocation implements Serializable {
-	private static final long serialVersionUID = -7058254033460536037L;
-
-	private final String methodName;
-	private final Class<?>[] parameterTypes;
-	private transient Object[] args;
-
-	public RpcInvocation(String methodName, Class<?>[] parameterTypes, Object[] args) {
-		this.methodName = Preconditions.checkNotNull(methodName);
-		this.parameterTypes = Preconditions.checkNotNull(parameterTypes);
-		this.args = args;
-	}
-
-	public String getMethodName() {
-		return methodName;
-	}
-
-	public Class<?>[] getParameterTypes() {
-		return parameterTypes;
-	}
-
-	public Object[] getArgs() {
-		return args;
-	}
-
-	private void writeObject(ObjectOutputStream oos) throws IOException {
-		oos.defaultWriteObject();
-
-		if (args != null) {
-			// write has args true
-			oos.writeBoolean(true);
-
-			for (int i = 0; i < args.length; i++) {
-				try {
-					oos.writeObject(args[i]);
-				} catch (IOException e) {
-					Class<?> argClass = args[i].getClass();
-
-					throw new IOException("Could not write " + i + "th argument of method " +
-						methodName + ". The argument type is " + argClass + ". " +
-						"Make sure that this type is serializable.", e);
-				}
-			}
-		} else {
-			// write has args false
-			oos.writeBoolean(false);
-		}
-	}
-
-	private void readObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
-		ois.defaultReadObject();
-
-		boolean hasArgs = ois.readBoolean();
-
-		if (hasArgs) {
-			int numberArguments = parameterTypes.length;
-
-			args = new Object[numberArguments];
-
-			for (int i = 0; i < numberArguments; i++) {
-				args[i] = ois.readObject();
-			}
-		} else {
-			args = null;
-		}
-	}
+public interface RpcInvocation {
+
+	/**
+	 * Returns the method's name.
+	 *
+	 * @return Method name
+	 * @throws IOException if the rpc invocation message is a remote message and could not be deserialized
+	 * @throws ClassNotFoundException if the rpc invocation message is a remote message and contains
+	 * serialized classes which cannot be found on the receiving side
+	 */
+	String getMethodName() throws IOException, ClassNotFoundException;
+
+	/**
+	 * Returns the method's parameter types.
+	 *
+	 * @return Method's parameter types
+	 * @throws IOException if the rpc invocation message is a remote message and could not be deserialized
+	 * @throws ClassNotFoundException if the rpc invocation message is a remote message and contains
+	 * serialized classes which cannot be found on the receiving side
+	 */
+	Class<?>[] getParameterTypes() throws IOException, ClassNotFoundException;
+
+	/**
+	 * Returns the arguments of the remote procedure call.
+	 *
+	 * @return Arguments of the remote procedure call
+	 * @throws IOException if the rpc invocation message is a remote message and could not be deserialized
+	 * @throws ClassNotFoundException if the rpc invocation message is a remote message and contains
+	 * serialized classes which cannot be found on the receiving side
+	 */
+	Object[] getArgs() throws IOException, ClassNotFoundException;
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
index 5e37e10..f26b40b 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
@@ -64,7 +64,7 @@ public class AkkaRpcServiceTest extends TestLogger {
 		AkkaGateway akkaClient = (AkkaGateway) rm;
 
 		
-		jobMaster.registerAtResourceManager(AkkaUtils.getAkkaURL(actorSystem, akkaClient.getRpcServer()));
+		jobMaster.registerAtResourceManager(AkkaUtils.getAkkaURL(actorSystem, akkaClient.getRpcEndpoint()));
 
 		// wait for successful registration
 		FiniteDuration timeout = new FiniteDuration(200, TimeUnit.SECONDS);

http://git-wip-us.apache.org/repos/asf/flink/blob/946ea09d/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java
new file mode 100644
index 0000000..ca8179c
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java
@@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.ActorSystem;
+import akka.util.Timeout;
+import com.typesafe.config.Config;
+import com.typesafe.config.ConfigValueFactory;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.RpcMethod;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.util.TestLogger;
+import org.hamcrest.core.Is;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import scala.concurrent.Await;
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.io.IOException;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertThat;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests that Akka RPC invocation messages are properly serialized and that errors are reported.
+ */
+public class MessageSerializationTest extends TestLogger {
+	private static ActorSystem actorSystem1;
+	private static ActorSystem actorSystem2;
+	private static AkkaRpcService akkaRpcService1;
+	private static AkkaRpcService akkaRpcService2;
+
+	private static final FiniteDuration timeout = new FiniteDuration(10L, TimeUnit.SECONDS);
+	private static final int maxFrameSize = 32000;
+
+	@BeforeClass
+	public static void setup() {
+		Config akkaConfig = AkkaUtils.getDefaultAkkaConfig();
+		Config modifiedAkkaConfig = akkaConfig.withValue(AkkaRpcService.MAXIMUM_FRAME_SIZE_PATH, ConfigValueFactory.fromAnyRef(maxFrameSize + "b"));
+
+		actorSystem1 = AkkaUtils.createActorSystem(modifiedAkkaConfig);
+		actorSystem2 = AkkaUtils.createActorSystem(modifiedAkkaConfig);
+
+		akkaRpcService1 = new AkkaRpcService(actorSystem1, new Timeout(timeout));
+		akkaRpcService2 = new AkkaRpcService(actorSystem2, new Timeout(timeout));
+	}
+
+	@AfterClass
+	public static void teardown() {
+		akkaRpcService1.stopService();
+		akkaRpcService2.stopService();
+
+		actorSystem1.shutdown();
+		actorSystem2.shutdown();
+
+		actorSystem1.awaitTermination();
+		actorSystem2.awaitTermination();
+	}
+
+	/**
+	 * Tests that a local rpc call with a non-serializable argument can be executed.
+	 */
+	@Test
+	public void testNonSerializableLocalMessageTransfer() throws InterruptedException, IOException {
+		LinkedBlockingQueue<Object> linkedBlockingQueue = new LinkedBlockingQueue<>();
+		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService1, linkedBlockingQueue);
+
+		TestGateway testGateway = testEndpoint.getSelf();
+
+		NonSerializableObject expected = new NonSerializableObject(42);
+
+		testGateway.foobar(expected);
+
+		assertThat(linkedBlockingQueue.take(), Is.<Object>is(expected));
+	}
+
+	/**
+	 * Tests that a remote rpc call with a non-serializable argument fails with an
+	 * {@link IOException} (or an {@link java.lang.reflect.UndeclaredThrowableException} if
+	 * the method declaration does not include the {@link IOException} as throwable).
+	 */
+	@Test(expected = IOException.class)
+	public void testNonSerializableRemoteMessageTransfer() throws Exception {
+		LinkedBlockingQueue<Object> linkedBlockingQueue = new LinkedBlockingQueue<>();
+
+		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService1, linkedBlockingQueue);
+
+		String address = testEndpoint.getAddress();
+
+		Future<TestGateway> remoteGatewayFuture = akkaRpcService2.connect(address, TestGateway.class);
+
+		TestGateway remoteGateway = Await.result(remoteGatewayFuture, timeout);
+
+		remoteGateway.foobar(new Object());
+
+		fail("Should have failed because Object is not serializable.");
+	}
+
+	/**
+	 * Tests that a remote rpc call with a serializable argument can be successfully executed.
+	 */
+	@Test
+	public void testSerializableRemoteMessageTransfer() throws Exception {
+		LinkedBlockingQueue<Object> linkedBlockingQueue = new LinkedBlockingQueue<>();
+
+		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService1, linkedBlockingQueue);
+
+		String address = testEndpoint.getAddress();
+
+		Future<TestGateway> remoteGatewayFuture = akkaRpcService2.connect(address, TestGateway.class);
+
+		TestGateway remoteGateway = Await.result(remoteGatewayFuture, timeout);
+
+		int expected = 42;
+
+		remoteGateway.foobar(expected);
+
+		assertThat(linkedBlockingQueue.take(), Is.<Object>is(expected));
+	}
+
+	/**
+	 * Tests that a message which exceeds the maximum frame size is detected and a corresponding
+	 * exception is thrown.
+	 */
+	@Test(expected = IOException.class)
+	public void testMaximumFramesizeRemoteMessageTransfer() throws Exception {
+		LinkedBlockingQueue<Object> linkedBlockingQueue = new LinkedBlockingQueue<>();
+
+		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService1, linkedBlockingQueue);
+
+		String address = testEndpoint.getAddress();
+
+		Future<TestGateway> remoteGatewayFuture = akkaRpcService2.connect(address, TestGateway.class);
+
+		TestGateway remoteGateway = Await.result(remoteGatewayFuture, timeout);
+
+		int bufferSize = maxFrameSize + 1;
+		byte[] buffer = new byte[bufferSize];
+
+		remoteGateway.foobar(buffer);
+
+		fail("Should have failed due to exceeding the maximum frame size.");
+	}
+
+	private interface TestGateway extends RpcGateway {
+		void foobar(Object object) throws IOException, InterruptedException;
+	}
+
+	private static class TestEndpoint extends RpcEndpoint<TestGateway> {
+
+		private final LinkedBlockingQueue<Object> queue;
+
+		protected TestEndpoint(RpcService rpcService, LinkedBlockingQueue<Object> queue) {
+			super(rpcService);
+			this.queue = queue;
+		}
+
+		@RpcMethod
+		public void foobar(Object object) throws InterruptedException {
+			queue.put(object);
+		}
+	}
+
+	private static class NonSerializableObject {
+		private final Object object = new Object();
+		private final int value;
+
+		NonSerializableObject(int value) {
+			this.value = value;
+		}
+
+		@Override
+		public boolean equals(Object obj) {
+			if (obj instanceof NonSerializableObject) {
+				NonSerializableObject nonSerializableObject = (NonSerializableObject) obj;
+
+				return value == nonSerializableObject.value;
+			} else {
+				return false;
+			}
+		}
+
+		@Override
+		public int hashCode() {
+			return value * 41;
+		}
+	}
+}

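The frame-size test above relies on the transport rejecting messages whose
serialized form exceeds the configured maximum. As a rough sketch of the check
being exercised (plain Java serialization here; the actual enforcement happens
in the Akka transport layer):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    // Hypothetical guard: serialize an RPC argument and enforce an upper
    // bound on the payload size, mirroring what the tests expect to fail.
    final class FrameSizeGuard {
        static byte[] serializeChecked(Object arg, int maxFrameSize) throws IOException {
            if (!(arg instanceof Serializable)) {
                throw new IOException("Argument of type " + arg.getClass().getName()
                        + " is not serializable.");
            }
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(arg);
            }
            byte[] payload = bos.toByteArray();
            if (payload.length > maxFrameSize) {
                throw new IOException("Serialized message of " + payload.length
                        + " bytes exceeds the maximum frame size of " + maxFrameSize + " bytes.");
            }
            return payload;
        }
    }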

[56/89] [abbrv] flink git commit: [FLINK-4253] [config] Clean up renaming of 'recovery.mode'

Posted by se...@apache.org.
[FLINK-4253] [config] Clean up renaming of 'recovery.mode'

- Renamed config keys and default values to be consistent
- Updated default flink-conf.yaml
- Cleaned up code occurrences of recovery mode

This closes #2342.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/58165d69
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/58165d69
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/58165d69

Branch: refs/heads/flip-6
Commit: 58165d69fb4cd49018031f14bfaf1e17039b36f7
Parents: 01ffe34
Author: Ufuk Celebi <uc...@apache.org>
Authored: Mon Aug 22 14:59:14 2016 +0200
Committer: Ufuk Celebi <uc...@apache.org>
Committed: Wed Aug 24 12:09:31 2016 +0200

----------------------------------------------------------------------
 docs/setup/config.md                            |  34 ++--
 docs/setup/jobmanager_high_availability.md      |  16 +-
 .../flink/configuration/ConfigConstants.java    | 157 +++++++++--------
 flink-dist/src/main/flink-bin/bin/config.sh     |  33 ++--
 .../src/main/flink-bin/bin/start-cluster.sh     |   2 +-
 .../src/main/flink-bin/bin/stop-cluster.sh      |   2 +-
 flink-dist/src/main/resources/flink-conf.yaml   |  16 +-
 .../webmonitor/WebRuntimeMonitorITCase.java     |   4 +-
 .../apache/flink/runtime/blob/BlobServer.java   |  11 +-
 .../apache/flink/runtime/blob/BlobUtils.java    |   3 +-
 .../flink/runtime/blob/FileSystemBlobStore.java |  20 ++-
 .../StandaloneCheckpointIDCounter.java          |   3 +-
 .../jobmanager/HighAvailabilityMode.java        |  40 ++---
 .../runtime/util/LeaderRetrievalUtils.java      |   2 +-
 .../flink/runtime/util/ZooKeeperUtils.java      | 175 +++++++++----------
 .../flink/runtime/jobmanager/JobManager.scala   |  17 +-
 .../runtime/minicluster/FlinkMiniCluster.scala  |  14 +-
 .../flink/runtime/blob/BlobRecoveryITCase.java  |   4 +-
 .../client/JobClientActorRecoveryITCase.java    |   3 +-
 .../BlobLibraryCacheRecoveryITCase.java         |   4 +-
 .../jobmanager/HighAvailabilityModeTest.java    |  71 ++++++++
 .../jobmanager/JobManagerHARecoveryTest.java    |   4 +-
 .../JobManagerLeaderElectionTest.java           |   4 +-
 .../ZooKeeperLeaderElectionTest.java            |  12 +-
 .../ZooKeeperLeaderRetrievalTest.java           |  46 +----
 .../runtime/testutils/JobManagerProcess.java    |   2 +-
 .../runtime/testutils/TaskManagerProcess.java   |   2 +-
 .../runtime/testutils/ZooKeeperTestUtils.java   |  10 +-
 .../runtime/testingUtils/TestingUtils.scala     |   8 +-
 .../apache/flink/test/util/TestBaseUtils.java   |   2 +-
 .../test/util/ForkableFlinkMiniCluster.scala    |   2 +-
 .../flink/test/recovery/ChaosMonkeyITCase.java  |   4 +-
 .../JobManagerHACheckpointRecoveryITCase.java   |   4 +-
 .../JobManagerHAJobGraphRecoveryITCase.java     |   6 +-
 ...agerHAProcessFailureBatchRecoveryITCase.java |   4 +-
 .../ZooKeeperLeaderElectionITCase.java          |   8 +-
 .../flink/yarn/YARNHighAvailabilityITCase.java  |   2 +-
 37 files changed, 389 insertions(+), 362 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/docs/setup/config.md
----------------------------------------------------------------------
diff --git a/docs/setup/config.md b/docs/setup/config.md
index e6a335b..51475cc 100644
--- a/docs/setup/config.md
+++ b/docs/setup/config.md
@@ -134,7 +134,7 @@ will be used under the directory specified by jobmanager.web.tmpdir.
 
 - `state.backend.fs.checkpointdir`: Directory for storing checkpoints in a Flink supported filesystem. Note: State backend must be accessible from the JobManager, use `file://` only for local setups.
 
-- `high-availability.zookeeper.storageDir`: Required for HA. Directory for storing JobManager metadata; this is persisted in the state backend and only a pointer to this state is stored in ZooKeeper. Exactly like the checkpoint directory it must be accessible from the JobManager and a local filesystem should only be used for local deployments. Previously this config was named as `recovery.zookeeper.storageDir`.
+- `high-availability.zookeeper.storageDir`: Required for HA. Directory for storing JobManager metadata; this is persisted in the state backend and only a pointer to this state is stored in ZooKeeper. Exactly like the checkpoint directory it must be accessible from the JobManager and a local filesystem should only be used for local deployments. Previously this key was named `recovery.zookeeper.storageDir`.
 
 - `blob.storage.directory`: Directory for storing blobs (such as user jars) on the TaskManagers.
 
@@ -283,31 +283,37 @@ of the JobManager, because the same ActorSystem is used. Its not possible to use
   For example when running Flink on YARN on an environment with a restrictive firewall, this option allows specifying a range of allowed ports.
 
 
-## High Availability Mode
+## High Availability (HA)
 
-- `high-availability`: (Default 'none') Defines the recovery mode used for the cluster execution. Currently, Flink supports the 'none' mode where only a single JobManager runs and no JobManager state is checkpointed. The high availability mode 'zookeeper' supports the execution of multiple JobManagers and JobManager state checkpointing. Among the group of JobManagers, ZooKeeper elects one of them as the leader which is responsible for the cluster execution. In case of a JobManager failure, a standby JobManager will be elected as the new leader and is given the last checkpointed JobManager state. In order to use the 'zookeeper' mode, it is mandatory to also define the `recovery.zookeeper.quorum` configuration value.  Previously this config was named 'recovery.mode' and the default config was 'standalone'.
+- `high-availability`: Defines the high availability mode used for the cluster execution. Currently, Flink supports the following modes:
+  - `none` (default): No high availability. A single JobManager runs and no JobManager state is checkpointed.
+  - `zookeeper`: Supports the execution of multiple JobManagers and JobManager state checkpointing. Among the group of JobManagers, ZooKeeper elects one of them as the leader which is responsible for the cluster execution. In case of a JobManager failure, a standby JobManager will be elected as the new leader and is given the last checkpointed JobManager state. In order to use the 'zookeeper' mode, it is mandatory to also define the `high-availability.zookeeper.quorum` configuration value.
 
-- `high-availability.zookeeper.quorum`: Defines the ZooKeeper quorum URL which is used to connet to the ZooKeeper cluster when the 'zookeeper' recovery mode is selected.  Previously this config was name as `recovery.zookeeper.quorum`.
+Previously this key was named `recovery.mode` and the default value was `standalone`.
 
-- `high-availability.zookeeper.path.root`: (Default '/flink') Defines the root dir under which the ZooKeeper recovery mode will create namespace directories.  Previously this config was name as `recovery.zookeeper.path.root`.
+### ZooKeeper-based HA Mode
 
-- `high-availability.zookeeper.path.namespace`: (Default '/default_ns' in standalone mode, or the <yarn-application-id> under Yarn) Defines the subdirectory under the root dir where the ZooKeeper recovery mode will create znodes. This allows to isolate multiple applications on the same ZooKeeper.  Previously this config was named as `recovery.zookeeper.path.namespace`.
+- `high-availability.zookeeper.quorum`: Defines the ZooKeeper quorum URL which is used to connect to the ZooKeeper cluster when the 'zookeeper' HA mode is selected. Previously this key was named `recovery.zookeeper.quorum`.
 
-- `high-availability.zookeeper.path.latch`: (Default '/leaderlatch') Defines the znode of the leader latch which is used to elect the leader.  Previously this config was named as `recovery.zookeeper.path.latch`.
+- `high-availability.zookeeper.path.root`: (Default `/flink`) Defines the root dir under which the ZooKeeper HA mode will create namespace directories. Previously this key was named `recovery.zookeeper.path.root`.
 
-- `high-availability.zookeeper.path.leader`: (Default '/leader') Defines the znode of the leader which contains the URL to the leader and the current leader session ID  Previously this config was named as `recovery.zookeeper.path.leader`.
+- `high-availability.zookeeper.path.namespace`: (Default `/default_ns` in standalone cluster mode, or the <yarn-application-id> under YARN) Defines the subdirectory under the root dir where the ZooKeeper HA mode will create znodes. This makes it possible to isolate multiple applications on the same ZooKeeper. Previously this key was named `recovery.zookeeper.path.namespace`.
 
-- `high-availability.zookeeper.storageDir`: Defines the directory in the state backend where the JobManager metadata will be stored (ZooKeeper only keeps pointers to it). Required for HA.  Previously this config was named as `recovery.zookeeper.storageDir`.
+- `high-availability.zookeeper.path.latch`: (Default `/leaderlatch`) Defines the znode of the leader latch which is used to elect the leader. Previously this key was named `recovery.zookeeper.path.latch`.
 
-- `high-availability.zookeeper.client.session-timeout`: (Default '60000') Defines the session timeout for the ZooKeeper session in ms.  Previously this config was named as `recovery.zookeeper.client.session-timeout`
+- `high-availability.zookeeper.path.leader`: (Default `/leader`) Defines the znode of the leader which contains the URL to the leader and the current leader session ID. Previously this key was named `recovery.zookeeper.path.leader`.
 
-- `high-availability.zookeeper.client.connection-timeout`: (Default '15000') Defines the connection timeout for ZooKeeper in ms.  Previously this config was named as `recovery.zookeeper.client.connection-timeout`.
+- `high-availability.zookeeper.storageDir`: Defines the directory in the state backend where the JobManager metadata will be stored (ZooKeeper only keeps pointers to it). Required for HA. Previously this key was named `recovery.zookeeper.storageDir`.
 
-- `high-availability.zookeeper.client.retry-wait`: (Default '5000') Defines the pause between consecutive retries in ms.  Previously this config was named as `recovery.zookeeper.client.retry-wait`.
+- `high-availability.zookeeper.client.session-timeout`: (Default `60000`) Defines the session timeout for the ZooKeeper session in ms. Previously this key was named `recovery.zookeeper.client.session-timeout`.
 
-- `high-availability.zookeeper.client.max-retry-attempts`: (Default '3') Defines the number of connection retries before the client gives up.  Previously this config was named as `recovery.zookeeper.client.max-retry-attempts`.
+- `high-availability.zookeeper.client.connection-timeout`: (Default `15000`) Defines the connection timeout for ZooKeeper in ms. Previously this key was named `recovery.zookeeper.client.connection-timeout`.
 
-- `high-availability.job.delay`: (Default 'akka.ask.timeout') Defines the delay before persisted jobs are recovered in case of a recovery situation.  Previously this config was named as `recovery.job.delay`.
+- `high-availability.zookeeper.client.retry-wait`: (Default `5000`) Defines the pause between consecutive retries in ms. Previously this key was named `recovery.zookeeper.client.retry-wait`.
+
+- `high-availability.zookeeper.client.max-retry-attempts`: (Default `3`) Defines the number of connection retries before the client gives up. Previously this key was named `recovery.zookeeper.client.max-retry-attempts`.
+
+- `high-availability.job.delay`: (Default `akka.ask.timeout`) Defines the delay before persisted jobs are recovered in case of a master recovery situation. Previously this key was named `recovery.job.delay`.
 
 ## Environment
 

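The HA settings documented above can also be set programmatically. A short
example using the constants this commit introduces in ConfigConstants (the
quorum and storage path values are placeholders):

    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;

    // Example: building a Configuration with the new HA keys.
    final class HaConfigExample {
        static Configuration zookeeperHaConfig() {
            Configuration config = new Configuration();
            config.setString(ConfigConstants.HA_MODE, "zookeeper");
            config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, "zk1:2181,zk2:2181,zk3:2181");
            config.setString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, "hdfs:///flink/recovery");
            return config;
        }
    }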
http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/docs/setup/jobmanager_high_availability.md
----------------------------------------------------------------------
diff --git a/docs/setup/jobmanager_high_availability.md b/docs/setup/jobmanager_high_availability.md
index dd6782d..9dcc7cc 100644
--- a/docs/setup/jobmanager_high_availability.md
+++ b/docs/setup/jobmanager_high_availability.md
@@ -44,7 +44,7 @@ As an example, consider the following setup with three JobManager instances:
 
 To enable JobManager High Availability you have to set the **high-availability mode** to *zookeeper*, configure a **ZooKeeper quorum** and set up a **masters file** with all JobManagers hosts and their web UI ports.
 
-Flink leverages **[ZooKeeper](http://zookeeper.apache.org)** for  *distributed coordination* between all running JobManager instances. ZooKeeper is a separate service from Flink, which provides highly reliable distributed coordination via leader election and light-weight consistent state storage. Check out [ZooKeeper's Getting Started Guide](http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html) for more information about ZooKeeper. Flink includes scripts to [bootstrap a simple ZooKeeper](#bootstrap-zookeeper) installation.
+Flink leverages **[ZooKeeper](http://zookeeper.apache.org)** for *distributed coordination* between all running JobManager instances. ZooKeeper is a separate service from Flink, which provides highly reliable distributed coordination via leader election and light-weight consistent state storage. Check out [ZooKeeper's Getting Started Guide](http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html) for more information about ZooKeeper. Flink includes scripts to [bootstrap a simple ZooKeeper](#bootstrap-zookeeper) installation.
 
 #### Masters File (masters)
 
@@ -67,7 +67,6 @@ In order to start an HA-cluster add the following configuration keys to `conf/fl
 - **high-availability mode** (required): The *high-availability mode* has to be set in `conf/flink-conf.yaml` to *zookeeper* in order to enable high availability mode.
 
   <pre>high-availability: zookeeper</pre>
-- **Previously this config was named 'recovery.mode' and the default config was 'standalone'.
 
 - **ZooKeeper quorum** (required): A *ZooKeeper quorum* is a replicated group of ZooKeeper servers, which provide the distributed coordination service.
 
@@ -83,14 +82,14 @@ In order to start an HA-cluster add the following configuration keys to `conf/fl
 
   <pre>high-availability.zookeeper.path.namespace: /default_ns # important: customize per cluster</pre>
 
-  **Important**: if you are running multiple Flink HA clusters, you have to manually configure separate namespaces for each cluster. By default, the Yarn cluster and the Yarn session automatically generate namespaces based on Yarn application id. A manual configuration overrides this behaviour in Yarn. Specifying a namespace with the -z CLI option, in turn, overrides manual configuration. 
+  **Important**: if you are running multiple Flink HA clusters, you have to manually configure separate namespaces for each cluster. By default, the Yarn cluster and the Yarn session automatically generate namespaces based on Yarn application id. A manual configuration overrides this behaviour in Yarn. Specifying a namespace with the -z CLI option, in turn, overrides manual configuration.
 
 - **State backend and storage directory** (required): JobManager meta data is persisted in the *state backend* and only a pointer to this state is stored in ZooKeeper. Currently, only the file system state backend is supported in HA mode.
 
     <pre>
+high-availability.zookeeper.storageDir: hdfs:///flink/recovery
 state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
-high-availability.zookeeper.storageDir: hdfs:///flink/recovery</pre>
+state.backend.fs.checkpointdir: hdfs:///flink/checkpoints</pre>
 
     The `storageDir` stores all metadata needed to recover from a JobManager failure.
 
@@ -98,16 +97,17 @@ After configuring the masters and the ZooKeeper quorum, you can use the provided
 
 #### Example: Standalone Cluster with 2 JobManagers
 
-1. **Configure recovery mode and ZooKeeper quorum** in `conf/flink-conf.yaml`:
+1. **Configure high availability mode and ZooKeeper quorum** in `conf/flink-conf.yaml`:
 
    <pre>
 high-availability: zookeeper
 high-availability.zookeeper.quorum: localhost:2181
 high-availability.zookeeper.path.root: /flink
 high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
+high-availability.zookeeper.storageDir: hdfs:///flink/recovery
 state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
-high-availability.zookeeper.storageDir: hdfs:///flink/recovery</pre>
+state.backend.fs.checkpointdir: hdfs:///flink/checkpoints</pre>
 
 2. **Configure masters** in `conf/masters`:
 
@@ -184,16 +184,16 @@ This means that the application can be restarted 10 times before YARN fails the
 
 #### Example: Highly Available YARN Session
 
-1. **Configure recovery mode and ZooKeeper quorum** in `conf/flink-conf.yaml`:
+1. **Configure HA mode and ZooKeeper quorum** in `conf/flink-conf.yaml`:
 
    <pre>
 high-availability: zookeeper
 high-availability.zookeeper.quorum: localhost:2181
+high-availability.zookeeper.storageDir: hdfs:///flink/recovery
 high-availability.zookeeper.path.root: /flink
 high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
 state.backend: filesystem
 state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
-high-availability.zookeeper.storageDir: hdfs:///flink/recovery
 yarn.application-attempts: 10</pre>
 
 3. **Configure ZooKeeper server** in `conf/zoo.cfg` (currently it's only possible to run a single ZooKeeper server per machine):

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java b/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java
index 5cc1161..514c730 100644
--- a/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java
+++ b/flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java
@@ -624,129 +624,134 @@ public final class ConfigConstants {
 	
 	public static final String FLINK_JVM_OPTIONS = "env.java.opts";
 
-	// --------------------------- Recovery -----------------------------------
+	// --------------------------- High Availability --------------------------
 
-	/** Defines recovery mode used for the cluster execution ("standalone", "zookeeper")
-	 *  Use {@link #HIGH_AVAILABILITY} instead
-	 * */
+	/** Defines the high availability mode used for the cluster execution ("NONE", "ZOOKEEPER") */
+	@PublicEvolving
+	public static final String HA_MODE = "high-availability";
+
+	/** Ports used by the job manager if not in 'none' high availability mode */
+	@PublicEvolving
+	public static final String HA_JOB_MANAGER_PORT = "high-availability.jobmanager.port";
+
+	/** The time before the JobManager recovers persisted jobs */
+	@PublicEvolving
+	public static final String HA_JOB_DELAY = "high-availability.job.delay";
+
+	/** Deprecated in favour of {@link #HA_MODE}. */
 	@Deprecated
 	public static final String RECOVERY_MODE = "recovery.mode";
 
-	/** Defines recovery mode used for the cluster execution ("NONE", "ZOOKEEPER") */
-	public static final String HIGH_AVAILABILITY = "high-availability";
-
-	/** Ports used by the job manager if not in standalone recovery mode */
+	/** Deprecated in favour of {@link #HA_JOB_MANAGER_PORT}. */
 	@Deprecated
 	public static final String RECOVERY_JOB_MANAGER_PORT = "recovery.jobmanager.port";
 
-	/** Ports used by the job manager if not in 'none' recovery mode */
-	public static final String HA_JOB_MANAGER_PORT =
-		"high-availability.jobmanager.port";
-
-	/** The time before the JobManager recovers persisted jobs */
+	/** Deprecated in favour of {@link #HA_JOB_DELAY}. */
 	@Deprecated
 	public static final String RECOVERY_JOB_DELAY = "recovery.job.delay";
 
-	/** The time before the JobManager recovers persisted jobs */
-	public static final String HA_JOB_DELAY = "high-availability.job.delay";
-
 	// --------------------------- ZooKeeper ----------------------------------
 
 	/** ZooKeeper servers. */
-	@Deprecated
-	public static final String ZOOKEEPER_QUORUM_KEY = "recovery.zookeeper.quorum";
-
-	/** ZooKeeper servers. */
-	public static final String HA_ZOOKEEPER_QUORUM_KEY =
-		"high-availability.zookeeper.quorum";
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_QUORUM_KEY = "high-availability.zookeeper.quorum";
 
 	/**
 	 * File system state backend base path for recoverable state handles. Recovery state is written
 	 * to this path and the file state handles are persisted for recovery.
 	 */
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_STORAGE_PATH = "high-availability.zookeeper.storageDir";
+
+	/** ZooKeeper root path. */
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_DIR_KEY = "high-availability.zookeeper.path.root";
+
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_NAMESPACE_KEY = "high-availability.zookeeper.path.namespace";
+
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_LATCH_PATH = "high-availability.zookeeper.path.latch";
+
+	/** ZooKeeper root path (ZNode) for job graphs. */
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_JOBGRAPHS_PATH = "high-availability.zookeeper.path.jobgraphs";
+
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_LEADER_PATH = "high-availability.zookeeper.path.leader";
+
+	/** ZooKeeper root path (ZNode) for completed checkpoints. */
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_CHECKPOINTS_PATH = "high-availability.zookeeper.path.checkpoints";
+
+	/** ZooKeeper root path (ZNode) for checkpoint counters. */
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATH = "high-availability.zookeeper.path.checkpoint-counter";
+
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_SESSION_TIMEOUT = "high-availability.zookeeper.client.session-timeout";
+
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_CONNECTION_TIMEOUT = "high-availability.zookeeper.client.connection-timeout";
+
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_RETRY_WAIT = "high-availability.zookeeper.client.retry-wait";
+
+	@PublicEvolving
+	public static final String HA_ZOOKEEPER_MAX_RETRY_ATTEMPTS = "high-availability.zookeeper.client.max-retry-attempts";
+
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_QUORUM_KEY}. */
 	@Deprecated
-	public static final String ZOOKEEPER_RECOVERY_PATH = "recovery.zookeeper.storageDir";
+	public static final String ZOOKEEPER_QUORUM_KEY = "recovery.zookeeper.quorum";
 
-	/**
-	 * File system state backend base path for recoverable state handles. Recovery state is written
-	 * to this path and the file state handles are persisted for recovery.
-	 */
-	public static final String ZOOKEEPER_HA_PATH =
-		"high-availability.zookeeper.storageDir";
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_STORAGE_PATH}. */
+	@Deprecated
+	public static final String ZOOKEEPER_RECOVERY_PATH = "recovery.zookeeper.storageDir";
 
-	/** ZooKeeper root path. */
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_DIR_KEY}. */
 	@Deprecated
 	public static final String ZOOKEEPER_DIR_KEY = "recovery.zookeeper.path.root";
 
-	/** ZooKeeper root path. */
-	public static final String HA_ZOOKEEPER_DIR_KEY =
-		"high-availability.zookeeper.path.root";
-
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_NAMESPACE_KEY}. */
 	@Deprecated
 	public static final String ZOOKEEPER_NAMESPACE_KEY = "recovery.zookeeper.path.namespace";
 
-	public static final String HA_ZOOKEEPER_NAMESPACE_KEY =
-		"high-availability.zookeeper.path.namespace";
-
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_LATCH_PATH}. */
 	@Deprecated
 	public static final String ZOOKEEPER_LATCH_PATH = "recovery.zookeeper.path.latch";
 
-	public static final String HA_ZOOKEEPER_LATCH_PATH =
-		"high-availability.zookeeper.path.latch";
-
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_LEADER_PATH}. */
 	@Deprecated
 	public static final String ZOOKEEPER_LEADER_PATH = "recovery.zookeeper.path.leader";
 
-	public static final String HA_ZOOKEEPER_LEADER_PATH = "high-availability.zookeeper.path.leader";
-
-	/** ZooKeeper root path (ZNode) for job graphs. */
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_JOBGRAPHS_PATH}. */
 	@Deprecated
 	public static final String ZOOKEEPER_JOBGRAPHS_PATH = "recovery.zookeeper.path.jobgraphs";
 
-	/** ZooKeeper root path (ZNode) for job graphs. */
-	public static final String HA_ZOOKEEPER_JOBGRAPHS_PATH =
-		"high-availability.zookeeper.path.jobgraphs";
-
-	/** ZooKeeper root path (ZNode) for completed checkpoints. */
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_CHECKPOINTS_PATH}. */
 	@Deprecated
 	public static final String ZOOKEEPER_CHECKPOINTS_PATH = "recovery.zookeeper.path.checkpoints";
 
-	/** ZooKeeper root path (ZNode) for completed checkpoints. */
-	public static final String HA_ZOOKEEPER_CHECKPOINTS_PATH =
-		"high-availability.zookeeper.path.checkpoints";
-
-	/** ZooKeeper root path (ZNode) for checkpoint counters. */
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATH}. */
 	@Deprecated
 	public static final String ZOOKEEPER_CHECKPOINT_COUNTER_PATH = "recovery.zookeeper.path.checkpoint-counter";
 
-	/** ZooKeeper root path (ZNode) for checkpoint counters. */
-	public static final String HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATH =
-		"high-availability.zookeeper.path.checkpoint-counter";
-
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_SESSION_TIMEOUT}. */
 	@Deprecated
 	public static final String ZOOKEEPER_SESSION_TIMEOUT = "recovery.zookeeper.client.session-timeout";
 
-	public static final String HA_ZOOKEEPER_SESSION_TIMEOUT =
-		"high-availability.zookeeper.client.session-timeout";
-
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_CONNECTION_TIMEOUT}. */
 	@Deprecated
 	public static final String ZOOKEEPER_CONNECTION_TIMEOUT = "recovery.zookeeper.client.connection-timeout";
 
-	public static final String HA_ZOOKEEPER_CONNECTION_TIMEOUT =
-		"high-availability.zookeeper.client.connection-timeout";
-
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_RETRY_WAIT}. */
 	@Deprecated
 	public static final String ZOOKEEPER_RETRY_WAIT = "recovery.zookeeper.client.retry-wait";
 
-	public static final String HA_ZOOKEEPER_RETRY_WAIT =
-		"high-availability.zookeeper.client.retry-wait";
-
+	/** Deprecated in favour of {@link #HA_ZOOKEEPER_MAX_RETRY_ATTEMPTS}. */
 	@Deprecated
 	public static final String ZOOKEEPER_MAX_RETRY_ATTEMPTS = "recovery.zookeeper.client.max-retry-attempts";
 
-	public static final String HA_ZOOKEEPER_MAX_RETRY_ATTEMPTS =
-		"high-availability.zookeeper.client.max-retry-attempts";
-
 	// ---------------------------- Metrics -----------------------------------
 
 	/**
@@ -1090,16 +1095,24 @@ public final class ConfigConstants {
 
 	public static final String LOCAL_START_WEBSERVER = "local.start-webserver";
 
-	// --------------------------- Recovery ---------------------------------
+	// --------------------------- High Availability ---------------------------------
+
+	@PublicEvolving
+	public static String DEFAULT_HA_MODE = "none";
+
+	/** Deprecated in favour of {@link #DEFAULT_HA_MODE} */
 	@Deprecated
 	public static String DEFAULT_RECOVERY_MODE = "standalone";
 
-	public static String DEFAULT_HIGH_AVAILABILTY = "none";
-
 	/**
 	 * Default port used by the job manager if not in standalone recovery mode. If <code>0</code>
 	 * the OS picks a random port.
 	 */
+	@PublicEvolving
+	public static final String DEFAULT_HA_JOB_MANAGER_PORT = "0";
+
+	/** Deprecated in favour of {@link #DEFAULT_HA_JOB_MANAGER_PORT} */
+	@Deprecated
 	public static final String DEFAULT_RECOVERY_JOB_MANAGER_PORT = "0";
 
 	// --------------------------- ZooKeeper ----------------------------------

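With the deprecation scheme above, a configuration written against the old keys
resolves to the same settings as one written against the new keys. For
illustration (relying on the fallback behaviour introduced by this commit):

    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.runtime.util.ZooKeeperUtils;

    final class DeprecatedKeyExample {
        public static void main(String[] args) {
            Configuration oldStyle = new Configuration();
            oldStyle.setString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, "localhost:2181");

            Configuration newStyle = new Configuration();
            newStyle.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, "localhost:2181");

            // Both resolve through the deprecated-key fallback to the same quorum.
            System.out.println(ZooKeeperUtils.getZooKeeperEnsemble(oldStyle));
            System.out.println(ZooKeeperUtils.getZooKeeperEnsemble(newStyle));
        }
    }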
http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-dist/src/main/flink-bin/bin/config.sh
----------------------------------------------------------------------
diff --git a/flink-dist/src/main/flink-bin/bin/config.sh b/flink-dist/src/main/flink-bin/bin/config.sh
index 687a39c..f7e7d58 100755
--- a/flink-dist/src/main/flink-bin/bin/config.sh
+++ b/flink-dist/src/main/flink-bin/bin/config.sh
@@ -104,8 +104,6 @@ KEY_ENV_JAVA_OPTS="env.java.opts"
 KEY_ENV_JAVA_OPTS_JM="env.java.opts.jobmanager"
 KEY_ENV_JAVA_OPTS_TM="env.java.opts.taskmanager"
 KEY_ENV_SSH_OPTS="env.ssh.opts"
-#deprecated
-KEY_RECOVERY_MODE="recovery.mode"
 KEY_HIGH_AVAILABILITY="high-availability"
 KEY_ZK_HEAP_MB="zookeeper.heap.mb"
 
@@ -259,25 +257,22 @@ if [ -z "${ZK_HEAP}" ]; then
     ZK_HEAP=$(readFromConfig ${KEY_ZK_HEAP_MB} 0 "${YAML_CONF}")
 fi
 
-# for backward compatability
-if [ -z "${OLD_RECOVERY_MODE}" ]; then
-    OLD_RECOVERY_MODE=$(readFromConfig ${KEY_RECOVERY_MODE} "standalone" "${YAML_CONF}")
-fi
-
-if [ -z "${RECOVERY_MODE}" ]; then
-     # Read the new config
-     RECOVERY_MODE=$(readFromConfig ${KEY_HIGH_AVAILABILITY} "" "${YAML_CONF}")
-     if [ -z "${RECOVERY_MODE}" ]; then
-        #no new config found. So old config should be used
-        if [ -z "${OLD_RECOVERY_MODE}" ]; then
-            # If old config is also not found, use the 'none' as the default config
-            RECOVERY_MODE="none"
-        elif [ ${OLD_RECOVERY_MODE} = "standalone" ]; then
-            # if oldconfig is 'standalone', rename to 'none'
-            RECOVERY_MODE="none"
+# High availability
+if [ -z "${HIGH_AVAILABILITY}" ]; then
+     HIGH_AVAILABILITY=$(readFromConfig ${KEY_HIGH_AVAILABILITY} "" "${YAML_CONF}")
+     if [ -z "${HIGH_AVAILABILITY}" ]; then
+        # Try deprecated value
+        DEPRECATED_HA=$(readFromConfig "recovery.mode" "" "${YAML_CONF}")
+        if [ -z "${DEPRECATED_HA}" ]; then
+            HIGH_AVAILABILITY="none"
+        elif [ ${DEPRECATED_HA} == "standalone" ]; then
+            # Standalone is now 'none'
+            HIGH_AVAILABILITY="none"
         else
-            RECOVERY_MODE=${OLD_RECOVERY_MODE}
+            HIGH_AVAILABILITY=${DEPRECATED_HA}
         fi
      fi
 fi
 

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-dist/src/main/flink-bin/bin/start-cluster.sh
----------------------------------------------------------------------
diff --git a/flink-dist/src/main/flink-bin/bin/start-cluster.sh b/flink-dist/src/main/flink-bin/bin/start-cluster.sh
index 77bff1e..7611189 100755
--- a/flink-dist/src/main/flink-bin/bin/start-cluster.sh
+++ b/flink-dist/src/main/flink-bin/bin/start-cluster.sh
@@ -29,7 +29,7 @@ bin=`cd "$bin"; pwd`
 
 # Start the JobManager instance(s)
 shopt -s nocasematch
-if [[ $RECOVERY_MODE == "zookeeper" ]]; then
+if [[ $HIGH_AVAILABILITY == "zookeeper" ]]; then
     # HA Mode
     readMasters
 

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-dist/src/main/flink-bin/bin/stop-cluster.sh
----------------------------------------------------------------------
diff --git a/flink-dist/src/main/flink-bin/bin/stop-cluster.sh b/flink-dist/src/main/flink-bin/bin/stop-cluster.sh
index c4d9086..bc86291 100755
--- a/flink-dist/src/main/flink-bin/bin/stop-cluster.sh
+++ b/flink-dist/src/main/flink-bin/bin/stop-cluster.sh
@@ -37,7 +37,7 @@ fi
 
 # Stop JobManager instance(s)
 shopt -s nocasematch
-if [[ $RECOVERY_MODE == "zookeeper" ]]; then
+if [[ $HIGH_AVAILABILITY == "zookeeper" ]]; then
     # HA Mode
     readMasters
 

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-dist/src/main/resources/flink-conf.yaml
----------------------------------------------------------------------
diff --git a/flink-dist/src/main/resources/flink-conf.yaml b/flink-dist/src/main/resources/flink-conf.yaml
index a2586ce..27fd84a 100644
--- a/flink-dist/src/main/resources/flink-conf.yaml
+++ b/flink-dist/src/main/resources/flink-conf.yaml
@@ -131,18 +131,14 @@ jobmanager.web.port: 8081
 
 
 #==============================================================================
-# Master High Availability (required configuration)
+# High Availability
 #==============================================================================
 
-# The list of ZooKepper quorum peers that coordinate the high-availability
+# The list of ZooKeeper quorum peers that coordinate the high-availability
 # setup. This must be a list of the form:
-# "host1:clientPort,host2[:clientPort],..." (default clientPort: 2181)
 #
-# high-availability: zookeeper
-#
-# recovery.zookeeper.quorum: localhost:2181,...
+# "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
 #
-# Note: You need to set the state backend to 'filesystem' and the checkpoint
-# directory (see above) before configuring the storageDir.
-#
-# recovery.zookeeper.storageDir: hdfs:///recovery
+# high-availability: zookeeper
+# high-availability.zookeeper.quorum: localhost:2181
+# high-availability.zookeeper.storageDir: hdfs:///flink/ha/

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java b/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java
index d9edafe..54c5e76 100644
--- a/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java
+++ b/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitorITCase.java
@@ -146,7 +146,7 @@ public class WebRuntimeMonitorITCase extends TestLogger {
 		List<LeaderRetrievalService> leaderRetrievalServices = new ArrayList<>();
 
 		try (TestingServer zooKeeper = new TestingServer()) {
-			final Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(
+			final Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(
 				zooKeeper.getConnectString(),
 				temporaryFolder.getRoot().getPath());
 
@@ -296,7 +296,7 @@ public class WebRuntimeMonitorITCase extends TestLogger {
 			final Configuration config = new Configuration();
 			config.setInteger(ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, 0);
 			config.setString(ConfigConstants.JOB_MANAGER_WEB_LOG_PATH_KEY, logFile.toString());
-			config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
+			config.setString(ConfigConstants.HA_MODE, "ZOOKEEPER");
 			config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, zooKeeper.getConnectString());
 
 			actorSystem = AkkaUtils.createDefaultActorSystem();

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java
index d1b78a5..ff54b67 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java
@@ -66,7 +66,7 @@ public class BlobServer extends Thread implements BlobService {
 	/** Is the root directory for file storage */
 	private final File storageDir;
 
-	/** Blob store for recovery */
+	/** Blob store for HA */
 	private final BlobStore blobStore;
 
 	/** Set of currently running threads */
@@ -77,7 +77,7 @@ public class BlobServer extends Thread implements BlobService {
 
 	/**
 	 * Shutdown hook thread to ensure deletion of the storage directory (or <code>null</code> if
-	 * the configured recovery mode does not equal{@link HighAvailabilityMode#NONE})
+	 * the configured high availability mode does not equal {@link HighAvailabilityMode#NONE})
 	 */
 	private final Thread shutdownHook;
 
@@ -97,15 +97,12 @@ public class BlobServer extends Thread implements BlobService {
 		this.storageDir = BlobUtils.initStorageDirectory(storageDirectory);
 		LOG.info("Created BLOB server storage directory {}", storageDir);
 
-		// No recovery.
 		if (highAvailabilityMode == HighAvailabilityMode.NONE) {
 			this.blobStore = new VoidBlobStore();
-		}
-		// Recovery.
-		else if (highAvailabilityMode == HighAvailabilityMode.ZOOKEEPER) {
+		} else if (highAvailabilityMode == HighAvailabilityMode.ZOOKEEPER) {
 			this.blobStore = new FileSystemBlobStore(config);
 		} else {
-			throw new IllegalConfigurationException("Unexpected recovery mode '" + highAvailabilityMode + ".");
+			throw new IllegalConfigurationException("Unexpected high availability mode '" + highAvailabilityMode + "'.");
 		}
 
 		// configure the maximum number of concurrent connections

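The constructor logic above is a strategy selection on the HA mode. The same
decision written as a factory method, as a sketch only (assuming access to the
package-private blob store classes shown in this commit):

    // Selects the blob store implementation by high availability mode,
    // equivalent to the constructor branch above.
    static BlobStore createBlobStore(HighAvailabilityMode mode, Configuration config)
            throws IOException {
        switch (mode) {
            case NONE:
                return new VoidBlobStore();             // local-only, nothing persisted
            case ZOOKEEPER:
                return new FileSystemBlobStore(config); // persists blobs for HA recovery
            default:
                throw new IllegalConfigurationException(
                        "Unexpected high availability mode '" + mode + "'.");
        }
    }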
http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobUtils.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobUtils.java b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobUtils.java
index 6ba1944..e74fa6f 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobUtils.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobUtils.java
@@ -336,7 +336,8 @@ public class BlobUtils {
 	 *
 	 * <p>The returned path can be used with the state backend for recovery purposes.
 	 *
-	 * <p>This follows the same scheme as {@link #getStorageLocation(File, BlobKey)}.
+	 * <p>This follows the same scheme as {@link #getStorageLocation(File, BlobKey)}
+	 * and is used for HA.
 	 */
 	static String getRecoveryPath(String basePath, BlobKey blobKey) {
 		// format: $base/cache/blob_$key

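To make the "$base/cache/blob_$key" scheme concrete, the helper effectively
concatenates as follows (a sketch with a String key; the real method takes a
BlobKey):

    // Sketch of the documented scheme: $base/cache/blob_$key
    static String recoveryPathSketch(String basePath, String blobKey) {
        return basePath + "/cache/blob_" + blobKey;
    }
    // recoveryPathSketch("hdfs:///flink/recovery/blob", "abc123")
    //   -> "hdfs:///flink/recovery/blob/cache/blob_abc123"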
http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
index f535c35..ee189d4 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
@@ -25,6 +25,7 @@ import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.IllegalConfigurationException;
 import org.apache.flink.core.fs.FileSystem;
 import org.apache.flink.core.fs.Path;
+import org.apache.flink.util.ConfigurationUtil;
 import org.apache.flink.util.IOUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -41,7 +42,7 @@ import static org.apache.flink.util.Preconditions.checkNotNull;
 /**
  * Blob store backed by {@link FileSystem}.
  *
- * <p>This is used in addition to the local blob storage
+ * <p>This is used in addition to the local blob storage for high availability.
  */
 class FileSystemBlobStore implements BlobStore {
 
@@ -51,18 +52,19 @@ class FileSystemBlobStore implements BlobStore {
 	private final String basePath;
 
 	FileSystemBlobStore(Configuration config) throws IOException {
-		String recoveryPath = config.getString(ConfigConstants.ZOOKEEPER_HA_PATH, null);
-		if(recoveryPath == null) {
-			recoveryPath = config.getString(ConfigConstants.ZOOKEEPER_HA_PATH, null);
-		}
+		String storagePath = ConfigurationUtil.getStringWithDeprecatedKeys(
+				config,
+				ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH,
+				null,
+				ConfigConstants.ZOOKEEPER_RECOVERY_PATH);
 
-		if (recoveryPath == null) {
+		if (storagePath == null) {
 			throw new IllegalConfigurationException(String.format("Missing configuration for " +
-					"file system state backend recovery path. Please specify via " +
-					"'%s' key.", ConfigConstants.ZOOKEEPER_HA_PATH));
+					"ZooKeeper file system path. Please specify via " +
+					"'%s' key.", ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH));
 		}
 
-		this.basePath = recoveryPath + "/blob";
+		this.basePath = storagePath + "/blob";
 
 		FileSystem.get(new Path(basePath).toUri()).mkdirs(new Path(basePath));
 		LOG.info("Created blob directory {}.", basePath);

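ConfigurationUtil.getStringWithDeprecatedKeys centralizes the new-key-first,
deprecated-key-fallback lookup that was previously hand-rolled at each call
site. A minimal sketch of the assumed semantics (not the actual implementation):

    import org.apache.flink.configuration.Configuration;

    final class ConfigurationUtilSketch {
        // Return the value under 'key' if set, otherwise the first deprecated
        // key that is set, otherwise the default value.
        static String getStringWithDeprecatedKeys(
                Configuration config, String key, String defaultValue, String... deprecatedKeys) {
            String value = config.getString(key, null);
            if (value != null) {
                return value;
            }
            for (String deprecatedKey : deprecatedKeys) {
                value = config.getString(deprecatedKey, null);
                if (value != null) {
                    // The real helper presumably logs a deprecation warning here.
                    return value;
                }
            }
            return defaultValue;
        }
    }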
http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java
index c2f67f1..84cbe19 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounter.java
@@ -25,8 +25,7 @@ import java.util.concurrent.atomic.AtomicLong;
 /**
  * {@link CheckpointIDCounter} instances for JobManagers running in {@link HighAvailabilityMode#NONE}.
  *
- * <p>Simple wrapper of an {@link AtomicLong}. This is sufficient, because job managers are not
- * recoverable in this recovery mode.
+ * <p>Simple wrapper around an {@link AtomicLong}.
  */
 public class StandaloneCheckpointIDCounter implements CheckpointIDCounter {
 

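Since non-HA JobManagers are never recovered, checkpoint IDs only need
process-local atomicity. A sketch of the wrapper described above (the actual
interface has more methods):

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch: checkpoint IDs from a plain AtomicLong, no persistence needed.
    final class StandaloneCounterSketch {
        private final AtomicLong checkpointId = new AtomicLong(1);

        long getAndIncrement() {
            return checkpointId.getAndIncrement();
        }
    }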
http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java
index 8e2efa8..087ad3b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java
@@ -20,18 +20,18 @@ package org.apache.flink.runtime.jobmanager;
 
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.ConfigurationUtil;
 
 /**
- * Recovery mode for Flink's cluster execution. Currently supported modes are:
+ * High availability mode for Flink's cluster execution. Currently supported modes are:
  *
- * - Standalone: No recovery from JobManager failures
+ * - NONE: No high availability.
  * - ZooKeeper: JobManager high availability via ZooKeeper
  * ZooKeeper is used to select a leader among a group of JobManagers. This JobManager
  * is responsible for the job execution. Upon failure of the leader, a new leader is elected
  * which will take over the responsibilities of the old leader.
  */
 public enum HighAvailabilityMode {
-	// STANDALONE mode renamed to NONE
 	NONE,
 	ZOOKEEPER;
 
@@ -39,30 +39,24 @@ public enum HighAvailabilityMode {
 	 * Return the configured {@link HighAvailabilityMode}.
 	 *
 	 * @param config The config to parse
-	 * @return Configured recovery mode or {@link ConfigConstants#DEFAULT_HIGH_AVAILABILTY} if not
+	 * @return Configured high availability mode or {@link ConfigConstants#DEFAULT_HA_MODE} if not
 	 * configured.
 	 */
 	public static HighAvailabilityMode fromConfig(Configuration config) {
-		// Not passing the default value here so that we could determine
-		// if there is an older config set
-		String recoveryMode = config.getString(
-			ConfigConstants.HIGH_AVAILABILITY, "");
-		if (recoveryMode.isEmpty()) {
-			// New config is not set.
-			// check the older one
-			// check for older 'recover.mode' config
-			recoveryMode = config.getString(
-				ConfigConstants.RECOVERY_MODE,
-				ConfigConstants.DEFAULT_RECOVERY_MODE);
-			if (recoveryMode.equalsIgnoreCase(ConfigConstants.DEFAULT_RECOVERY_MODE)) {
-				// There is no HA configured.
-				return HighAvailabilityMode.valueOf(ConfigConstants.DEFAULT_HIGH_AVAILABILTY.toUpperCase());
-			}
-		} else if (recoveryMode.equalsIgnoreCase(ConfigConstants.DEFAULT_HIGH_AVAILABILTY)) {
-			// The new config is found but with default value. So use this
-			return HighAvailabilityMode.valueOf(ConfigConstants.DEFAULT_HIGH_AVAILABILTY.toUpperCase());
+		String haMode = ConfigurationUtil.getStringWithDeprecatedKeys(
+				config,
+				ConfigConstants.HA_MODE,
+				null,
+				ConfigConstants.RECOVERY_MODE);
+
+		if (haMode == null) {
+			return HighAvailabilityMode.NONE;
+		} else if (haMode.equalsIgnoreCase(ConfigConstants.DEFAULT_RECOVERY_MODE)) {
+			// Map old default to new default
+			return HighAvailabilityMode.NONE;
+		} else {
+			return HighAvailabilityMode.valueOf(haMode.toUpperCase());
 		}
-		return HighAvailabilityMode.valueOf(recoveryMode.toUpperCase());
 	}
 
 	/**

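Given the parsing rules above, the expected mappings look as follows (in the
spirit of the HighAvailabilityModeTest added by this commit):

    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;

    final class HaModeExample {
        public static void main(String[] args) {
            Configuration config = new Configuration();
            // Nothing set: defaults to NONE.
            assert HighAvailabilityMode.fromConfig(config) == HighAvailabilityMode.NONE;

            // Deprecated key with its old default value also maps to NONE.
            config.setString(ConfigConstants.RECOVERY_MODE, "standalone");
            assert HighAvailabilityMode.fromConfig(config) == HighAvailabilityMode.NONE;

            // The new key takes precedence over the deprecated one.
            config.setString(ConfigConstants.HA_MODE, "zookeeper");
            assert HighAvailabilityMode.fromConfig(config) == HighAvailabilityMode.ZOOKEEPER;
        }
    }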
http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java b/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java
index 7a656cf..b6d9306 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/util/LeaderRetrievalUtils.java
@@ -282,7 +282,7 @@ public class LeaderRetrievalUtils {
 	}
 
 	/**
-	 * Gets the recovery mode as configured, based on the {@link ConfigConstants#HIGH_AVAILABILITY}
+	 * Gets the high availability mode as configured, based on the {@link ConfigConstants#HA_MODE}
 	 * config key.
 	 * 
 	 * @param config The configuration to read the high availability mode from.

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java b/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java
index 5fd6f8c..bb48c81 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java
@@ -25,8 +25,8 @@ import org.apache.flink.api.common.JobID;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.IllegalConfigurationException;
-import org.apache.flink.runtime.checkpoint.CompletedCheckpointStore;
 import org.apache.flink.runtime.checkpoint.CompletedCheckpoint;
+import org.apache.flink.runtime.checkpoint.CompletedCheckpointStore;
 import org.apache.flink.runtime.checkpoint.ZooKeeperCheckpointIDCounter;
 import org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore;
 import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
@@ -36,6 +36,7 @@ import org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService;
 import org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService;
 import org.apache.flink.runtime.zookeeper.StateStorageHelper;
 import org.apache.flink.runtime.zookeeper.filesystem.FileSystemStateStorageHelper;
+import org.apache.flink.util.ConfigurationUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -56,48 +57,57 @@ public class ZooKeeperUtils {
 	 * @return {@link CuratorFramework} instance
 	 */
 	public static CuratorFramework startCuratorFramework(Configuration configuration) {
-		String zkQuorum = configuration.getString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, "");
-		if (zkQuorum.isEmpty()) {
-			zkQuorum = configuration.getString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, "");
-		}
+		String zkQuorum = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY,
+				null,
+				ConfigConstants.ZOOKEEPER_QUORUM_KEY);
+
 		if (zkQuorum == null || zkQuorum.equals("")) {
 			throw new RuntimeException("No valid ZooKeeper quorum has been specified. " +
 					"You can specify the quorum via the configuration key '" +
 					ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY + "'.");
 		}
 
-		int sessionTimeout = getConfiguredIntValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_SESSION_TIMEOUT,
-			ConfigConstants.ZOOKEEPER_SESSION_TIMEOUT,
-			ConfigConstants.DEFAULT_ZOOKEEPER_SESSION_TIMEOUT);
-
-		int connectionTimeout = getConfiguredIntValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_CONNECTION_TIMEOUT,
-			ConfigConstants.ZOOKEEPER_CONNECTION_TIMEOUT,
-			ConfigConstants.DEFAULT_ZOOKEEPER_SESSION_TIMEOUT);
-
-		int retryWait = getConfiguredIntValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_RETRY_WAIT,
-			ConfigConstants.ZOOKEEPER_RETRY_WAIT,
-			ConfigConstants.DEFAULT_ZOOKEEPER_RETRY_WAIT);
-
-		int maxRetryAttempts = getConfiguredIntValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_MAX_RETRY_ATTEMPTS,
-			ConfigConstants.ZOOKEEPER_MAX_RETRY_ATTEMPTS,
-			ConfigConstants.DEFAULT_ZOOKEEPER_MAX_RETRY_ATTEMPTS);
-
-		String root = getConfiguredStringValue(configuration, ConfigConstants.HA_ZOOKEEPER_DIR_KEY,
-			ConfigConstants.ZOOKEEPER_DIR_KEY,
-			ConfigConstants.DEFAULT_ZOOKEEPER_DIR_KEY);
-
-		String namespace = getConfiguredStringValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY,
-			ConfigConstants.ZOOKEEPER_NAMESPACE_KEY,
-			ConfigConstants.DEFAULT_ZOOKEEPER_NAMESPACE_KEY);
+		int sessionTimeout = ConfigurationUtil.getIntegerWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_SESSION_TIMEOUT,
+				ConfigConstants.DEFAULT_ZOOKEEPER_SESSION_TIMEOUT,
+				ConfigConstants.ZOOKEEPER_SESSION_TIMEOUT);
+
+		int connectionTimeout = ConfigurationUtil.getIntegerWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_CONNECTION_TIMEOUT,
+				ConfigConstants.DEFAULT_ZOOKEEPER_CONNECTION_TIMEOUT,
+				ConfigConstants.ZOOKEEPER_CONNECTION_TIMEOUT);
+
+		int retryWait = ConfigurationUtil.getIntegerWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_RETRY_WAIT,
+				ConfigConstants.DEFAULT_ZOOKEEPER_RETRY_WAIT,
+				ConfigConstants.ZOOKEEPER_RETRY_WAIT);
+
+		int maxRetryAttempts = ConfigurationUtil.getIntegerWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_MAX_RETRY_ATTEMPTS,
+				ConfigConstants.DEFAULT_ZOOKEEPER_MAX_RETRY_ATTEMPTS,
+				ConfigConstants.ZOOKEEPER_MAX_RETRY_ATTEMPTS);
+
+		String root = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_DIR_KEY,
+				ConfigConstants.DEFAULT_ZOOKEEPER_DIR_KEY,
+				ConfigConstants.ZOOKEEPER_DIR_KEY);
+
+		String namespace = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_NAMESPACE_KEY,
+				ConfigConstants.DEFAULT_ZOOKEEPER_NAMESPACE_KEY,
+				ConfigConstants.ZOOKEEPER_NAMESPACE_KEY);
 
 		String rootWithNamespace = generateZookeeperPath(root, namespace);
 
-		LOG.info("Using '{}' as zookeeper namespace.", rootWithNamespace);
+		LOG.info("Using '{}' as ZooKeeper namespace.", rootWithNamespace);
 
 		CuratorFramework cf = CuratorFrameworkFactory.builder()
 				.connectString(zkQuorum)
@@ -114,31 +124,6 @@ public class ZooKeeperUtils {
 		return cf;
 	}
 
-	private static int getConfiguredIntValue(Configuration configuration, String newConfigName, String oldConfigName, int defaultValue) {
-		int val = configuration.getInteger(newConfigName, -1);
-		if (val == -1) {
-			val = configuration.getInteger(
-				oldConfigName, -1);
-		}
-		// if still the val is not set use the default value
-		if (val == -1) {
-			return defaultValue;
-		}
-		return val;
-	}
-
-	private static String getConfiguredStringValue(Configuration configuration, String newConfigName, String oldConfigName, String defaultValue) {
-		String val = configuration.getString(newConfigName, "");
-		if (val.isEmpty()) {
-			val = configuration.getString(
-				oldConfigName, "");
-		}
-		// still no value found - use the default value
-		if (val.isEmpty()) {
-			return defaultValue;
-		}
-		return val;
-	}
 	/**
 	 * Returns whether {@link HighAvailabilityMode#ZOOKEEPER} is configured.
 	 */
@@ -153,10 +138,11 @@ public class ZooKeeperUtils {
 	public static String getZooKeeperEnsemble(Configuration flinkConf)
 			throws IllegalConfigurationException {
 
-		String zkQuorum = flinkConf.getString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, "");
-		if (zkQuorum.isEmpty()) {
-			zkQuorum = flinkConf.getString(ConfigConstants.ZOOKEEPER_QUORUM_KEY, "");
-		}
+		String zkQuorum = ConfigurationUtil.getStringWithDeprecatedKeys(
+				flinkConf,
+				ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY,
+				"",
+				ConfigConstants.ZOOKEEPER_QUORUM_KEY);
 
 		if (zkQuorum == null || zkQuorum.equals("")) {
 			throw new IllegalConfigurationException("No ZooKeeper quorum specified in config.");
@@ -178,9 +164,11 @@ public class ZooKeeperUtils {
 	public static ZooKeeperLeaderRetrievalService createLeaderRetrievalService(
 			Configuration configuration) throws Exception {
 		CuratorFramework client = startCuratorFramework(configuration);
-		String leaderPath = getConfiguredStringValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_LEADER_PATH, ConfigConstants.ZOOKEEPER_LEADER_PATH,
-			ConfigConstants.DEFAULT_ZOOKEEPER_LEADER_PATH);
+		String leaderPath = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_LEADER_PATH,
+				ConfigConstants.DEFAULT_ZOOKEEPER_LEADER_PATH,
+				ConfigConstants.ZOOKEEPER_LEADER_PATH);
 
 		return new ZooKeeperLeaderRetrievalService(client, leaderPath);
 	}
@@ -213,12 +201,16 @@ public class ZooKeeperUtils {
 			CuratorFramework client,
 			Configuration configuration) throws Exception {
 
-		String latchPath = getConfiguredStringValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_LATCH_PATH, ConfigConstants.ZOOKEEPER_LATCH_PATH,
-			ConfigConstants.DEFAULT_ZOOKEEPER_LATCH_PATH);
-		String leaderPath = getConfiguredStringValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_LEADER_PATH, ConfigConstants.ZOOKEEPER_LEADER_PATH,
-			ConfigConstants.DEFAULT_ZOOKEEPER_LEADER_PATH);
+		String latchPath = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_LATCH_PATH,
+				ConfigConstants.DEFAULT_ZOOKEEPER_LATCH_PATH,
+				ConfigConstants.ZOOKEEPER_LATCH_PATH);
+		String leaderPath = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_LEADER_PATH,
+				ConfigConstants.DEFAULT_ZOOKEEPER_LEADER_PATH,
+				ConfigConstants.ZOOKEEPER_LEADER_PATH);
 
 		return new ZooKeeperLeaderElectionService(client, latchPath, leaderPath);
 	}
@@ -239,9 +231,11 @@ public class ZooKeeperUtils {
 		StateStorageHelper<SubmittedJobGraph> stateStorage = createFileSystemStateStorage(configuration, "submittedJobGraph");
 
 		// ZooKeeper submitted jobs root dir
-		String zooKeeperSubmittedJobsPath = getConfiguredStringValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_JOBGRAPHS_PATH, ConfigConstants.ZOOKEEPER_JOBGRAPHS_PATH,
-			ConfigConstants.DEFAULT_ZOOKEEPER_JOBGRAPHS_PATH);
+		String zooKeeperSubmittedJobsPath = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_JOBGRAPHS_PATH,
+				ConfigConstants.DEFAULT_ZOOKEEPER_JOBGRAPHS_PATH,
+				ConfigConstants.ZOOKEEPER_JOBGRAPHS_PATH);
 
 		return new ZooKeeperSubmittedJobGraphStore(
 				client, zooKeeperSubmittedJobsPath, stateStorage);
@@ -266,10 +260,11 @@ public class ZooKeeperUtils {
 
 		checkNotNull(configuration, "Configuration");
 
-		String checkpointsPath = getConfiguredStringValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_CHECKPOINTS_PATH,
-			ConfigConstants.ZOOKEEPER_CHECKPOINTS_PATH,
-			ConfigConstants.DEFAULT_ZOOKEEPER_CHECKPOINTS_PATH);
+		String checkpointsPath = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_CHECKPOINTS_PATH,
+				ConfigConstants.DEFAULT_ZOOKEEPER_CHECKPOINTS_PATH,
+				ConfigConstants.ZOOKEEPER_CHECKPOINTS_PATH);
 
 		StateStorageHelper<CompletedCheckpoint> stateStorage = createFileSystemStateStorage(
 			configuration,
@@ -298,10 +293,11 @@ public class ZooKeeperUtils {
 			Configuration configuration,
 			JobID jobId) throws Exception {
 
-		String checkpointIdCounterPath = getConfiguredStringValue(configuration,
-			ConfigConstants.HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATH,
-			ConfigConstants.ZOOKEEPER_CHECKPOINT_COUNTER_PATH,
-			ConfigConstants.DEFAULT_ZOOKEEPER_CHECKPOINT_COUNTER_PATH);
+		String checkpointIdCounterPath = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATH,
+				ConfigConstants.DEFAULT_ZOOKEEPER_CHECKPOINT_COUNTER_PATH,
+				ConfigConstants.ZOOKEEPER_CHECKPOINT_COUNTER_PATH);
 
 		checkpointIdCounterPath += ZooKeeperSubmittedJobGraphStore.getPathForJob(jobId);
 
@@ -321,16 +317,15 @@ public class ZooKeeperUtils {
 			Configuration configuration,
 			String prefix) throws IOException {
 
-		String rootPath = configuration.getString(
-			ConfigConstants.ZOOKEEPER_HA_PATH, "");
-		if (rootPath.isEmpty()) {
-			rootPath = configuration.getString(
-				ConfigConstants.ZOOKEEPER_RECOVERY_PATH, "");
-		}
+		String rootPath = ConfigurationUtil.getStringWithDeprecatedKeys(
+				configuration,
+				ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH,
+				"",
+				ConfigConstants.ZOOKEEPER_RECOVERY_PATH);
 
 		if (rootPath.equals("")) {
 			throw new IllegalConfigurationException("Missing recovery path. Specify via " +
-				"configuration key '" + ConfigConstants.ZOOKEEPER_HA_PATH + "'.");
+				"configuration key '" + ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH + "'.");
 		} else {
 			return new FileSystemStateStorageHelper<T>(rootPath, prefix);
 		}
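
Note on the new helper: judging from the call sites above, ConfigurationUtil.getStringWithDeprecatedKeys
takes the configuration, the new key, a default value, and one or more deprecated keys, and resolves them
in that order. A rough sketch of that semantics (an illustration inferred from this diff only; the actual
ConfigurationUtil implementation may differ):

    public static String getStringWithDeprecatedKeys(
            Configuration config, String key, String defaultValue, String... deprecatedKeys) {
        // prefer the new key
        String value = config.getString(key, null);
        if (value != null) {
            return value;
        }
        // fall back to the deprecated keys, in order
        for (String deprecatedKey : deprecatedKeys) {
            value = config.getString(deprecatedKey, null);
            if (value != null) {
                return value;
            }
        }
        return defaultValue;
    }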

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
index 5962afc..d172a2b 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
@@ -82,7 +82,7 @@ import org.apache.flink.runtime.taskmanager.TaskManager
 import org.apache.flink.runtime.util._
 import org.apache.flink.runtime.webmonitor.{WebMonitor, WebMonitorUtils}
 import org.apache.flink.runtime.{FlinkActor, LeaderSessionMessageFilter, LogMessages}
-import org.apache.flink.util.{InstantiationUtil, NetUtils}
+import org.apache.flink.util.{ConfigurationUtil, InstantiationUtil, NetUtils}
 
 import org.jboss.netty.channel.ChannelException
 
@@ -155,7 +155,7 @@ class JobManager(
   /** Either running or not yet archived jobs (session hasn't been ended). */
   protected val currentJobs = scala.collection.mutable.HashMap[JobID, (ExecutionGraph, JobInfo)]()
 
-  protected val recoveryMode = HighAvailabilityMode.fromConfig(flinkConfiguration)
+  protected val haMode = HighAvailabilityMode.fromConfig(flinkConfiguration)
 
   var leaderSessionID: Option[UUID] = None
 
@@ -317,7 +317,7 @@ class JobManager(
 
         // TODO (critical next step) This needs to be more flexible and robust (e.g. wait for task
         // managers etc.)
-        if (recoveryMode != HighAvailabilityMode.NONE) {
+        if (haMode != HighAvailabilityMode.NONE) {
           log.info(s"Delaying recovery of all jobs by $jobRecoveryTimeout.")
 
           context.system.scheduler.scheduleOnce(
@@ -2462,7 +2462,7 @@ object JobManager {
         // The port range of allowed job manager ports or 0 for random
         configuration.getString(
           ConfigConstants.RECOVERY_JOB_MANAGER_PORT,
-          ConfigConstants.DEFAULT_RECOVERY_JOB_MANAGER_PORT)
+          ConfigConstants.DEFAULT_HA_JOB_MANAGER_PORT)
       }
       else {
         LOG.info("Starting JobManager without high-availability")
@@ -2594,10 +2594,11 @@ object JobManager {
 
     val savepointStore = SavepointStoreFactory.createFromConfig(configuration)
 
-    var jobRecoveryTimeoutStr = configuration.getString(ConfigConstants.HA_JOB_DELAY, "");
-    if (jobRecoveryTimeoutStr.isEmpty) {
-      jobRecoveryTimeoutStr = configuration.getString(ConfigConstants.RECOVERY_JOB_DELAY, "");
-    }
+    val jobRecoveryTimeoutStr = ConfigurationUtil.getStringWithDeprecatedKeys(
+      configuration,
+      ConfigConstants.HA_JOB_DELAY,
+      null,
+      ConfigConstants.RECOVERY_JOB_DELAY)
 
     val jobRecoveryTimeout = if (jobRecoveryTimeoutStr == null || jobRecoveryTimeoutStr.isEmpty) {
       timeout

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
index 0778aae..a547d25 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
@@ -84,7 +84,7 @@ abstract class FlinkMiniCluster(
 
   implicit val timeout = AkkaUtils.getTimeout(configuration)
 
-  val recoveryMode = HighAvailabilityMode.fromConfig(configuration)
+  val haMode = HighAvailabilityMode.fromConfig(configuration)
 
   val numJobManagers = getNumberOfJobManagers
 
@@ -122,7 +122,7 @@ abstract class FlinkMiniCluster(
   // --------------------------------------------------------------------------
 
   def getNumberOfJobManagers: Int = {
-    if(recoveryMode == HighAvailabilityMode.NONE) {
+    if(haMode == HighAvailabilityMode.NONE) {
       1
     } else {
       configuration.getInteger(
@@ -133,7 +133,7 @@ abstract class FlinkMiniCluster(
   }
 
   def getNumberOfResourceManagers: Int = {
-    if(recoveryMode == HighAvailabilityMode.NONE) {
+    if(haMode == HighAvailabilityMode.NONE) {
       1
     } else {
       configuration.getInteger(
@@ -372,7 +372,7 @@ abstract class FlinkMiniCluster(
     webMonitor foreach {
       _.stop()
     }
-    
+
     val tmFutures = taskManagerActors map {
       _.map(gracefulStop(_, timeout))
     } getOrElse(Seq())
@@ -417,7 +417,7 @@ abstract class FlinkMiniCluster(
       _ foreach(_.awaitTermination())
     }
   }
-  
+
   def running = isRunning
 
   // --------------------------------------------------------------------------
@@ -466,7 +466,7 @@ abstract class FlinkMiniCluster(
   : JobExecutionResult = {
     submitJobAndWait(jobGraph, printUpdates, timeout, createLeaderRetrievalService())
   }
-  
+
   @throws(classOf[JobExecutionException])
   def submitJobAndWait(
       jobGraph: JobGraph,
@@ -524,7 +524,7 @@ abstract class FlinkMiniCluster(
   protected def createLeaderRetrievalService(): LeaderRetrievalService = {
     (jobManagerActorSystems, jobManagerActors) match {
       case (Some(jmActorSystems), Some(jmActors)) =>
-        if (recoveryMode == HighAvailabilityMode.NONE) {
+        if (haMode == HighAvailabilityMode.NONE) {
           new StandaloneLeaderRetrievalService(
             AkkaUtils.getAkkaURL(jmActorSystems(0), jmActors(0)))
         } else {

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java
index bd4723f..8464d68 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobRecoveryITCase.java
@@ -68,9 +68,9 @@ public class BlobRecoveryITCase {
 
 		try {
 			Configuration config = new Configuration();
-			config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
+			config.setString(ConfigConstants.HA_MODE, "ZOOKEEPER");
 			config.setString(ConfigConstants.STATE_BACKEND, "FILESYSTEM");
-			config.setString(ConfigConstants.ZOOKEEPER_HA_PATH, recoveryDir.getPath());
+			config.setString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, recoveryDir.getPath());
 
 			for (int i = 0; i < server.length; i++) {
 				server[i] = new BlobServer(config);

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorRecoveryITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorRecoveryITCase.java
index cc1994a..e947744 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorRecoveryITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/client/JobClientActorRecoveryITCase.java
@@ -20,7 +20,6 @@ package org.apache.flink.runtime.client;
 
 import akka.actor.PoisonPill;
 import org.apache.curator.test.TestingServer;
-import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
@@ -82,7 +81,7 @@ public class JobClientActorRecoveryITCase extends TestLogger {
 	public void testJobClientRecovery() throws Exception {
 		File rootFolder = tempFolder.getRoot();
 
-		Configuration config = ZooKeeperTestUtils.createZooKeeperRecoveryModeConfig(
+		Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(
 			zkServer.getConnectString(),
 			rootFolder.getPath());
 

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java
index a3fe0d4..f6bed56 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java
@@ -63,9 +63,9 @@ public class BlobLibraryCacheRecoveryITCase {
 
 		try {
 			Configuration config = new Configuration();
-			config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
+			config.setString(ConfigConstants.HA_MODE, "ZOOKEEPER");
 			config.setString(ConfigConstants.STATE_BACKEND, "FILESYSTEM");
-			config.setString(ConfigConstants.ZOOKEEPER_HA_PATH, temporaryFolder.getRoot().getAbsolutePath());
+			config.setString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, temporaryFolder.getRoot().getAbsolutePath());
 
 			for (int i = 0; i < server.length; i++) {
 				server[i] = new BlobServer(config);

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/HighAvailabilityModeTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/HighAvailabilityModeTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/HighAvailabilityModeTest.java
new file mode 100644
index 0000000..04c0e48
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/HighAvailabilityModeTest.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobmanager;
+
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+public class HighAvailabilityModeTest {
+
+	// Default HA mode
+	private final static HighAvailabilityMode DEFAULT_HA_MODE = HighAvailabilityMode.valueOf(
+			ConfigConstants.DEFAULT_HA_MODE.toUpperCase());
+
+	/**
+	 * Tests HA mode configuration.
+	 */
+	@Test
+	public void testFromConfig() throws Exception {
+		Configuration config = new Configuration();
+
+		// Check default
+		assertEquals(DEFAULT_HA_MODE, HighAvailabilityMode.fromConfig(config));
+
+		// Check not equals default
+		config.setString(ConfigConstants.HA_MODE, HighAvailabilityMode.ZOOKEEPER.name().toLowerCase());
+		assertEquals(HighAvailabilityMode.ZOOKEEPER, HighAvailabilityMode.fromConfig(config));
+	}
+
+	/**
+	 * Tests HA mode configuration with deprecated config values.
+	 */
+	@Test
+	public void testDeprecatedFromConfig() throws Exception {
+		Configuration config = new Configuration();
+
+		// Check mapping of old default to new default
+		config.setString(ConfigConstants.RECOVERY_MODE, ConfigConstants.DEFAULT_RECOVERY_MODE);
+		assertEquals(DEFAULT_HA_MODE, HighAvailabilityMode.fromConfig(config));
+
+		// Check deprecated config
+		config.setString(ConfigConstants.RECOVERY_MODE, HighAvailabilityMode.ZOOKEEPER.name().toLowerCase());
+		assertEquals(HighAvailabilityMode.ZOOKEEPER, HighAvailabilityMode.fromConfig(config));
+
+		// Check precedence over deprecated config
+		config.setString(ConfigConstants.HA_MODE, HighAvailabilityMode.NONE.name().toLowerCase());
+		config.setString(ConfigConstants.RECOVERY_MODE, HighAvailabilityMode.ZOOKEEPER.name().toLowerCase());
+
+		assertEquals(HighAvailabilityMode.NONE, HighAvailabilityMode.fromConfig(config));
+	}
+
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java
index d980517..0e1c7c5 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/JobManagerHARecoveryTest.java
@@ -123,8 +123,8 @@ public class JobManagerHARecoveryTest {
 		ActorRef jobManager = null;
 		ActorRef taskManager = null;
 
-		flinkConfiguration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
-		flinkConfiguration.setString(ConfigConstants.ZOOKEEPER_HA_PATH, temporaryFolder.newFolder().toString());
+		flinkConfiguration.setString(ConfigConstants.HA_MODE, "zookeeper");
+		flinkConfiguration.setString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, temporaryFolder.newFolder().toString());
 		flinkConfiguration.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, slots);
 
 		try {

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java
index 5c696ce..ed4d530 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/JobManagerLeaderElectionTest.java
@@ -101,7 +101,7 @@ public class JobManagerLeaderElectionTest extends TestLogger {
 	@Test
 	public void testLeaderElection() throws Exception {
 		final Configuration configuration = ZooKeeperTestUtils
-			.createZooKeeperRecoveryModeConfig(
+			.createZooKeeperHAConfig(
 				testingServer.getConnectString(),
 				tempFolder.getRoot().getPath());
 
@@ -130,7 +130,7 @@ public class JobManagerLeaderElectionTest extends TestLogger {
 	@Test
 	public void testLeaderReelection() throws Exception {
 		final Configuration configuration = ZooKeeperTestUtils
-			.createZooKeeperRecoveryModeConfig(
+			.createZooKeeperHAConfig(
 				testingServer.getConnectString(),
 				tempFolder.getRoot().getPath());
 

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java
index 048fbee..e20985b 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderElectionTest.java
@@ -90,7 +90,7 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	public void testZooKeeperLeaderElectionRetrieval() throws Exception {
 		Configuration configuration = new Configuration();
 		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_MODE, "zookeeper");
 
 		ZooKeeperLeaderElectionService leaderElectionService = null;
 		ZooKeeperLeaderRetrievalService leaderRetrievalService = null;
@@ -135,7 +135,7 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	public void testZooKeeperReelection() throws Exception {
 		Configuration configuration = new Configuration();
 		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_MODE, "zookeeper");
 
 		Deadline deadline = new FiniteDuration(5, TimeUnit.MINUTES).fromNow();
 
@@ -218,7 +218,7 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	public void testZooKeeperReelectionWithReplacement() throws Exception {
 		Configuration configuration = new Configuration();
 		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_MODE, "zookeeper");
 
 		int num = 3;
 		int numTries = 30;
@@ -296,7 +296,7 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 
 		Configuration configuration = new Configuration();
 		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_MODE, "zookeeper");
 		configuration.setString(ConfigConstants.HA_ZOOKEEPER_LEADER_PATH, leaderPath);
 
 		ZooKeeperLeaderElectionService leaderElectionService = null;
@@ -380,7 +380,7 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	public void testExceptionForwarding() throws Exception {
 		Configuration configuration = new Configuration();
 		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_MODE, "zookeeper");
 
 		ZooKeeperLeaderElectionService leaderElectionService = null;
 		ZooKeeperLeaderRetrievalService leaderRetrievalService = null;
@@ -449,7 +449,7 @@ public class ZooKeeperLeaderElectionTest extends TestLogger {
 	public void testEphemeralZooKeeperNodes() throws Exception {
 		Configuration configuration = new Configuration();
 		configuration.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-		configuration.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		configuration.setString(ConfigConstants.HA_MODE, "zookeeper");
 
 		ZooKeeperLeaderElectionService leaderElectionService;
 		ZooKeeperLeaderRetrievalService leaderRetrievalService = null;

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java
index 5aace34..0fe0644 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/leaderelection/ZooKeeperLeaderRetrievalTest.java
@@ -23,7 +23,6 @@ import org.apache.curator.test.TestingServer;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.runtime.jobmanager.JobManager;
-import org.apache.flink.runtime.jobmanager.HighAvailabilityMode;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
 import org.apache.flink.runtime.util.LeaderRetrievalUtils;
 import org.apache.flink.runtime.util.ZooKeeperUtils;
@@ -44,7 +43,6 @@ import java.net.UnknownHostException;
 import java.util.concurrent.TimeUnit;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 
 public class ZooKeeperLeaderRetrievalTest extends TestLogger{
 
@@ -84,7 +82,7 @@ public class ZooKeeperLeaderRetrievalTest extends TestLogger{
 
 		long sleepingTime = 1000;
 
-		config.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		config.setString(ConfigConstants.HA_MODE, "zookeeper");
 		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
 
 		LeaderElectionService leaderElectionService = null;
@@ -181,7 +179,7 @@ public class ZooKeeperLeaderRetrievalTest extends TestLogger{
 	@Test
 	public void testTimeoutOfFindConnectingAddress() throws Exception {
 		Configuration config = new Configuration();
-		config.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+		config.setString(ConfigConstants.HA_MODE, "zookeeper");
 		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
 
 		FiniteDuration timeout = new FiniteDuration(10, TimeUnit.SECONDS);
@@ -192,46 +190,6 @@ public class ZooKeeperLeaderRetrievalTest extends TestLogger{
 		assertEquals(InetAddress.getLocalHost(), result);
 	}
 
-	@Test
-	public void testConnectionToZookeeperOverridingOldConfig() throws Exception {
-		Configuration config = new Configuration();
-		// The new config will be taken into effect
-		config.setString(ConfigConstants.RECOVERY_MODE, "standalone");
-		config.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
-		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-
-		FiniteDuration timeout = new FiniteDuration(10, TimeUnit.SECONDS);
-
-		LeaderRetrievalService leaderRetrievalService =
-			LeaderRetrievalUtils.createLeaderRetrievalService(config);
-		InetAddress result = LeaderRetrievalUtils.findConnectingAddress(leaderRetrievalService, timeout);
-
-		assertEquals(InetAddress.getLocalHost(), result);
-	}
-
-	@Test
-	public void testConnectionToStandAloneLeaderOverridingOldConfig() throws Exception {
-		Configuration config = new Configuration();
-		// The new config will be taken into effect
-		config.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
-		config.setString(ConfigConstants.HIGH_AVAILABILITY, "none");
-		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-
-		HighAvailabilityMode mode = HighAvailabilityMode.fromConfig(config);
-		assertTrue(mode == HighAvailabilityMode.NONE);
-	}
-
-	@Test
-	public void testConnectionToZookeeperUsingOldConfig() throws Exception {
-		Configuration config = new Configuration();
-		// The new config will be taken into effect
-		config.setString(ConfigConstants.RECOVERY_MODE, "zookeeper");
-		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, testingServer.getConnectString());
-
-		HighAvailabilityMode mode = HighAvailabilityMode.fromConfig(config);
-		assertTrue(mode == HighAvailabilityMode.ZOOKEEPER);
-	}
-
 	class FindConnectingAddress implements Runnable {
 
 		private final Configuration config;

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java
index 66d523f..e8981a0 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/JobManagerProcess.java
@@ -212,7 +212,7 @@ public class JobManagerProcess extends TestJvmProcess {
 		 * <code>--port PORT</code>.
 		 *
 		 * <p>Other arguments are parsed to a {@link Configuration} and passed to the
-		 * JobManager, for instance: <code>--high-availability ZOOKEEPER --recovery.zookeeper.quorum
+		 * JobManager, for instance: <code>--high-availability ZOOKEEPER --high-availability.zookeeper.quorum
 		 * "xyz:123:456"</code>.
 		 */
 		public static void main(String[] args) {

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java
index 417dc88..58bc50e 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/TaskManagerProcess.java
@@ -96,7 +96,7 @@ public class TaskManagerProcess extends TestJvmProcess {
 
 		/**
 		 * All arguments are parsed to a {@link Configuration} and passed to the Taskmanager,
-		 * for instance: <code>--high-availability ZOOKEEPER --recovery.zookeeper.quorum "xyz:123:456"</code>.
+		 * for instance: <code>--high-availability ZOOKEEPER --high-availability.zookeeper.quorum "xyz:123:456"</code>.
 		 */
 		public static void main(String[] args) throws Exception {
 			try {

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java
index c94842f..7dd7067 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/ZooKeeperTestUtils.java
@@ -38,10 +38,10 @@ public class ZooKeeperTestUtils {
 	 *                          recovery)
 	 * @return A new configuration to operate in {@link HighAvailabilityMode#ZOOKEEPER}.
 	 */
-	public static Configuration createZooKeeperRecoveryModeConfig(
+	public static Configuration createZooKeeperHAConfig(
 			String zooKeeperQuorum, String fsStateHandlePath) {
 
-		return setZooKeeperRecoveryMode(new Configuration(), zooKeeperQuorum, fsStateHandlePath);
+		return configureZooKeeperHA(new Configuration(), zooKeeperQuorum, fsStateHandlePath);
 	}
 
 	/**
@@ -53,7 +53,7 @@ public class ZooKeeperTestUtils {
 	 *                          recovery)
 	 * @return The modified configuration to operate in {@link HighAvailabilityMode#ZOOKEEPER}.
 	 */
-	public static Configuration setZooKeeperRecoveryMode(
+	public static Configuration configureZooKeeperHA(
 			Configuration config,
 			String zooKeeperQuorum,
 			String fsStateHandlePath) {
@@ -66,7 +66,7 @@ public class ZooKeeperTestUtils {
 		config.setInteger(ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, -1);
 
 		// ZooKeeper recovery mode
-		config.setString(ConfigConstants.HIGH_AVAILABILITY, "ZOOKEEPER");
+		config.setString(ConfigConstants.HA_MODE, "ZOOKEEPER");
 		config.setString(ConfigConstants.HA_ZOOKEEPER_QUORUM_KEY, zooKeeperQuorum);
 
 		int connTimeout = 5000;
@@ -81,7 +81,7 @@ public class ZooKeeperTestUtils {
 		// File system state backend
 		config.setString(ConfigConstants.STATE_BACKEND, "FILESYSTEM");
 		config.setString(FsStateBackendFactory.CHECKPOINT_DIRECTORY_URI_CONF_KEY, fsStateHandlePath + "/checkpoints");
-		config.setString(ConfigConstants.ZOOKEEPER_HA_PATH, fsStateHandlePath + "/recovery");
+		config.setString(ConfigConstants.HA_ZOOKEEPER_STORAGE_PATH, fsStateHandlePath + "/recovery");
 
 		// Akka failure detection and execution retries
 		config.setString(ConfigConstants.AKKA_WATCH_HEARTBEAT_INTERVAL, "1000 ms");

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala b/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala
index 7d2b86c..5628f3c 100644
--- a/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala
+++ b/flink-runtime/src/test/scala/org/apache/flink/runtime/testingUtils/TestingUtils.scala
@@ -423,8 +423,8 @@ object TestingUtils {
       prefix: String)
     : ActorGateway = {
 
-    configuration.setString(ConfigConstants.HIGH_AVAILABILITY,
-      ConfigConstants.DEFAULT_HIGH_AVAILABILTY)
+    configuration.setString(ConfigConstants.HA_MODE,
+      ConfigConstants.DEFAULT_HA_MODE)
 
       val (actor, _) = JobManager.startJobManagerActors(
         configuration,
@@ -503,8 +503,8 @@ object TestingUtils {
       configuration: Configuration)
   : ActorGateway = {
 
-    configuration.setString(ConfigConstants.HIGH_AVAILABILITY,
-      ConfigConstants.DEFAULT_HIGH_AVAILABILTY)
+    configuration.setString(ConfigConstants.HA_MODE,
+      ConfigConstants.DEFAULT_HA_MODE)
 
     val actor = FlinkResourceManager.startResourceManagerActors(
       configuration,

http://git-wip-us.apache.org/repos/asf/flink/blob/58165d69/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java
----------------------------------------------------------------------
diff --git a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java
index 7e5acee..4014b80 100644
--- a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java
+++ b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestBaseUtils.java
@@ -120,7 +120,7 @@ public class TestBaseUtils extends TestLogger {
 
 		if (startZooKeeper) {
 			config.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, 3);
-			config.setString(ConfigConstants.HIGH_AVAILABILITY, "zookeeper");
+			config.setString(ConfigConstants.HA_MODE, "zookeeper");
 		}
 
 		return startCluster(config, singleActorSystem);


[16/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/event_timestamps_watermarks.md
----------------------------------------------------------------------
diff --git a/docs/dev/event_timestamps_watermarks.md b/docs/dev/event_timestamps_watermarks.md
new file mode 100644
index 0000000..8d152df
--- /dev/null
+++ b/docs/dev/event_timestamps_watermarks.md
@@ -0,0 +1,329 @@
+---
+title: "Generating Timestamps / Watermarks"
+nav-parent_id: event_time
+nav-pos: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* toc
+{:toc}
+
+
+This section is relevant for programs running on **Event Time**. For an introduction to *Event Time*,
+*Processing Time*, and *Ingestion Time*, please refer to the [event time introduction]({{ site.baseurl }}/dev/event_time.html).
+
+To work with *Event Time*, streaming programs need to set the *time characteristic* accordingly.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
+{% endhighlight %}
+</div>
+</div>
+
+## Assigning Timestamps
+
+In order to work with *Event Time*, Flink needs to know the events' *timestamps*, meaning each element in the
+stream needs to get its event timestamp *assigned*. This usually happens by accessing/extracting the
+timestamp from some field in the element.
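+
+For the examples in this section, assume a simple event type that carries its creation time as a field (a
+hypothetical sketch; only the timestamp accessor used below is shown):
+
+{% highlight java %}
+public class MyEvent {
+
+	private long creationTime; // event timestamp in milliseconds since the epoch
+
+	public long getCreationTime() {
+		return creationTime;
+	}
+}
+{% endhighlight %}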
+
+Timestamp assignment goes hand-in-hand with generating watermarks, which tell the system about
+the progress in event time.
+
+There are two ways to assign timestamps and generate Watermarks:
+
+  1. Directly in the data stream source
+  2. Via a timestamp assigner / watermark generator: in Flink, timestamp assigners also define the watermarks to be emitted
+
+<span class="label label-danger">Attention</span> Both timestamps and watermarks are specified as
+milliseconds since the Java epoch of 1970-01-01T00:00:00Z.
+
+### Source Functions with Timestamps and Watermarks
+
+Stream sources can also directly assign timestamps to the elements they produce and emit Watermarks. In that case,
+no Timestamp Assigner is needed.
+
+To assign a timestamp to an element in the source directly, the source must use the `collectWithTimestamp(...)`
+method on the `SourceContext`. To generate Watermarks, the source must call the `emitWatermark(Watermark)` function.
+
+Below is a simple example of a source *(non-checkpointed)* that assigns timestamps and generates Watermarks
+depending on special events:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+@Override
+public void run(SourceContext<MyType> ctx) throws Exception {
+	while (/* condition */) {
+		MyType next = getNext();
+		ctx.collectWithTimestamp(next, next.getEventTimestamp());
+
+		if (next.hasWatermarkTime()) {
+			ctx.emitWatermark(new Watermark(next.getWatermarkTime()));
+		}
+	}
+}
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+override def run(ctx: SourceContext[MyType]): Unit = {
+	while (/* condition */) {
+		val next: MyType = getNext()
+		ctx.collectWithTimestamp(next, next.eventTimestamp)
+
+		if (next.hasWatermarkTime) {
+			ctx.emitWatermark(new Watermark(next.getWatermarkTime))
+		}
+	}
+}
+{% endhighlight %}
+</div>
+</div>
+
+*Note:* If the streaming program uses a TimestampAssigner on a stream where elements have a timestamp already,
+those timestamps will be overwritten by the TimestampAssigner. Similarly, existing Watermarks will be overwritten.
+
+
+### Timestamp Assigners / Watermark Generators
+
+Timestamp Assigners take a stream and produce a new stream with timestamped elements and watermarks. If the
+original stream had timestamps and/or watermarks already, the timestamp assigner overwrites them.
+
+Timestamp assigners are usually specified immediately after the data source, but this is not strictly required.
+A common pattern, for example, is to parse (*MapFunction*) and filter (*FilterFunction*) before the timestamp assigner.
+In any case, the timestamp assigner needs to be specified before the first operation on event time
+(such as the first window operation). As a special case, when using Kafka as the source of a streaming job,
+Flink allows the specification of a timestamp assigner / watermark emitter inside
+the source (or consumer) itself. More information on how to do so can be found in the
+[Kafka Connector documentation]({{ site.baseurl }}/dev/connectors/kafka.html).
+
+
+**NOTE:** The remainder of this section presents the main interfaces a programmer has
+to implement in order to create her own timestamp extractors/watermark emitters.
+To see the pre-implemented extractors that ship with Flink, please refer to the
+[Pre-defined Timestamp Extractors / Watermark Emitters]({{ site.baseurl }}/dev/event_timestamp_extractors.html) page.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
+
+DataStream<MyEvent> stream = env.readFile(
+        myFormat, myFilePath, FileProcessingMode.PROCESS_CONTINUOUSLY, 100,
+        FilePathFilter.createDefaultFilter(), typeInfo);
+
+DataStream<MyEvent> withTimestampsAndWatermarks = stream
+        .filter( event -> event.severity() == WARNING )
+        .assignTimestampsAndWatermarks(new MyTimestampsAndWatermarks());
+
+withTimestampsAndWatermarks
+        .keyBy( (event) -> event.getGroup() )
+        .timeWindow(Time.seconds(10))
+        .reduce( (a, b) -> a.add(b) )
+        .addSink(...);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
+
+val stream: DataStream[MyEvent] = env.readFile(
+         myFormat, myFilePath, FileProcessingMode.PROCESS_CONTINUOUSLY, 100,
+         FilePathFilter.createDefaultFilter())
+
+val withTimestampsAndWatermarks: DataStream[MyEvent] = stream
+        .filter( _.severity == WARNING )
+        .assignTimestampsAndWatermarks(new MyTimestampsAndWatermarks())
+
+withTimestampsAndWatermarks
+        .keyBy( _.getGroup )
+        .timeWindow(Time.seconds(10))
+        .reduce( (a, b) => a.add(b) )
+        .addSink(...)
+{% endhighlight %}
+</div>
+</div>
+
+
+#### **With Periodic Watermarks**
+
+The `AssignerWithPeriodicWatermarks` assigns timestamps and generates watermarks periodically (possibly depending
+on the stream elements, or purely based on processing time).
+
+The interval (every *n* milliseconds) in which the watermark will be generated is defined via
+`ExecutionConfig.setAutoWatermarkInterval(...)`. Each time the interval fires, the assigner's `getCurrentWatermark()`
+method will be called, and a new Watermark will be emitted if the returned Watermark is non-null and larger
+than the previous Watermark.
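+
+For example, the interval can be set on the execution config as sketched below (the value of 100 ms is purely
+illustrative):
+
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// probe the assigner for a new watermark every 100 milliseconds
+env.getConfig().setAutoWatermarkInterval(100);
+{% endhighlight %}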
+
+Two simple examples of timestamp assigners with periodic watermark generation are below.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+/**
+ * This generator generates watermarks assuming that elements arrive out of order, but only to a certain degree.
+ * The latest elements for a certain timestamp t will arrive at most n milliseconds after the earliest
+ * elements for timestamp t.
+ */
+public class BoundedOutOfOrdernessGenerator extends AssignerWithPeriodicWatermarks<MyEvent> {
+
+    private final long maxOutOfOrderness = 3500; // 3.5 seconds
+
+    private long currentMaxTimestamp;
+
+    @Override
+    public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
+        long timestamp = element.getCreationTime();
+        currentMaxTimestamp = Math.max(timestamp, currentMaxTimestamp);
+        return timestamp;
+    }
+
+    @Override
+    public Watermark getCurrentWatermark() {
+        // return the watermark as current highest timestamp minus the out-of-orderness bound
+        return new Watermark(currentMaxTimestamp - maxOutOfOrderness);
+    }
+}
+
+/**
+ * This generator generates watermarks that are lagging behind processing time by a certain amount.
+ * It assumes that elements arrive in Flink after at most a certain time.
+ */
+public class TimeLagWatermarkGenerator extends AssignerWithPeriodicWatermarks<MyEvent> {
+
+	private final long maxTimeLag = 5000; // 5 seconds
+
+	@Override
+	public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
+		return element.getCreationTime();
+	}
+
+	@Override
+	public Watermark getCurrentWatermark() {
+		// return the watermark as current time minus the maximum time lag
+		return new Watermark(System.currentTimeMillis() - maxTimeLag);
+	}
+}
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+/**
+ * This generator generates watermarks assuming that elements arrive out of order, but only to a certain degree.
+ * The latest elements for a certain timestamp t will arrive at most n milliseconds after the earliest
+ * elements for timestamp t.
+ */
+class BoundedOutOfOrdernessGenerator extends AssignerWithPeriodicWatermarks[MyEvent] {
+
+    val maxOutOfOrderness = 3500L // 3.5 seconds
+
+    var currentMaxTimestamp: Long = _
+
+    override def extractTimestamp(element: MyEvent, previousElementTimestamp: Long): Long = {
+        val timestamp = element.getCreationTime()
+        currentMaxTimestamp = Math.max(timestamp, currentMaxTimestamp)
+        timestamp
+    }
+
+    override def getCurrentWatermark(): Watermark = {
+        // return the watermark as current highest timestamp minus the out-of-orderness bound
+        new Watermark(currentMaxTimestamp - maxOutOfOrderness)
+    }
+    }
+}
+
+/**
+ * This generator generates watermarks that are lagging behind processing time by a certain amount.
+ * It assumes that elements arrive in Flink after at most a certain time.
+ */
+class TimeLagWatermarkGenerator extends AssignerWithPeriodicWatermarks[MyEvent] {
+
+    val maxTimeLag = 5000L // 5 seconds
+
+    override def extractTimestamp(element: MyEvent, previousElementTimestamp: Long): Long = {
+        element.getCreationTime
+    }
+
+    override def getCurrentWatermark(): Watermark = {
+        // return the watermark as current time minus the maximum time lag
+        new Watermark(System.currentTimeMillis() - maxTimeLag)
+    }
+}
+{% endhighlight %}
+</div>
+</div>
+
+#### **With Punctuated Watermarks**
+
+To generate Watermarks whenever a certain event indicates that a new watermark can be generated, use the
+`AssignerWithPunctuatedWatermarks`. For this class, Flink will first call the `extractTimestamp(...)` method
+to assign the element a timestamp, and then immediately call the
+`checkAndGetNextWatermark(...)` method for that element.
+
+The `checkAndGetNextWatermark(...)` method gets the timestamp that was assigned in the `extractTimestamp(...)`
+method, and can decide whether it wants to generate a Watermark. Whenever the `checkAndGetNextWatermark(...)`
+method returns a non-null Watermark, and that Watermark is larger than the latest previous Watermark, that
+new Watermark will be emitted.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+public class PunctuatedAssigner extends AssignerWithPunctuatedWatermarks<MyEvent> {
+
+	@Override
+	public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
+		return element.getCreationTime();
+	}
+
+	@Override
+	public Watermark checkAndGetNextWatermark(MyEvent lastElement, long extractedTimestamp) {
+		return lastElement.hasWatermarkMarker() ? new Watermark(extractedTimestamp) : null;
+	}
+}
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+class PunctuatedAssigner extends AssignerWithPunctuatedWatermarks[MyEvent] {
+
+	override def extractTimestamp(element: MyEvent, previousElementTimestamp: Long): Long = {
+		element.getCreationTime
+	}
+
+	override def checkAndGetNextWatermark(lastElement: MyEvent, extractedTimestamp: Long): Watermark = {
+		if (lastElement.hasWatermarkMarker()) new Watermark(extractedTimestamp) else null
+	}
+}
+{% endhighlight %}
+</div>
+</div>
+
+*Note:* It is possible to generate a watermark on every single event. However, because each watermark causes some
+computation downstream, an excessive number of watermarks slows down performance.
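+
+One way to avoid this, sketched below for the `MyEvent` type used above, is to hold back punctuated watermarks
+until event time has advanced by some minimum amount (the `ThrottledPunctuatedAssigner` name and the 1000 ms
+threshold are assumptions for illustration):
+
+{% highlight java %}
+public class ThrottledPunctuatedAssigner extends AssignerWithPunctuatedWatermarks<MyEvent> {
+
+	private static final long MIN_ADVANCE = 1000; // emit at most one watermark per second of event time
+
+	private long lastWatermarkTime = Long.MIN_VALUE;
+
+	@Override
+	public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
+		return element.getCreationTime();
+	}
+
+	@Override
+	public Watermark checkAndGetNextWatermark(MyEvent lastElement, long extractedTimestamp) {
+		if (lastElement.hasWatermarkMarker() && extractedTimestamp >= lastWatermarkTime + MIN_ADVANCE) {
+			lastWatermarkTime = extractedTimestamp;
+			return new Watermark(extractedTimestamp);
+		}
+		return null;
+	}
+}
+{% endhighlight %}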

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/index.md b/docs/dev/index.md
new file mode 100644
index 0000000..67916c1
--- /dev/null
+++ b/docs/dev/index.md
@@ -0,0 +1,25 @@
+---
+title: "Application Development"
+nav-id: dev
+nav-title: '<i class="fa fa-code" aria-hidden="true"></i> Application Development'
+nav-parent_id: root
+nav-pos: 3
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/java8.md
----------------------------------------------------------------------
diff --git a/docs/dev/java8.md b/docs/dev/java8.md
new file mode 100644
index 0000000..3792e27
--- /dev/null
+++ b/docs/dev/java8.md
@@ -0,0 +1,196 @@
+---
+title: "Java 8"
+nav-parent_id: apis
+nav-pos: 105
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Java 8 introduces several new language features designed for faster and clearer coding. With the most important feature,
+the so-called "Lambda Expressions", Java 8 opens the door to functional programming. Lambda Expressions allow for implementing and
+passing functions in a straightforward way without having to declare additional (anonymous) classes.
+
+The newest version of Flink supports the usage of Lambda Expressions for all operators of the Java API.
+This document shows how to use Lambda Expressions and describes current limitations. For a general introduction to the
+Flink API, please refer to the [Programming Guide]({{ site.baseurl }}/dev/api_concepts.html)
+
+* TOC
+{:toc}
+
+### Examples
+
+The following example illustrates how to implement a simple, inline `map()` function that squares its input using a Lambda Expression.
+The types of the input `i` and output parameters of the `map()` function need not be declared, as they are inferred by the Java 8 compiler.
+
+~~~java
+env.fromElements(1, 2, 3)
+// returns the squared i
+.map(i -> i*i)
+.print();
+~~~
+
+The next two examples show different implementations of a function that uses a `Collector` for output.
+Functions such as `flatMap()` require an output type (in this case `String`) to be defined for the `Collector` in order to be type-safe.
+If the `Collector` type cannot be inferred from the surrounding context, it needs to be declared manually in the Lambda Expression's parameter list.
+Otherwise the output will be treated as type `Object`, which can lead to undesired behaviour.
+
+~~~java
+DataSet<Integer> input = env.fromElements(1, 2, 3);
+
+// collector type must be declared
+input.flatMap((Integer number, Collector<String> out) -> {
+    StringBuilder builder = new StringBuilder();
+    for(int i = 0; i < number; i++) {
+        builder.append("a");
+        out.collect(builder.toString());
+    }
+})
+// returns (on separate lines) "a", "a", "aa", "a", "aa", "aaa"
+.print();
+~~~
+
+~~~java
+DataSet<Integer> input = env.fromElements(1, 2, 3);
+
+// the collector type need not be declared here; it is inferred from the type of the target dataset
+DataSet<String> manyALetters = input.flatMap((number, out) -> {
+    StringBuilder builder = new StringBuilder();
+    for(int i = 0; i < number; i++) {
+       builder.append("a");
+       out.collect(builder.toString());
+    }
+});
+
+// returns (on separate lines) "a", "a", "aa", "a", "aa", "aaa"
+manyALetters.print();
+~~~
+
+The following code demonstrates a word count which makes extensive use of Lambda Expressions.
+
+~~~java
+DataSet<String> input = env.fromElements("Please count", "the words", "but not this");
+
+// filter out strings that contain "not"
+input.filter(line -> !line.contains("not"))
+// split each line by space
+.map(line -> line.split(" "))
+// emit a pair <word,1> for each array element
+.flatMap((String[] wordArray, Collector<Tuple2<String, Integer>> out)
+    -> Arrays.stream(wordArray).forEach(t -> out.collect(new Tuple2<>(t, 1)))
+    )
+// group and sum up
+.groupBy(0).sum(1)
+// print
+.print();
+~~~
+
+### Compiler Limitations
+Currently, Flink fully supports jobs containing Lambda Expressions only if they are **compiled with the Eclipse JDT compiler contained in Eclipse Luna 4.4.2 (and above)**.
+
+Only the Eclipse JDT compiler preserves the generic type information necessary to use the entire Lambda Expressions feature type-safely.
+Other compilers, such as the OpenJDK's and Oracle JDK's `javac`, throw away all generic parameters related to Lambda Expressions. This means that types such as `Tuple2<String, Integer>` or `Collector<String>` declared as a Lambda function input or output parameter will be pruned to `Tuple2` or `Collector` in the compiled `.class` files, which is too little information for Flink.
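+
+To make the effect of this pruning concrete, consider the following sketch (it reuses the `env` from the examples above; the comments describe the effect on the compiled signatures, not literal compiler output):
+
+~~~java
+DataSet<Integer> input = env.fromElements(1, 2, 3);
+
+// the source code declares the full generic types
+input.flatMap((Integer number, Collector<String> out) -> out.collect(number.toString()));
+
+// The Eclipse JDT compiler (with the flag described below) keeps the parameter
+// signature (Integer, Collector<String>) in the .class file, while plain javac
+// keeps only the raw signature (Integer, Collector).
+~~~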
+
+How to compile a Flink job that contains Lambda Expressions with the JDT compiler will be covered in the next section.
+
+However, functions such as `map()` or `filter()` can be implemented with Lambda Expressions using Java 8 compilers other than the Eclipse JDT compiler, as long as the function has no `Collector`s or `Iterable`s *and* only if the function handles unparameterized types such as `Integer`, `Long`, `String`, or `MyOwnClass` (types without Generics!).
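+
+For example, the following sketch works with any compliant Java 8 compiler, because both the input type (`String`) and the output type (`Integer`) are unparameterized (the variable names are illustrative):
+
+~~~java
+DataSet<String> fruits = env.fromElements("apple", "banana", "cherry");
+
+// no Collector and no generic parameter types involved, so plain javac suffices
+DataSet<Integer> lengths = fruits.map(s -> s.length());
+
+// prints (on separate lines) 5, 6, 6
+lengths.print();
+~~~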
+
+#### Compile Flink jobs with the Eclipse JDT compiler and Maven
+
+If you are using the Eclipse IDE, you can run and debug your Flink code within the IDE without any problems after some configuration steps. The Eclipse IDE by default compiles its Java sources with the Eclipse JDT compiler. The next section describes how to configure the Eclipse IDE.
+
+If you are using a different IDE, such as IntelliJ IDEA, or if you want to package your JAR file with Maven to run your job on a cluster, you need to modify your project's `pom.xml` file and build your program with Maven. The [quickstart]({{site.baseurl}}/quickstart/setup_quickstart.html) contains preconfigured Maven projects which can be used for new projects or as a reference. Uncomment the mentioned lines in your generated quickstart `pom.xml` file if you want to use Java 8 with Lambda Expressions.
+
+Alternatively, you can manually insert the following lines into your Maven `pom.xml` file. Maven will then use the Eclipse JDT compiler for compilation.
+
+~~~xml
+<!-- put these lines under "project/build/pluginManagement/plugins" of your pom.xml -->
+
+<plugin>
+    <!-- Use compiler plugin with tycho as the adapter to the JDT compiler. -->
+    <artifactId>maven-compiler-plugin</artifactId>
+    <configuration>
+        <source>1.8</source>
+        <target>1.8</target>
+        <compilerId>jdt</compilerId>
+    </configuration>
+    <dependencies>
+        <!-- This dependency provides the implementation of compiler "jdt": -->
+        <dependency>
+            <groupId>org.eclipse.tycho</groupId>
+            <artifactId>tycho-compiler-jdt</artifactId>
+            <version>0.21.0</version>
+        </dependency>
+    </dependencies>
+</plugin>
+~~~
+
+If you are using Eclipse for development, the m2e plugin might complain about the inserted lines above and mark your `pom.xml` as invalid. If so, insert the following lines into your `pom.xml`.
+
+~~~xml
+<!-- put these lines under "project/build/pluginManagement/plugins/plugin[groupId="org.eclipse.m2e", artifactId="lifecycle-mapping"]/configuration/lifecycleMappingMetadata/pluginExecutions" of your pom.xml -->
+
+<pluginExecution>
+    <pluginExecutionFilter>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <versionRange>[3.1,)</versionRange>
+        <goals>
+            <goal>testCompile</goal>
+            <goal>compile</goal>
+        </goals>
+    </pluginExecutionFilter>
+    <action>
+        <ignore></ignore>
+    </action>
+</pluginExecution>
+~~~
+
+#### Run and debug Flink jobs within the Eclipse IDE
+
+First of all, make sure you are running a current version of Eclipse IDE (4.4.2 or later). Also make sure that you have a Java 8 Runtime Environment (JRE) installed in Eclipse IDE (`Window` -> `Preferences` -> `Java` -> `Installed JREs`).
+
+Create/Import your Eclipse project.
+
+If you are using Maven, you also need to change the Java version in your `pom.xml` for the `maven-compiler-plugin`. Otherwise, right-click the `JRE System Library` section of your project and open the `Properties` window in order to switch to a Java 8 JRE (or above) that supports Lambda Expressions.
+
+The Eclipse JDT compiler needs a special compiler flag in order to store type information in `.class` files. Open the JDT configuration file at `{project directory}/.settings/org.eclipse.jdt.core.prefs` with your favorite text editor and add the following line:
+
+~~~
+org.eclipse.jdt.core.compiler.codegen.lambda.genericSignature=generate
+~~~
+
+If not already done, also modify the Java versions of the following properties to `1.8` (or above):
+
+~~~
+org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.8
+org.eclipse.jdt.core.compiler.compliance=1.8
+org.eclipse.jdt.core.compiler.source=1.8
+~~~
+
+After you have saved the file, perform a complete project refresh in Eclipse IDE.
+
+If you are using Maven, right click your Eclipse project and select `Maven` -> `Update Project...`.
+
+You have configured everything correctly if the following Flink program runs without exceptions:
+
+~~~java
+final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+// print() triggers the program execution, so no separate env.execute() call is needed
+env.fromElements(1, 2, 3).map((in) -> new Tuple1<String>(" " + in)).print();
+~~~

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libraries.md
----------------------------------------------------------------------
diff --git a/docs/dev/libraries.md b/docs/dev/libraries.md
new file mode 100644
index 0000000..dc22e97
--- /dev/null
+++ b/docs/dev/libraries.md
@@ -0,0 +1,24 @@
+---
+title: "Libraries"
+nav-id: libs
+nav-parent_id: dev
+nav-pos: 8
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/cep.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/cep.md b/docs/dev/libs/cep.md
new file mode 100644
index 0000000..77266bc
--- /dev/null
+++ b/docs/dev/libs/cep.md
@@ -0,0 +1,652 @@
+---
+title: "FlinkCEP - Complex event processing for Flink"
+nav-title: Event Processing (CEP)
+nav-parent_id: libs
+nav-pos: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+FlinkCEP is the complex event processing library for Flink.
+It allows you to easily detect complex event patterns in an endless stream of data.
+Complex events can then be constructed from matching sequences.
+This gives you the opportunity to quickly get hold of what's really important in your data.
+
+<span class="label label-danger">Attention</span> The events in the `DataStream` to which
+you want to apply pattern matching have to implement proper `equals()` and `hashCode()` methods
+because these are used for comparing and matching events.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Getting Started
+
+If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/api_concepts.html#linking-with-flink).
+Next, you have to add the FlinkCEP dependency to the `pom.xml` of your project.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-cep{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-cep-scala{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+</div>
+</div>
+
+Note that FlinkCEP is currently not part of the binary distribution.
+See [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution) how to link with it for cluster execution.
+
+Now you can start writing your first CEP program using the pattern API.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Event> input = ...
+
+Pattern<Event, ?> pattern = Pattern.begin("start").where(evt -> evt.getId() == 42)
+    .next("middle").subtype(SubEvent.class).where(subEvt -> subEvt.getVolume() >= 10.0)
+    .followedBy("end").where(evt -> evt.getName().equals("end"));
+
+PatternStream<Event> patternStream = CEP.pattern(input, pattern);
+
+DataStream<Alert> result = patternStream.select(pattern -> {
+    return createAlertFrom(pattern);
+});
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[Event] = ...
+
+val pattern = Pattern.begin[Event]("start").where(_.getId == 42)
+  .next("middle").subtype(classOf[SubEvent]).where(_.getVolume >= 10.0)
+  .followedBy("end").where(_.getName == "end")
+
+val patternStream = CEP.pattern(input, pattern)
+
+val result: DataStream[Alert] = patternStream.select(createAlert(_))
+{% endhighlight %}
+</div>
+</div>
+
+Note that we use Java 8 lambdas in our Java code examples to make them more succinct.
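+
+The examples above and below assume an event type roughly like the following sketch. The field set is hypothetical; what matters is that, as noted earlier, `equals()` and `hashCode()` are implemented:
+
+{% highlight java %}
+import java.util.Objects;
+
+public class Event {
+    private final int id;
+    private final String name;
+
+    public Event(int id, String name) {
+        this.id = id;
+        this.name = name;
+    }
+
+    public int getId() { return id; }
+    public String getName() { return name; }
+
+    // required for comparing and matching events
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) return true;
+        if (!(o instanceof Event)) return false;
+        Event other = (Event) o;
+        return id == other.id && Objects.equals(name, other.name);
+    }
+
+    @Override
+    public int hashCode() {
+        return Objects.hash(id, name);
+    }
+}
+{% endhighlight %}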
+
+## The Pattern API
+
+The pattern API allows you to quickly define complex event patterns.
+
+Each pattern consists of multiple stages, or what we call states.
+In order to go from one state to the next, the user can specify conditions.
+These conditions can be the contiguity of events or a filter condition on an event.
+
+Each pattern has to start with an initial state:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Pattern<Event, ?> start = Pattern.<Event>begin("start");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val start : Pattern[Event, _] = Pattern.begin("start")
+{% endhighlight %}
+</div>
+</div>
+
+Each state must have a unique name to identify the matched events later on.
+Additionally, we can specify a filter condition for the event to be accepted as the start event via the `where` method.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+start.where(new FilterFunction<Event>() {
+    @Override
+    public boolean filter(Event value) {
+        return ... // some condition
+    }
+});
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+start.where(event => ... /* some condition */)
+{% endhighlight %}
+</div>
+</div>
+
+We can also restrict the type of the accepted event to some subtype of the initial event type (here `Event`) via the `subtype` method.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+start.subtype(SubEvent.class).where(new FilterFunction<SubEvent>() {
+    @Override
+    public boolean filter(SubEvent value) {
+        return ... // some condition
+    }
+});
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+start.subtype(classOf[SubEvent]).where(subEvent => ... /* some condition */)
+{% endhighlight %}
+</div>
+</div>
+
+As can be seen here, the subtype condition can also be combined with an additional filter condition on the subtype.
+In fact, you can always provide multiple conditions by calling `where` and `subtype` multiple times.
+These conditions will then be combined using the logical AND operator.
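+
+For example, the following sketch (with hypothetical filter conditions) only matches events that are `SubEvent`s *and* pass both filters:
+
+{% highlight java %}
+start.subtype(SubEvent.class).where(new FilterFunction<SubEvent>() {
+    @Override
+    public boolean filter(SubEvent value) {
+        return value.getVolume() >= 10.0; // first condition
+    }
+}).where(new FilterFunction<SubEvent>() {
+    @Override
+    public boolean filter(SubEvent value) {
+        return value.getName().startsWith("sub"); // ANDed with the first condition
+    }
+});
+{% endhighlight %}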
+
+In order to construct OR conditions, one has to call the `or` method with a respective filter function.
+Any existing filter function is then ORed with the given one.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+pattern.where(new FilterFunction<Event>() {
+    @Override
+    public boolean filter(Event value) {
+        return ... // some condition
+    }
+}).or(new FilterFunction<Event>() {
+    @Override
+    public boolean filter(Event value) {
+        return ... // or condition
+    }
+});
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+pattern.where(event => ... /* some condition */).or(event => ... /* or condition */)
+{% endhighlight %}
+</div>
+</div>
+
+Next, we can append further states to detect complex patterns.
+We can control the contiguity of two succeeding events to be accepted by the pattern.
+
+Strict contiguity means that two matching events have to directly succeed each other.
+This means that no other events can occur in between.
+A strict contiguity pattern state can be created via the `next` method.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Pattern<Event, ?> strictNext = start.next("middle");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val strictNext: Pattern[Event, _] = start.next("middle")
+{% endhighlight %}
+</div>
+</div>
+
+Non-strict contiguity means that other events are allowed to occur in between two matching events.
+A non-strict contiguity pattern state can be created via the `followedBy` method.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Pattern<Event, ?> nonStrictNext = start.followedBy("middle");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val nonStrictNext : Pattern[Event, _] = start.followedBy("middle")
+{% endhighlight %}
+</div>
+</div>
+
+It is also possible to define a temporal constraint for the pattern to be valid.
+For example, one can define that a pattern should occur within 10 seconds via the `within` method.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+next.within(Time.seconds(10));
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+next.within(Time.seconds(10))
+{% endhighlight %}
+</div>
+</div>
+
+<br />
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+<table class="table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 25%">Pattern Operation</th>
+            <th class="text-center">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><strong>Begin</strong></td>
+            <td>
+            <p>Defines a starting pattern state:</p>
+{% highlight java %}
+Pattern<Event, ?> start = Pattern.<Event>begin("start");
+{% endhighlight %}
+            </td>
+        </tr>
+        <tr>
+            <td><strong>Next</strong></td>
+            <td>
+                <p>Appends a new pattern state. A matching event has to directly succeed the previous matching event:</p>
+{% highlight java %}
+Pattern<Event, ?> next = start.next("next");
+{% endhighlight %}
+            </td>
+        </tr>
+        <tr>
+            <td><strong>FollowedBy</strong></td>
+            <td>
+                <p>Appends a new pattern state. Other events can occur between a matching event and the previous matching event:</p>
+{% highlight java %}
+Pattern<Event, ?> followedBy = start.followedBy("next");
+{% endhighlight %}
+            </td>
+        </tr>
+        <tr>
+            <td><strong>Where</strong></td>
+            <td>
+                <p>Defines a filter condition for the current pattern state. Only if an event passes the filter, it can match the state:</p>
+{% highlight java %}
+patternState.where(new FilterFunction<Event>() {
+    @Override
+    public boolean filter(Event value) throws Exception {
+        return ... // some condition
+    }
+});
+{% endhighlight %}
+            </td>
+        </tr>
+        <tr>
+            <td><strong>Or</strong></td>
+            <td>
+                <p>Adds a new filter condition which is ORed with an existing filter condition. Only if an event passes the filter condition, it can match the state:</p>
+{% highlight java %}
+patternState.where(new FilterFunction<Event>() {
+    @Override
+    public boolean filter(Event value) throws Exception {
+        return ... // some condition
+    }
+}).or(new FilterFunction<Event>() {
+    @Override
+    public boolean filter(Event value) throws Exception {
+        return ... // alternative condition
+    }
+});
+{% endhighlight %}
+                    </td>
+                </tr>
+       <tr>
+           <td><strong>Subtype</strong></td>
+           <td>
+               <p>Defines a subtype condition for the current pattern state. Only if an event is of this subtype, it can match the state:</p>
+{% highlight java %}
+patternState.subtype(SubEvent.class);
+{% endhighlight %}
+           </td>
+       </tr>
+       <tr>
+          <td><strong>Within</strong></td>
+          <td>
+              <p>Defines the maximum time interval for an event sequence to match the pattern. If a non-completed event sequence exceeds this time, it is discarded:</p>
+{% highlight java %}
+patternState.within(Time.seconds(10));
+{% endhighlight %}
+          </td>
+      </tr>
+  </tbody>
+</table>
+</div>
+
+<div data-lang="scala" markdown="1">
+<table class="table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 25%">Pattern Operation</th>
+            <th class="text-center">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><strong>Begin</strong></td>
+            <td>
+            <p>Defines a starting pattern state:</p>
+{% highlight scala %}
+val start = Pattern.begin[Event]("start")
+{% endhighlight %}
+            </td>
+        </tr>
+        <tr>
+            <td><strong>Next</strong></td>
+            <td>
+                <p>Appends a new pattern state. A matching event has to directly succeed the previous matching event:</p>
+{% highlight scala %}
+val next = start.next("middle")
+{% endhighlight %}
+            </td>
+        </tr>
+        <tr>
+            <td><strong>FollowedBy</strong></td>
+            <td>
+                <p>Appends a new pattern state. Other events can occur between a matching event and the previous matching event:</p>
+{% highlight scala %}
+val followedBy = start.followedBy("middle")
+{% endhighlight %}
+            </td>
+        </tr>
+        <tr>
+            <td><strong>Where</strong></td>
+            <td>
+                <p>Defines a filter condition for the current pattern state. Only if an event passes the filter, it can match the state:</p>
+{% highlight scala %}
+patternState.where(event => ... /* some condition */)
+{% endhighlight %}
+            </td>
+        </tr>
+        <tr>
+            <td><strong>Or</strong></td>
+            <td>
+                <p>Adds a new filter condition which is ORed with an existing filter condition. Only if an event passes the filter condition, it can match the state:</p>
+{% highlight scala %}
+patternState.where(event => ... /* some condition */)
+    .or(event => ... /* alternative condition */)
+{% endhighlight %}
+                    </td>
+                </tr>
+       <tr>
+           <td><strong>Subtype</strong></td>
+           <td>
+               <p>Defines a subtype condition for the current pattern state. Only if an event is of this subtype, it can match the state:</p>
+{% highlight scala %}
+patternState.subtype(classOf[SubEvent])
+{% endhighlight %}
+           </td>
+       </tr>
+       <tr>
+          <td><strong>Within</strong></td>
+          <td>
+              <p>Defines the maximum time interval for an event sequence to match the pattern. If a non-completed event sequence exceeds this time, it is discarded:</p>
+{% highlight scala %}
+patternState.within(Time.seconds(10))
+{% endhighlight %}
+          </td>
+      </tr>
+  </tbody>
+</table>
+</div>
+
+</div>
+
+### Detecting Patterns
+
+In order to run a stream of events against your pattern, you have to create a `PatternStream`.
+Given an input stream `input` and a pattern `pattern`, you create the `PatternStream` by calling:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Event> input = ...
+Pattern<Event, ?> pattern = ...
+
+PatternStream<Event> patternStream = CEP.pattern(input, pattern);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input : DataStream[Event] = ...
+val pattern : Pattern[Event, _] = ...
+
+val patternStream: PatternStream[Event] = CEP.pattern(input, pattern)
+{% endhighlight %}
+</div>
+</div>
+
+### Selecting from Patterns
+Once you have obtained a `PatternStream` you can select from detected event sequences via the `select` or `flatSelect` methods.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+The `select` method requires a `PatternSelectFunction` implementation.
+A `PatternSelectFunction` has a `select` method which is called for each matching event sequence.
+It receives a map of string/event pairs of the matched events.
+The string is defined by the name of the state to which the event has been matched.
+The `select` method must return exactly one result.
+
+{% highlight java %}
+class MyPatternSelectFunction<IN, OUT> implements PatternSelectFunction<IN, OUT> {
+    @Override
+    public OUT select(Map<String, IN> pattern) {
+        IN startEvent = pattern.get("start");
+        IN endEvent = pattern.get("end");
+        return new OUT(startEvent, endEvent);
+    }
+}
+{% endhighlight %}
+
+A `PatternFlatSelectFunction` is similar to the `PatternSelectFunction`, with the only distinction that it can return an arbitrary number of results.
+In order to do this, the `select` method has an additional `Collector` parameter which is used for the element output.
+
+{% highlight java %}
+class MyPatternFlatSelectFunction<IN, OUT> implements PatternFlatSelectFunction<IN, OUT> {
+    @Override
+    public void select(Map<String, IN> pattern, Collector<OUT> collector) {
+        IN startEvent = pattern.get("start");
+        IN endEvent = pattern.get("end");
+
+        for (int i = 0; i < startEvent.getValue(); i++ ) {
+            collector.collect(new OUT(startEvent, endEvent));
+        }
+    }
+}
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+The `select` method takes a selection function as argument, which is called for each matching event sequence.
+It receives a map of string/event pairs of the matched events.
+The string is defined by the name of the state to which the event has been matched.
+The selection function returns exactly one result per call.
+
+{% highlight scala %}
+def selectFn(pattern : mutable.Map[String, IN]): OUT = {
+    val startEvent = pattern.get("start").get
+    val endEvent = pattern.get("end").get
+    OUT(startEvent, endEvent)
+}
+{% endhighlight %}
+
+The `flatSelect` method is similar to the `select` method. Their only difference is that the function passed to the `flatSelect` method can return an arbitrary number of results per call.
+In order to do this, the function for `flatSelect` has an additional `Collector` parameter which is used for the element output.
+
+{% highlight scala %}
+def flatSelectFn(pattern : mutable.Map[String, IN], collector : Collector[OUT]) = {
+    val startEvent = pattern.get("start").get
+    val endEvent = pattern.get("end").get
+    for (i <- 0 until startEvent.getValue) {
+        collector.collect(OUT(startEvent, endEvent))
+    }
+}
+{% endhighlight %}
+</div>
+</div>
+
+### Handling Timed Out Partial Patterns
+
+Whenever a pattern has a window length attached via the `within` keyword, it is possible that partial event patterns will be discarded because they exceed the window length.
+In order to react to these timeout events, the `select` and `flatSelect` API calls allow you to specify a timeout handler.
+This timeout handler is called for each partial event pattern which has timed out.
+The timeout handler receives all events of the partial pattern that have been matched so far, as well as the timestamp at which the timeout was detected.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+In order to treat partial patterns, the `select` and `flatSelect` API calls offer an overloaded version which takes as the first parameter a `PatternTimeoutFunction`/`PatternFlatTimeoutFunction` and as the second parameter the familiar `PatternSelectFunction`/`PatternFlatSelectFunction`.
+The return type of the timeout function can be different from the select function.
+The timeout event and the select event are wrapped in `Either.Left` and `Either.Right` respectively so that the resulting data stream is of type `org.apache.flink.types.Either`.
+
+{% highlight java %}
+PatternStream<Event> patternStream = CEP.pattern(input, pattern);
+
+DataStream<Either<TimeoutEvent, ComplexEvent>> result = patternStream.select(
+    new PatternTimeoutFunction<Event, TimeoutEvent>() {...},
+    new PatternSelectFunction<Event, ComplexEvent>() {...}
+);
+
+DataStream<Either<TimeoutEvent, ComplexEvent>> flatResult = patternStream.flatSelect(
+    new PatternFlatTimeoutFunction<Event, TimeoutEvent>() {...},
+    new PatternFlatSelectFunction<Event, ComplexEvent>() {...}
+);
+{% endhighlight %}
+
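+A minimal sketch of what such a timeout function might look like (assuming a `TimeoutEvent` type with the constructor used here):
+
+{% highlight java %}
+class MyPatternTimeoutFunction implements PatternTimeoutFunction<Event, TimeoutEvent> {
+    @Override
+    public TimeoutEvent timeout(Map<String, Event> pattern, long timeoutTimestamp) {
+        // the map contains the partially matched events, keyed by state name
+        return new TimeoutEvent(pattern.get("start"), timeoutTimestamp);
+    }
+}
+{% endhighlight %}
+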
+</div>
+
+<div data-lang="scala" markdown="1">
+In order to treat partial patterns, the `select` API call offers an overloaded version which takes as the first parameter a timeout function and as the second parameter a selection function.
+The timeout function is called with a map of string-event pairs of the partial match which has timed out and a long indicating when the timeout occurred.
+The string is defined by the name of the state to which the event has been matched.
+The timeout function returns exactly one result per call.
+The return type of the timeout function can be different from the select function.
+The timeout event and the select event are wrapped in `Left` and `Right` respectively so that the resulting data stream is of type `Either`.
+
+{% highlight scala %}
+val patternStream: PatternStream[Event] = CEP.pattern(input, pattern)
+
+val result: DataStream[Either[TimeoutEvent, ComplexEvent]] = patternStream.select{
+    (pattern: mutable.Map[String, Event], timestamp: Long) => TimeoutEvent()
+} {
+    pattern: mutable.Map[String, Event] => ComplexEvent()
+}
+{% endhighlight %}
+
+The `flatSelect` API call offers the same overloaded version, which takes as the first parameter a timeout function and as the second parameter a selection function.
+In contrast to the `select` functions, the `flatSelect` functions are called with a `Collector`.
+The collector can be used to emit an arbitrary number of events.
+
+{% highlight scala %}
+val patternStream: PatternStream[Event] = CEP.pattern(input, pattern)
+
+val result: DataStream[Either[TimeoutEvent, ComplexEvent]] = patternStream.flatSelect{
+    (pattern: mutable.Map[String, Event], timestamp: Long, out: Collector[TimeoutEvent]) =>
+        out.collect(TimeoutEvent())
+} {
+    (pattern: mutable.Map[String, Event], out: Collector[ComplexEvent]) =>
+        out.collect(ComplexEvent())
+}
+{% endhighlight %}
+
+</div>
+</div>
+
+## Examples
+
+The following example detects the pattern `start, middle(name = "error") -> end(name = "critical")` on a keyed data stream of `Events`.
+The events are keyed by their IDs, and a valid pattern has to occur within 10 seconds.
+The whole processing is done in event time.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = ...
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
+
+DataStream<Event> input = ...
+
+DataStream<Event> partitionedInput = input.keyBy(new KeySelector<Event, Integer>() {
+	@Override
+	public Integer getKey(Event value) throws Exception {
+		return value.getId();
+	}
+});
+
+Pattern<Event, ?> pattern = Pattern.<Event>begin("start")
+	.next("middle").where(new FilterFunction<Event>() {
+		@Override
+		public boolean filter(Event value) throws Exception {
+			return value.getName().equals("error");
+		}
+	}).followedBy("end").where(new FilterFunction<Event>() {
+		@Override
+		public boolean filter(Event value) throws Exception {
+			return value.getName().equals("critical");
+		}
+	}).within(Time.seconds(10));
+
+PatternStream<Event> patternStream = CEP.pattern(partitionedInput, pattern);
+
+DataStream<Alert> alerts = patternStream.select(new PatternSelectFunction<Event, Alert>() {
+	@Override
+	public Alert select(Map<String, Event> pattern) throws Exception {
+		return createAlert(pattern);
+	}
+});
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env : StreamExecutionEnvironment = ...
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
+
+val input : DataStream[Event] = ...
+
+val partitionedInput = input.keyBy(event => event.getId)
+
+val pattern = Pattern.begin[Event]("start")
+  .next("middle").where(_.getName == "error")
+  .followedBy("end").where(_.getName == "critical")
+  .within(Time.seconds(10))
+
+val patternStream = CEP.pattern(partitionedInput, pattern)
+
+val alerts = patternStream.select(createAlert(_))
+{% endhighlight %}
+</div>
+</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/gelly/graph_algorithms.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/gelly/graph_algorithms.md b/docs/dev/libs/gelly/graph_algorithms.md
new file mode 100644
index 0000000..09f6abc
--- /dev/null
+++ b/docs/dev/libs/gelly/graph_algorithms.md
@@ -0,0 +1,308 @@
+---
+title: Graph Algorithms
+nav-parent_id: graphs
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The building blocks with which the `Graph` API and top-level algorithms are assembled are accessible in Gelly as graph
+algorithms in the `org.apache.flink.graph.asm` package. These algorithms provide optimization and tuning through
+configuration parameters and may provide implicit runtime reuse when processing the same input with a similar
+configuration.
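+
+For example, annotating the vertices of a graph with their in-degree might look like the following sketch (the edge input is elided, as in the snippets below):
+
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+DataSet<Edge<Long, NullValue>> edges = ...
+
+// create a graph with no vertex or edge values
+Graph<Long, NullValue, NullValue> graph = Graph.fromDataSet(edges, env);
+
+DataSet<Vertex<Long, LongValue>> inDegree = graph
+  .run(new VertexInDegree<Long, NullValue, NullValue>()
+    .setIncludeZeroDegreeVertices(true));
+{% endhighlight %}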
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Algorithm</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td>degree.annotate.directed.<br/><strong>VertexInDegree</strong></td>
+      <td>
+        <p>Annotate vertices of a <a href="#graph-representation">directed graph</a> with the in-degree.</p>
+{% highlight java %}
+DataSet<Vertex<K, LongValue>> inDegree = graph
+  .run(new VertexInDegree()
+    .setIncludeZeroDegreeVertices(true));
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setIncludeZeroDegreeVertices</strong>: by default only the edge set is processed for the computation of degree; when this flag is set an additional join is performed against the vertex set in order to output vertices with an in-degree of zero</p></li>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.directed.<br/><strong>VertexOutDegree</strong></td>
+      <td>
+        <p>Annotate vertices of a <a href="#graph-representation">directed graph</a> with the out-degree.</p>
+{% highlight java %}
+DataSet<Vertex<K, LongValue>> outDegree = graph
+  .run(new VertexOutDegree()
+    .setIncludeZeroDegreeVertices(true));
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setIncludeZeroDegreeVertices</strong>: by default only the edge set is processed for the computation of degree; when this flag is set an additional join is performed against the vertex set in order to output vertices with an out-degree of zero</p></li>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.directed.<br/><strong>VertexDegrees</strong></td>
+      <td>
+        <p>Annotate vertices of a <a href="#graph-representation">directed graph</a> with the degree, out-degree, and in-degree.</p>
+{% highlight java %}
+DataSet<Vertex<K, Tuple2<LongValue, LongValue>>> degrees = graph
+  .run(new VertexDegrees()
+    .setIncludeZeroDegreeVertices(true));
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setIncludeZeroDegreeVertices</strong>: by default only the edge set is processed for the computation of degree; when this flag is set an additional join is performed against the vertex set in order to output vertices with out- and in-degree of zero</p></li>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.directed.<br/><strong>EdgeSourceDegrees</strong></td>
+      <td>
+        <p>Annotate edges of a <a href="#graph-representation">directed graph</a> with the degree, out-degree, and in-degree of the source ID.</p>
+{% highlight java %}
+DataSet<Edge<K, Tuple2<EV, Degrees>>> sourceDegrees = graph
+  .run(new EdgeSourceDegrees());
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.directed.<br/><strong>EdgeTargetDegrees</strong></td>
+      <td>
+        <p>Annotate edges of a <a href="#graph-representation">directed graph</a> with the degree, out-degree, and in-degree of the target ID.</p>
+{% highlight java %}
+DataSet<Edge<K, Tuple2<EV, Degrees>>> targetDegrees = graph
+  .run(new EdgeTargetDegrees());
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.directed.<br/><strong>EdgeDegreesPair</strong></td>
+      <td>
+        <p>Annotate edges of a <a href="#graph-representation">directed graph</a> with the degree, out-degree, and in-degree of both the source and target vertices.</p>
+{% highlight java %}
+DataSet<Edge<K, Tuple2<EV, Degrees>>> degrees = graph
+  .run(new EdgeDegreesPair());
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.undirected.<br/><strong>VertexDegree</strong></td>
+      <td>
+        <p>Annotate vertices of an <a href="#graph-representation">undirected graph</a> with the degree.</p>
+{% highlight java %}
+DataSet<Vertex<K, LongValue>> degree = graph
+  .run(new VertexDegree()
+    .setIncludeZeroDegreeVertices(true)
+    .setReduceOnTargetId(true));
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setIncludeZeroDegreeVertices</strong>: by default only the edge set is processed for the computation of degree; when this flag is set an additional join is performed against the vertex set in order to output vertices with a degree of zero</p></li>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+          <li><p><strong>setReduceOnTargetId</strong>: the degree can be counted from either the edge source or target IDs. By default the source IDs are counted. Reducing on target IDs may optimize the algorithm if the input edge list is sorted by target ID.</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.undirected.<br/><strong>EdgeSourceDegree</strong></td>
+      <td>
+        <p>Annotate edges of an <a href="#graph-representation">undirected graph</a> with degree of the source ID.</p>
+{% highlight java %}
+DataSet<Edge<K, Tuple2<EV, LongValue>>> sourceDegree = graph
+  .run(new EdgeSourceDegree()
+    .setReduceOnTargetId(true));
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+          <li><p><strong>setReduceOnTargetId</strong>: the degree can be counted from either the edge source or target IDs. By default the source IDs are counted. Reducing on target IDs may optimize the algorithm if the input edge list is sorted by target ID.</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.undirected.<br/><strong>EdgeTargetDegree</strong></td>
+      <td>
+        <p>Annotate edges of an <a href="#graph-representation">undirected graph</a> with degree of the target ID.</p>
+{% highlight java %}
+DataSet<Edge<K, Tuple2<EV, LongValue>>> targetDegree = graph
+  .run(new EdgeTargetDegree()
+    .setReduceOnSourceId(true));
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+          <li><p><strong>setReduceOnSourceId</strong>: the degree can be counted from either the edge source or target IDs. By default the target IDs are counted. Reducing on source IDs may optimize the algorithm if the input edge list is sorted by source ID.</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.annotate.undirected.<br/><strong>EdgeDegreePair</strong></td>
+      <td>
+        <p>Annotate edges of an <a href="#graph-representation">undirected graph</a> with the degree of both the source and target vertices.</p>
+{% highlight java %}
+DataSet<Edge<K, Tuple3<EV, LongValue, LongValue>>> pairDegree = graph
+  .run(new EdgeDegreePair()
+    .setReduceOnTargetId(true));
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+          <li><p><strong>setReduceOnTargetId</strong>: the degree can be counted from either the edge source or target IDs. By default the source IDs are counted. Reducing on target IDs may optimize the algorithm if the input edge list is sorted by target ID.</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>degree.filter.undirected.<br/><strong>MaximumDegree</strong></td>
+      <td>
+        <p>Filter an <a href="#graph-representation">undirected graph</a> by maximum degree.</p>
+{% highlight java %}
+Graph<K, VV, EV> filteredGraph = graph
+  .run(new MaximumDegree(5000)
+    .setBroadcastHighDegreeVertices(true)
+    .setReduceOnTargetId(true));
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setBroadcastHighDegreeVertices</strong>: join high-degree vertices using a broadcast-hash to reduce data shuffling when removing a relatively small number of high-degree vertices.</p></li>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+          <li><p><strong>setReduceOnTargetId</strong>: the degree can be counted from either the edge source or target IDs. By default the source IDs are counted. Reducing on target IDs may optimize the algorithm if the input edge list is sorted by target ID.</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>simple.directed.<br/><strong>Simplify</strong></td>
+      <td>
+        <p>Remove self-loops and duplicate edges from a <a href="#graph-representation">directed graph</a>.</p>
+{% highlight java %}
+graph.run(new Simplify());
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>simple.undirected.<br/><strong>Simplify</strong></td>
+      <td>
+        <p>Add symmetric edges and remove self-loops and duplicate edges from an <a href="#graph-representation">undirected graph</a>.</p>
+{% highlight java %}
+graph.run(new Simplify());
+{% endhighlight %}
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>translate.<br/><strong>TranslateGraphIds</strong></td>
+      <td>
+        <p>Translate vertex and edge IDs using the given <code>TranslateFunction</code>.</p>
+{% highlight java %}
+graph.run(new TranslateGraphIds(new LongValueToStringValue()));
+{% endhighlight %}
+        <p>Required configuration:</p>
+        <ul>
+          <li><p><strong>translator</strong>: implements type or value conversion</p></li>
+        </ul>
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>translate.<br/><strong>TranslateVertexValues</strong></td>
+      <td>
+        <p>Translate vertex values using the given <code>TranslateFunction</code>.</p>
+{% highlight java %}
+graph.run(new TranslateVertexValues(new LongValueAddOffset(vertexCount)));
+{% endhighlight %}
+        <p>Required configuration:</p>
+        <ul>
+          <li><p><strong>translator</strong>: implements type or value conversion</p></li>
+        </ul>
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+
+    <tr>
+      <td>translate.<br/><strong>TranslateEdgeValues</strong></td>
+      <td>
+        <p>Translate edge values using the given <code>TranslateFunction</code>.</p>
+{% highlight java %}
+graph.run(new TranslateEdgeValues(new Nullify()));
+{% endhighlight %}
+        <p>Required configuration:</p>
+        <ul>
+          <li><p><strong>translator</strong>: implements type or value conversion</p></li>
+        </ul>
+        <p>Optional configuration:</p>
+        <ul>
+          <li><p><strong>setParallelism</strong>: override the operator parallelism</p></li>
+        </ul>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
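+A simplified sketch of what a translator such as `LongValueToStringValue` might look like, assuming a `TranslateFunction` interface with a `translate(value, reuse)` method that supports object reuse:
+
+{% highlight java %}
+public class LongValueToStringValue implements TranslateFunction<LongValue, StringValue> {
+    @Override
+    public StringValue translate(LongValue value, StringValue reuse) throws Exception {
+        if (reuse == null) {
+            reuse = new StringValue();
+        }
+        // convert the numeric ID to its string representation
+        reuse.setValue(Long.toString(value.getValue()));
+        return reuse;
+    }
+}
+{% endhighlight %}
+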
+{% top %}


[26/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/state_partitioning.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/state_partitioning.svg b/docs/concepts/fig/state_partitioning.svg
deleted file mode 100644
index 4d75ca3..0000000
--- a/docs/concepts/fig/state_partitioning.svg
+++ /dev/null
@@ -1,291 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="382.12958"
-   height="347.69376"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(-274.89951,-401.36804)"
-     id="layer1">
-    <g
-       transform="translate(247.46142,381.88679)"
-       id="g2989">
-      <text
-         x="353.97946"
-         y="196.46558"
-         id="text2991"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">key/value</text>
-      <text
-         x="367.48282"
-         y="209.96893"
-         id="text2993"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">state</text>
-      <path
-         d="m 27.438086,116.36336 c 0,-21.577246 17.526241,-39.075356 39.150378,-39.075356 21.624137,0 39.150376,17.49811 39.150376,39.075356 0,21.57725 -17.526239,39.07536 -39.150376,39.07536 -21.624137,0 -39.150378,-17.49811 -39.150378,-39.07536"
-         id="path2995"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="45.106564"
-         y="113.87054"
-         id="text2997"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="56.959515"
-         y="128.87428"
-         id="text2999"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 114.95676,111.20583 107.99878,0 0,-5.15754 10.31507,10.31507 -10.31507,10.31507 0,-5.15753 -107.99878,0 z"
-         id="path3001"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 240.45365,116.37274 c 0,-21.586626 17.49811,-39.084736 39.08474,-39.084736 21.56787,0 39.06598,17.49811 39.06598,39.084736 0,21.56787 -17.49811,39.06598 -39.06598,39.06598 -21.58663,0 -39.08474,-17.49811 -39.08474,-39.06598"
-         id="path3003"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="255.03751"
-         y="113.87054"
-         id="text3005"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">S</text>
-      <text
-         x="263.58963"
-         y="113.87054"
-         id="text3007"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">tateful</text>
-      <text
-         x="269.8912"
-         y="128.87428"
-         id="text3009"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 27.438086,273.7432 c 0,-21.56788 17.535618,-39.06599 39.159755,-39.06599 21.624137,0 39.140999,17.49811 39.140999,39.06599 0,21.58662 -17.516862,39.08473 -39.140999,39.08473 -21.624137,0 -39.159755,-17.49811 -39.159755,-39.08473"
-         id="path3011"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="47.807236"
-         y="271.23022"
-         id="text3013"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Souce</text>
-      <text
-         x="56.959515"
-         y="286.23395"
-         id="text3015"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-      <path
-         d="m 114.95676,268.58566 107.99878,0 0,-5.15753 10.31507,10.31507 -10.31507,10.31507 0,-5.15754 -107.99878,0 z"
-         id="path3017"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 240.45365,273.7432 c 0,-21.56788 17.49811,-39.06599 39.08474,-39.06599 21.56787,0 39.06598,17.49811 39.06598,39.06599 0,21.58662 -17.49811,39.08473 -39.06598,39.08473 -21.58663,0 -39.08474,-17.49811 -39.08474,-39.08473"
-         id="path3019"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="255.03751"
-         y="271.23022"
-         id="text3021"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Stateful</text>
-      <text
-         x="269.8912"
-         y="286.23395"
-         id="text3023"
-         xml:space="preserve"
-         style="font-size:12.45310211px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-      <path
-         d="m 366.01618,180.92632 -34.93995,-104.407257 1.2003,-0.393848 34.93995,104.407265 -1.2003,0.39384 z m -36.30904,-102.625563 0.76894,-5.532629 3.97599,3.938482 -4.74493,1.594147 z"
-         id="path3025"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 107.19233,139.07527 130.82322,94.5048 3.0195,-4.17292 2.3162,14.39421 -14.39421,2.3162 3.0195,-4.17291 -130.82322,-94.51417 z"
-         id="path3027"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 107.19233,247.98365 130.82322,-94.51417 3.0195,4.17292 2.3162,-14.39421 -14.39421,-2.31621 3.0195,4.17292 -130.82322,94.51417 z"
-         id="path3029"
-         style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="290.62192"
-         y="329.87106"
-         id="text3031"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">D \u2026</text>
-      <text
-         x="290.62192"
-         y="345.62497"
-         id="text3033"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">E </text>
-      <text
-         x="306.2258"
-         y="345.62497"
-         id="text3035"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">\u2026</text>
-      <text
-         x="290.62192"
-         y="361.37891"
-         id="text3037"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">Z </text>
-      <text
-         x="306.2258"
-         y="361.37891"
-         id="text3039"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">\u2026</text>
-      <path
-         d="m 284.69592,317.81668 36.87169,0 0,48.76214 -36.87169,0 z"
-         id="path3041"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 284.996,334.39581 36.57161,0"
-         id="path3043"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 284.996,349.86841 36.57161,0"
-         id="path3045"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="290.62192"
-         y="32.140686"
-         id="text3047"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">A \u2026</text>
-      <text
-         x="290.62192"
-         y="47.894608"
-         id="text3049"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">B </text>
-      <text
-         x="306.2258"
-         y="47.894608"
-         id="text3051"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">\u2026</text>
-      <text
-         x="290.62192"
-         y="63.648533"
-         id="text3053"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">Y </text>
-      <text
-         x="306.2258"
-         y="63.648533"
-         id="text3055"
-         xml:space="preserve"
-         style="font-size:13.20328903px;font-style:normal;font-weight:bold;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">\u2026</text>
-      <path
-         d="m 284.69592,20.086254 36.87169,0 0,48.762148 -36.87169,0 z"
-         id="path3057"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 284.996,36.665384 36.57161,0"
-         id="path3059"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 284.996,52.137989 36.57161,0"
-         id="path3061"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 366.01618,216.93529 -37.45308,88.76587 1.14404,0.48762 37.45308,-88.76586 -1.14404,-0.48763 z m -38.70964,86.8904 0.35634,5.57014 4.25731,-3.61965 -4.61365,-1.95049 z"
-         id="path3063"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="114.804"
-         y="104.28868"
-         id="text3065"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">Keys(A,B,Y)</text>
-      <path
-         d="m 94.429775,235.02418 1.247186,1.64103 0.675168,-0.51575 c 0.09377,-0.0657 0.168792,-0.10315 0.234433,-0.0938 0.06564,0 0.112528,0.0281 0.159415,0.0844 0.03751,0.0563 0.05626,0.11253 0.03751,0.17817 -0.0094,0.0656 -0.05626,0.13128 -0.150037,0.19692 l -1.537883,1.17217 c -0.0844,0.0656 -0.168792,0.0938 -0.225056,0.0938 -0.06564,-0.009 -0.121906,-0.0375 -0.159415,-0.0938 -0.04689,-0.0563 -0.05626,-0.11253 -0.04689,-0.17817 0.0094,-0.0563 0.06564,-0.12191 0.150038,-0.18755 l 0.496998,-0.37509 -3.328954,-4.38859 -0.496999,0.37509 c -0.0844,0.0656 -0.159414,0.10315 -0.225056,0.0938 -0.06564,0 -0.121905,-0.0375 -0.159415,-0.0938 -0.04689,-0.0469 -0.05626,-0.11253 -0.04689,-0.16879 0.01875,-0.0656 0.06564,-0.13129 0.150037,-0.19693 l 1.537883,-1.17216 c 0.09377,-0.0657 0.168792,-0.0938 0.234434,-0.0938 0.06564,0.009 0.112528,0.0375 0.159414,0.0938 0.03751,0.0563 0.05626,0.11252 0.04689,0.17817 -0.01875,0.0563 -0.06564,0.1219 -0.159415,0.18754 l -0.675168,0.51576 1.716053,2.2693
 1 0.825205,-4.19167 -0.403225,0.30008 c -0.09377,0.075 -0.168792,0.10315 -0.234434,0.10315 -0.06564,-0.009 -0.112528,-0.0375 -0.159414,-0.0938 -0.03751,-0.0563 -0.05626,-0.11252 -0.04689,-0.17817 0.01875,-0.0563 0.06564,-0.1219 0.159415,-0.19692 l 1.078393,-0.81583 c 0.0844,-0.0656 0.159415,-0.10315 0.225056,-0.0938 0.06564,0 0.121906,0.0375 0.159415,0.0844 0.04689,0.0563 0.05626,0.1219 0.04689,0.17816 -0.01875,0.0657 -0.06564,0.13129 -0.150038,0.19693 l -0.150037,0.11253 -0.647036,3.33833 c 0.393848,-0.10315 0.759564,-0.14066 1.097148,-0.10315 0.337584,0.0281 0.722055,0.15003 1.16279,0.34696 0.253188,0.11253 0.778319,0.40322 1.594147,0.89085 l 0.45949,-0.34697 c 0.09377,-0.0656 0.168792,-0.10315 0.234433,-0.0938 0.05626,0 0.112528,0.0375 0.159415,0.0938 0.03751,0.0469 0.04689,0.10316 0.03751,0.1688 -0.0094,0.0656 -0.06564,0.13128 -0.150037,0.19692 l -0.712678,0.54389 c -1.181544,-0.75957 -2.025505,-1.19093 -2.513126,-1.29408 -0.487621,-0.10315 -0.98462,-0.0656 -1.472242,0.11253 z m
  8.711545,-6.30157 -3.741557,2.85071 c 0.42198,0.42198 0.909597,0.65641 1.462867,0.69392 0.55326,0.0375 1.07839,-0.13128 1.57539,-0.50638 0.27194,-0.21568 0.52513,-0.47824 0.75956,-0.79707 0.22506,-0.31883 0.38447,-0.61891 0.45949,-0.9096 0.0188,-0.0938 0.0563,-0.15004 0.0938,-0.17817 0.0469,-0.0375 0.10315,-0.0469 0.16879,-0.0375 0.0563,0.009 0.11253,0.0375 0.15004,0.0844 0.0375,0.0563 0.0469,0.12191 0.0375,0.19693 -0.0281,0.25319 -0.17817,0.58139 -0.43136,0.97524 -0.26257,0.40323 -0.5814,0.74081 -0.95649,1.03151 -0.63766,0.47824 -1.31283,0.66579 -2.05364,0.58139 -0.731429,-0.0938 -1.322201,-0.43135 -1.772313,-1.02213 -0.403226,-0.5345 -0.553263,-1.14403 -0.450112,-1.82858 0.112528,-0.67516 0.450112,-1.23781 1.031507,-1.68792 0.609528,-0.45011 1.247188,-0.62828 1.931728,-0.53451 0.68455,0.0938 1.26594,0.45949 1.73481,1.08778 z m -0.65641,-0.0844 c -0.3751,-0.35634 -0.82521,-0.53451 -1.32221,-0.55326 -0.50637,-0.0188 -0.97524,0.13128 -1.397221,0.45949 -0.431357,0.3282 -0.7033,0.7314
 3 -0.825205,1.21905 -0.121906,0.48762 -0.05626,0.96587 0.178169,1.43473 z m 5.43885,-0.7033 -4.3042,-2.11928 -0.10315,0.0844 c -0.0938,0.0656 -0.16879,0.0938 -0.23443,0.0938 -0.0563,-0.009 -0.11253,-0.0375 -0.15004,-0.0938 -0.0281,-0.0375 -0.0469,-0.075 -0.0563,-0.12191 0,-0.0469 0.009,-0.0844 0.0281,-0.1219 0.0188,-0.0375 0.0656,-0.075 0.13128,-0.12191 l 1.00338,-0.76894 c 0.0844,-0.0656 0.15941,-0.0938 0.22505,-0.0938 0.0656,0.009 0.12191,0.0375 0.15942,0.0938 0.0469,0.0563 0.0563,0.11253 0.0469,0.17817 -0.009,0.0656 -0.0657,0.13128 -0.15004,0.19692 l -0.497,0.3751 3.79782,1.87546 -0.81583,-4.14478 -0.497,0.37509 c -0.0844,0.0657 -0.15941,0.10316 -0.22505,0.0938 -0.0656,-0.009 -0.12191,-0.0375 -0.15942,-0.0938 -0.0375,-0.0469 -0.0563,-0.11252 -0.0469,-0.16879 0.0187,-0.0656 0.0656,-0.13128 0.15003,-0.19692 l 1.00338,-0.75957 c 0.0938,-0.075 0.16879,-0.10315 0.23443,-0.0938 0.0656,0 0.11253,0.0281 0.15942,0.0844 0.0281,0.0375 0.0469,0.0844 0.0469,0.13128 0,0.0563 -0.009,0.0938 -0.0
 375,0.13128 -0.0187,0.0281 -0.0938,0.0938 -0.22505,0.19693 l 1.26594,6.50787 0.57202,-0.44074 c 0.0938,-0.0656 0.16879,-0.0938 0.22505,-0.0844 0.0656,0 0.12191,0.0281 0.15942,0.0844 0.0469,0.0563 0.0563,0.11253 0.0469,0.17817 -0.009,0.0656 -0.0656,0.13129 -0.15003,0.19693 l -2.11928,1.60352 c -0.0844,0.0656 -0.15942,0.10315 -0.22506,0.0938 -0.0656,0 -0.11253,-0.0375 -0.15941,-0.0938 -0.0375,-0.0469 -0.0563,-0.11253 -0.0469,-0.16879 0.0188,-0.0656 0.0656,-0.13128 0.15004,-0.19693 l 1.15341,-0.87209 z m 3.87284,-8.53338 c -0.0656,-0.0844 -0.0938,-0.15941 -0.0844,-0.22505 0,-0.0656 0.0281,-0.11253 0.0844,-0.15004 0.0563,-0.0469 0.11253,-0.0656 0.17817,-0.0469 0.0656,0.009 0.13128,0.0563 0.19692,0.15004 l 0.47825,0.62828 c 0.0656,0.0844 0.10315,0.16879 0.0938,0.22506 -0.009,0.0656 -0.0375,0.1219 -0.0938,0.15941 -0.0469,0.0375 -0.10315,0.0563 -0.15942,0.0469 -0.0563,0 -0.1219,-0.0375 -0.17817,-0.11253 -0.15003,-0.15941 -0.35633,-0.23443 -0.60014,-0.22505 -0.36572,0.0187 -0.74081,0.17816 
 -1.13466,0.47824 -0.41261,0.30945 -0.66579,0.63766 -0.76894,0.97524 -0.075,0.26257 -0.0563,0.45949 0.0563,0.61891 0.13128,0.17817 0.34696,0.24381 0.63766,0.2063 0.2063,-0.0281 0.51575,-0.16879 0.94711,-0.42198 0.54389,-0.32821 0.94711,-0.53451 1.2003,-0.61891 0.36572,-0.11252 0.68454,-0.13128 0.95649,-0.0563 0.27194,0.075 0.497,0.22506 0.65641,0.43136 0.23443,0.30945 0.2907,0.7033 0.17817,1.17216 -0.11253,0.47825 -0.45949,0.92836 -1.04088,1.3691 -0.5814,0.44073 -1.17217,0.65641 -1.76294,0.64703 0.075,0.0938 0.1219,0.16879 0.13128,0.2063 0.009,0.0375 0,0.075 -0.009,0.12191 -0.0187,0.0375 -0.0469,0.075 -0.0844,0.10315 -0.0469,0.0469 -0.10315,0.0563 -0.1688,0.0469 -0.0656,-0.009 -0.13128,-0.0563 -0.19692,-0.15004 l -0.57202,-0.75957 c -0.0656,-0.0844 -0.10315,-0.15941 -0.0938,-0.22505 0,-0.0563 0.0281,-0.11253 0.0844,-0.15942 0.0563,-0.0375 0.11252,-0.0469 0.17816,-0.0469 0.0657,0.009 0.12191,0.0469 0.1688,0.11252 0.10315,0.13129 0.21568,0.21568 0.35634,0.25319 0.2063,0.0656 0.45948,0.
 0563 0.75018,-0.0188 0.2907,-0.075 0.60015,-0.24381 0.92836,-0.48762 0.47824,-0.36572 0.75956,-0.72206 0.86271,-1.0784 0.0938,-0.35634 0.075,-0.62828 -0.0844,-0.8252 -0.16879,-0.23444 -0.42198,-0.31883 -0.76894,-0.28132 -0.33759,0.0469 -0.75957,0.21568 -1.24719,0.51575 -0.497,0.30008 -0.87209,0.48762 -1.12528,0.56264 -0.25319,0.075 -0.497,0.0844 -0.72206,0.009 -0.22505,-0.0656 -0.40322,-0.18755 -0.5345,-0.35634 -0.24381,-0.31883 -0.28132,-0.68455 -0.11253,-1.10653 0.16879,-0.42198 0.46887,-0.78769 0.90022,-1.1159 0.50638,-0.38447 1.01276,-0.58139 1.51913,-0.57202 z m 5.15754,-1.76294 c -0.25319,-0.3282 -0.47825,-0.72205 -0.67517,-1.16279 -0.19693,-0.44073 -0.36572,-1.00337 -0.48762,-1.68792 -0.13129,-0.68454 -0.1688,-1.1159 -0.14066,-1.30345 0.0187,-0.0563 0.0375,-0.10315 0.0844,-0.13128 0.0469,-0.0469 0.10315,-0.0563 0.16879,-0.0563 0.0657,0.009 0.11253,0.0375 0.15004,0.0844 0.0281,0.0375 0.0375,0.075 0.0469,0.12191 0.1219,0.83458 0.30007,1.53788 0.52513,2.12865 0.22506,0.5814 0.52
 513,1.10653 0.89085,1.58477 0.36571,0.48763 0.79707,0.90961 1.29407,1.2847 0.497,0.37509 1.13466,0.73143 1.9036,1.07839 0.0469,0.0188 0.075,0.0375 0.10315,0.075 0.0375,0.0469 0.0469,0.10315 0.0375,0.16879 -0.009,0.0656 -0.0375,0.11253 -0.0844,0.15004 -0.0469,0.0375 -0.0938,0.0469 -0.15004,0.0469 -0.18754,-0.0188 -0.58139,-0.17817 -1.19092,-0.46887 -0.6189,-0.30007 -1.10652,-0.60015 -1.48162,-0.90022 -0.37509,-0.30008 -0.7033,-0.63766 -0.99399,-1.01276 z m 6.98611,-4.56676 -2.41935,1.83796 0.53451,1.71605 0.7033,-0.53451 c 0.0844,-0.0656 0.15941,-0.0938 0.22505,-0.0938 0.0656,0.009 0.11253,0.0375 0.15942,0.0938 0.0375,0.0563 0.0469,0.11253 0.0375,0.17817 -0.009,0.0563 -0.0563,0.12191 -0.15004,0.18755 l -1.36909,1.05026 c -0.0844,0.0656 -0.16879,0.0938 -0.22506,0.0844 -0.0656,0 -0.1219,-0.0281 -0.15941,-0.0844 -0.0469,-0.0563 -0.0563,-0.11253 -0.0469,-0.17817 0.009,-0.0656 0.0656,-0.1219 0.15004,-0.18754 l 0.28132,-0.22506 -1.70668,-5.61702 -1.07839,0.8252 c -0.0938,0.0656 -0.16879,0.
 10315 -0.22506,0.0938 -0.0656,0 -0.1219,-0.0281 -0.15941,-0.0844 -0.0469,-0.0563 -0.0563,-0.12191 -0.0469,-0.17817 0.009,-0.0656 0.0656,-0.13128 0.15004,-0.19693 l 1.84734,-1.4066 5.41072,3.3946 0.28132,-0.21568 c 0.0844,-0.075 0.15941,-0.10315 0.22505,-0.0938 0.0657,0 0.11253,0.0281 0.15942,0.0844 0.0375,0.0563 0.0563,0.11253 0.0375,0.17817 -0.009,0.0656 -0.0563,0.13129 -0.14066,0.19693 l -1.36909,1.04088 c -0.0938,0.0656 -0.16879,0.0938 -0.23444,0.0938 -0.0656,-0.009 -0.1219,-0.0375 -0.15941,-0.0938 -0.0375,-0.0563 -0.0563,-0.11253 -0.0469,-0.17817 0.0188,-0.0563 0.0656,-0.1219 0.15942,-0.18754 l 0.69392,-0.53451 z m -0.42198,-0.26256 -3.04763,-1.89423 -0.13129,0.10315 1.05027,3.41336 z m 4.15416,-1.95987 1.2003,-0.90022 0.53451,3.39459 c 0.0281,0.20631 -0.009,0.35634 -0.12191,0.44074 -0.075,0.0563 -0.15003,0.075 -0.24381,0.0656 -0.0938,-0.0188 -0.15941,-0.0563 -0.21568,-0.13128 -0.0281,-0.0281 -0.0469,-0.0656 -0.0656,-0.10315 z m 5.13878,-2.4006 -3.32895,-4.37921 -0.48762,0.37509
  c -0.0938,0.0656 -0.1688,0.0938 -0.23444,0.0938 -0.0563,-0.009 -0.11253,-0.0375 -0.15003,-0.0938 -0.0469,-0.0563 -0.0657,-0.11253 -0.0469,-0.17817 0.009,-0.0563 0.0563,-0.1219 0.15004,-0.18754 l 2.30682,-1.76294 c 0.45949,-0.34696 0.92836,-0.497 1.41598,-0.45012 0.48762,0.0469 0.87209,0.24382 1.14403,0.60015 0.33759,0.45012 0.3751,0.994 0.10315,1.64104 0.48763,-0.15004 0.90023,-0.18755 1.25657,-0.0938 0.34696,0.0938 0.62828,0.28132 0.83458,0.55327 0.18755,0.24381 0.30007,0.51575 0.33758,0.80645 0.0375,0.2907 -0.009,0.60015 -0.14066,0.91898 -0.13128,0.3282 -0.33758,0.60015 -0.62828,0.81583 l -2.74756,2.08176 c -0.0844,0.075 -0.15941,0.10316 -0.22505,0.0938 -0.0656,-0.009 -0.11253,-0.0375 -0.15942,-0.0844 -0.0375,-0.0563 -0.0563,-0.11253 -0.0375,-0.17817 0.009,-0.0656 0.0563,-0.13128 0.15004,-0.19692 z m -1.50975,-2.75693 1.30345,-0.98462 c 0.27194,-0.2063 0.48762,-0.45011 0.63766,-0.73143 0.1219,-0.21568 0.16879,-0.44074 0.15004,-0.6658 -0.0188,-0.23443 -0.0844,-0.43135 -0.2063,-0.5
 9077 -0.17817,-0.22505 -0.45012,-0.35634 -0.81583,-0.38447 -0.35634,-0.0375 -0.71268,0.0844 -1.05964,0.34696 l -1.46287,1.10653 z m 1.88485,2.47561 1.85671,-1.4066 c 0.23443,-0.17817 0.40322,-0.39384 0.497,-0.62828 0.0938,-0.23443 0.13128,-0.46886 0.10315,-0.69392 -0.0281,-0.22506 -0.10315,-0.4126 -0.23444,-0.58139 -0.14066,-0.18755 -0.33758,-0.31883 -0.60015,-0.39385 -0.26256,-0.075 -0.54388,-0.075 -0.84396,0.009 -0.30007,0.0844 -0.65641,0.28132 -1.05964,0.59078 l -1.3222,1.00337 z m 5.12002,-5.39196 1.19092,-0.90961 0.54389,3.40398 c 0.0281,0.2063 -0.009,0.34696 -0.12191,0.43136 -0.075,0.0563 -0.15941,0.0844 -0.24381,0.0656 -0.0938,-0.009 -0.15941,-0.0563 -0.21568,-0.12191 -0.0281,-0.0375 -0.0469,-0.0656 -0.0656,-0.11253 z m 5.55138,-5.77644 1.47225,1.94111 0.95648,-0.72206 c 0.0844,-0.0656 0.15942,-0.0938 0.22506,-0.0938 0.0656,0.009 0.11253,0.0375 0.15941,0.0938 0.0375,0.0469 0.0563,0.11253 0.0375,0.16879 -0.009,0.0656 -0.0563,0.13129 -0.14066,0.19693 l -2.28807,1.7348 c -0.0844
 ,0.0657 -0.15941,0.0938 -0.22505,0.0844 -0.0656,0 -0.11253,-0.0281 -0.15942,-0.0844 -0.0375,-0.0563 -0.0563,-0.11252 -0.0469,-0.17816 0.0187,-0.0563 0.0656,-0.12191 0.15941,-0.18755 l 0.94711,-0.73143 -1.47224,-1.94111 -3.46961,-1.20968 -0.21568,0.15942 c -0.0938,0.0656 -0.1688,0.10315 -0.22506,0.0938 -0.0656,0 -0.1219,-0.0281 -0.15941,-0.0844 -0.0469,-0.0563 -0.0563,-0.1219 -0.0469,-0.17817 0.009,-0.0656 0.0656,-0.13128 0.15004,-0.19692 l 1.01275,-0.76894 c 0.0844,-0.0656 0.15941,-0.0938 0.22506,-0.0938 0.0656,0.009 0.11252,0.0375 0.15941,0.0938 0.0375,0.0563 0.0563,0.11253 0.0469,0.17817 -0.0188,0.0656 -0.0656,0.1219 -0.15004,0.19692 l -0.36572,0.27194 2.94449,1.02213 -0.22506,-3.08514 -0.36572,0.27194 c -0.0844,0.0656 -0.16879,0.10315 -0.22505,0.0938 -0.0656,0 -0.12191,-0.0375 -0.15942,-0.0844 -0.0469,-0.0563 -0.0563,-0.1219 -0.0469,-0.17817 0.009,-0.0656 0.0656,-0.13128 0.15941,-0.19692 l 1.00338,-0.76894 c 0.0844,-0.0657 0.15941,-0.0938 0.22505,-0.0938 0.0656,0.009 0.11253,0.03
 75 0.15942,0.0938 0.0375,0.0563 0.0563,0.11252 0.0469,0.17816 -0.0187,0.0563 -0.0656,0.12191 -0.15003,0.18755 l -0.22506,0.16879 z m 5.39197,-3.88222 c 0.25319,0.33758 0.47825,0.72206 0.67517,1.16279 0.19692,0.44073 0.35634,1.00337 0.48762,1.68792 0.13128,0.68455 0.16879,1.1159 0.13128,1.30345 -0.009,0.0656 -0.0281,0.11253 -0.075,0.14066 -0.0469,0.0375 -0.10316,0.0563 -0.1688,0.0469 -0.0656,-0.009 -0.1219,-0.0375 -0.15941,-0.0844 -0.0188,-0.0375 -0.0281,-0.075 -0.0375,-0.1219 -0.12191,-0.83459 -0.30008,-1.53789 -0.52513,-2.11928 -0.22506,-0.5814 -0.52513,-1.11591 -0.89085,-1.59415 -0.36571,-0.47824 -0.79707,-0.9096 -1.29407,-1.28469 -0.497,-0.3751 -1.13466,-0.73144 -1.9036,-1.0784 -0.0469,-0.0187 -0.0844,-0.0469 -0.10315,-0.075 -0.0375,-0.0469 -0.0563,-0.10315 -0.0469,-0.16879 0.009,-0.0563 0.0469,-0.11253 0.0938,-0.15004 0.0469,-0.0281 0.0938,-0.0469 0.15003,-0.0375 0.18755,0.0188 0.5814,0.1688 1.19093,0.46887 0.6189,0.2907 1.10652,0.59077 1.48161,0.89085 0.3751,0.30007 0.7033,0.63
 766 0.994,1.01275 z"
-         id="path3067"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none" />
-      <path
-         d="m 114.51603,133.51451 -1.23781,1.63166 0.67517,0.51575 c 0.0938,0.075 0.14066,0.14066 0.15941,0.19693 0.009,0.0656 -0.009,0.1219 -0.0469,0.17817 -0.0375,0.0563 -0.0938,0.0844 -0.15942,0.0844 -0.0656,0.009 -0.14066,-0.0188 -0.23443,-0.0844 l -1.53788,-1.17217 c -0.0938,-0.0656 -0.14066,-0.13128 -0.15004,-0.19693 -0.009,-0.0563 0,-0.1219 0.0375,-0.16879 0.0469,-0.0563 0.10315,-0.0844 0.15941,-0.0938 0.0656,0 0.14066,0.0281 0.23444,0.0938 l 0.48762,0.3751 3.31957,-4.3886 -0.48762,-0.37509 c -0.0938,-0.0656 -0.14066,-0.13128 -0.15003,-0.19693 -0.0188,-0.0656 0,-0.1219 0.0375,-0.17816 0.0469,-0.0563 0.0938,-0.0844 0.15942,-0.0844 0.0656,-0.009 0.14066,0.0188 0.22506,0.0844 l 1.54726,1.17216 c 0.0938,0.0656 0.14066,0.13129 0.15003,0.19693 0.0188,0.0563 0,0.1219 -0.0375,0.17817 -0.0469,0.0469 -0.0938,0.0844 -0.15941,0.0844 -0.0656,0.009 -0.14066,-0.0281 -0.23443,-0.0938 l -0.67517,-0.51576 -1.71606,2.26932 4.25732,-0.34696 -0.40323,-0.30008 c -0.0938,-0.075 -0.14066,-0.14066 -0
 .15941,-0.19692 -0.009,-0.0656 0.009,-0.12191 0.0469,-0.17817 0.0375,-0.0563 0.0938,-0.0844 0.15942,-0.0938 0.0656,0 0.14066,0.0281 0.22505,0.0938 l 1.08777,0.82521 c 0.0844,0.0656 0.14066,0.13128 0.15004,0.18754 0.009,0.0656 0,0.12191 -0.0469,0.17817 -0.0375,0.0563 -0.0938,0.0844 -0.15941,0.0938 -0.0563,0 -0.13128,-0.0281 -0.22506,-0.0938 l -0.15003,-0.11253 -3.3946,0.28132 c 0.21568,0.34697 0.34696,0.69393 0.40323,1.02213 0.0656,0.33759 0.0563,0.74081 -0.0188,1.21906 -0.0375,0.27194 -0.17817,0.86271 -0.43136,1.77231 l 0.45949,0.34697 c 0.0938,0.075 0.14066,0.14066 0.15942,0.19692 0.009,0.0656 -0.009,0.12191 -0.0469,0.17817 -0.0375,0.0563 -0.0938,0.0844 -0.15941,0.0938 -0.0656,0 -0.14066,-0.0281 -0.23444,-0.0938 l -0.7033,-0.54389 c 0.41261,-1.34096 0.60015,-2.26931 0.56264,-2.76631 -0.0281,-0.50638 -0.19692,-0.96587 -0.50637,-1.38785 z m 8.42085,6.67667 -3.75094,-2.84134 c -0.30007,0.52513 -0.38447,1.05964 -0.27194,1.60353 0.11253,0.54388 0.4126,1.00337 0.9096,1.37847 0.27194,0.20
 63 0.60015,0.38447 0.96587,0.51575 0.37509,0.14066 0.7033,0.2063 1.01275,0.19692 0.0844,0 0.15004,0.0188 0.18755,0.0469 0.0469,0.0375 0.075,0.0844 0.0844,0.15004 0.009,0.0563 -0.009,0.11252 -0.0469,0.16879 -0.0375,0.0469 -0.0938,0.075 -0.17817,0.0938 -0.25319,0.0375 -0.60015,-0.0187 -1.05964,-0.15941 -0.45012,-0.14066 -0.86272,-0.35634 -1.24719,-0.64704 -0.62828,-0.47824 -0.994,-1.07839 -1.10652,-1.80982 -0.11253,-0.73144 0.0563,-1.39723 0.50637,-1.97862 0.40323,-0.53451 0.94711,-0.84396 1.63166,-0.92836 0.68454,-0.0844 1.3222,0.0938 1.9036,0.53451 0.60015,0.45949 0.94711,1.03151 1.04088,1.71605 0.0938,0.68455 -0.10315,1.34096 -0.58139,1.95987 z m -0.0938,-0.65642 c 0.23444,-0.45949 0.2907,-0.93773 0.17817,-1.42535 -0.1219,-0.48762 -0.39385,-0.89085 -0.8252,-1.21906 -0.43136,-0.3282 -0.89085,-0.47824 -1.39723,-0.45949 -0.49699,0.0188 -0.93773,0.19693 -1.33158,0.55327 z m 2.13804,5.05439 0.88147,-4.7168 -0.11253,-0.0844 c -0.0844,-0.0656 -0.13129,-0.13128 -0.14066,-0.19692 -0.0188,-0
 .0656 0,-0.12191 0.0375,-0.1688 0.0281,-0.0375 0.0656,-0.0656 0.10315,-0.0844 0.0469,-0.0188 0.0844,-0.0188 0.13128,-0.009 0.0375,0.009 0.0938,0.0375 0.15004,0.0844 l 1.01275,0.76894 c 0.0844,0.0656 0.14066,0.13128 0.15004,0.18754 0.009,0.0657 -0.009,0.13129 -0.0469,0.17817 -0.0375,0.0563 -0.0938,0.0844 -0.15941,0.0938 -0.0656,0.009 -0.14066,-0.0281 -0.22506,-0.0938 l -0.497,-0.37509 -0.77832,4.16354 3.76031,-1.9036 -0.48762,-0.37509 c -0.0938,-0.0656 -0.14066,-0.13129 -0.15003,-0.19693 -0.0188,-0.0563 0,-0.1219 0.0469,-0.17817 0.0375,-0.0469 0.0844,-0.075 0.15004,-0.0844 0.0656,-0.009 0.14066,0.0188 0.23443,0.0844 l 1.00338,0.75957 c 0.0844,0.075 0.14066,0.14066 0.15003,0.2063 0.009,0.0563 -0.009,0.1219 -0.0469,0.17817 -0.0281,0.0375 -0.0656,0.0563 -0.11253,0.075 -0.0469,0.0188 -0.0938,0.0188 -0.13128,0 -0.0375,-0.009 -0.12191,-0.0657 -0.25319,-0.1688 l -5.92648,2.982 0.5814,0.44073 c 0.0844,0.0656 0.13128,0.13128 0.15004,0.18755 0.009,0.0656 -0.009,0.1219 -0.0469,0.17817 -0.0375,0
 .0563 -0.0938,0.0844 -0.15942,0.0844 -0.0656,0.009 -0.14066,-0.0187 -0.22505,-0.0844 l -2.11928,-1.60353 c -0.0938,-0.0656 -0.14066,-0.13128 -0.15004,-0.19692 -0.0187,-0.0563 0,-0.12191 0.0375,-0.17817 0.0469,-0.0563 0.10315,-0.0844 0.15942,-0.0844 0.0656,-0.009 0.14066,0.0188 0.22505,0.0938 l 1.15341,0.86271 z m 9.2648,1.41597 c 0.0657,-0.0844 0.13129,-0.13128 0.19693,-0.14066 0.0656,-0.009 0.1219,0 0.16879,0.0375 0.0563,0.0469 0.0938,0.0938 0.0938,0.15942 0.009,0.0656 -0.0188,0.14066 -0.0938,0.23443 l -0.47824,0.62828 c -0.0657,0.0938 -0.13129,0.14066 -0.18755,0.15004 -0.0656,0.0188 -0.13128,0 -0.17817,-0.0469 -0.0563,-0.0375 -0.0844,-0.0844 -0.0938,-0.14066 -0.009,-0.0563 0.009,-0.1219 0.0563,-0.19692 0.12191,-0.19692 0.14066,-0.40323 0.0656,-0.64704 -0.1219,-0.33758 -0.37509,-0.66579 -0.76894,-0.95649 -0.4126,-0.31882 -0.79707,-0.47824 -1.15341,-0.47824 -0.26257,-0.009 -0.45949,0.0656 -0.57202,0.22506 -0.13128,0.16879 -0.14066,0.39384 -0.0281,0.66579 0.0844,0.18754 0.30007,0.450
 11 0.65641,0.79707 0.46887,0.44074 0.77832,0.76894 0.91898,0.994 0.2063,0.30945 0.31883,0.6189 0.31883,0.90022 0,0.2907 -0.075,0.53451 -0.23443,0.74081 -0.24381,0.31883 -0.60015,0.47825 -1.08777,0.497 -0.48763,0.0188 -1.02213,-0.19692 -1.60353,-0.63766 -0.58139,-0.44073 -0.93773,-0.94711 -1.08777,-1.51913 -0.075,0.10315 -0.13128,0.15942 -0.16879,0.17817 -0.0281,0.0188 -0.0656,0.0281 -0.11253,0.0281 -0.0469,0 -0.0938,-0.0188 -0.13128,-0.0469 -0.0469,-0.0375 -0.0844,-0.0938 -0.0844,-0.15942 -0.009,-0.0656 0.0188,-0.14066 0.0844,-0.22505 l 0.57202,-0.75957 c 0.075,-0.0844 0.13128,-0.14066 0.19692,-0.15003 0.0656,-0.009 0.1219,0 0.17817,0.0469 0.0469,0.0375 0.0844,0.0844 0.0938,0.15004 0.009,0.0656 -0.009,0.13128 -0.0563,0.18755 -0.10315,0.14066 -0.15942,0.27194 -0.15942,0.42198 0,0.21568 0.075,0.45011 0.22506,0.71267 0.15941,0.26257 0.39385,0.51576 0.72205,0.75957 0.47825,0.36572 0.90023,0.54388 1.26594,0.54388 0.3751,0 0.62829,-0.10315 0.77832,-0.30007 0.17817,-0.23443 0.19693,-0.5063
 8 0.0656,-0.81583 -0.14066,-0.31883 -0.42198,-0.67517 -0.84396,-1.06901 -0.42198,-0.39385 -0.7033,-0.70331 -0.84396,-0.92836 -0.14066,-0.22506 -0.2063,-0.45949 -0.2063,-0.69392 0.009,-0.23444 0.075,-0.44074 0.2063,-0.61891 0.24382,-0.30945 0.5814,-0.45011 1.03151,-0.39385 0.45011,0.0469 0.89085,0.23444 1.32221,0.55327 0.50637,0.38447 0.8252,0.8252 0.95648,1.31282 z m 3.09452,4.49175 c 0.25319,-0.33759 0.56265,-0.65642 0.93774,-0.96587 0.37509,-0.30945 0.87209,-0.6189 1.491,-0.92835 0.6189,-0.30008 1.0315,-0.45949 1.21905,-0.47825 0.0563,-0.009 0.11253,0.009 0.15004,0.0375 0.0563,0.0375 0.0844,0.0938 0.0938,0.15004 0.009,0.0656 0,0.1219 -0.0375,0.16879 -0.0281,0.0281 -0.0563,0.0563 -0.10315,0.075 -0.76894,0.34696 -1.4066,0.71268 -1.9036,1.08777 -0.497,0.37509 -0.92836,0.79707 -1.29407,1.27532 -0.36572,0.48762 -0.65642,1.01275 -0.88147,1.60352 -0.22506,0.5814 -0.40323,1.2847 -0.52513,2.11928 0,0.0469 -0.0188,0.0844 -0.0375,0.11253 -0.0375,0.0563 -0.0938,0.0844 -0.15942,0.0938 -0.0563,
 0.009 -0.1219,-0.009 -0.16879,-0.0469 -0.0469,-0.0281 -0.0656,-0.075 -0.0844,-0.14066 -0.0281,-0.17816 0.0187,-0.60014 0.13128,-1.26594 0.1219,-0.66579 0.27194,-1.22843 0.45949,-1.66916 0.18754,-0.44074 0.42198,-0.84396 0.71267,-1.22843 z m 2.6069,4.82933 3.31958,-4.3886 -0.30945,-0.23443 c -0.0844,-0.0656 -0.14066,-0.13128 -0.15004,-0.19693 -0.009,-0.0656 0,-0.1219 0.0469,-0.17816 0.0375,-0.0563 0.0938,-0.0844 0.15941,-0.0844 0.0563,-0.009 0.13129,0.0187 0.22506,0.0844 l 1.97862,1.50037 c 0.30945,0.23443 0.54388,0.51575 0.7033,0.83458 0.15941,0.31883 0.24381,0.61891 0.25319,0.88147 0.0187,0.45011 -0.0281,0.88147 -0.14066,1.27532 -0.0844,0.2907 -0.24382,0.59077 -0.47825,0.90022 l -0.38447,0.50638 c -0.28132,0.37509 -0.64704,0.67517 -1.07839,0.9096 -0.43136,0.22506 -0.94711,0.31883 -1.52851,0.2907 -0.44073,-0.0281 -0.84396,-0.17817 -1.22843,-0.46887 l -1.97862,-1.50037 c -0.0844,-0.0656 -0.13128,-0.13128 -0.15004,-0.19693 -0.009,-0.0563 0.009,-0.1219 0.0469,-0.16879 0.0375,-0.0563 0.
 0938,-0.0844 0.15942,-0.0938 0.0656,0 0.14066,0.0281 0.22505,0.0938 z m 0.36572,0.28132 1.34096,1.01275 c 0.30945,0.23443 0.67517,0.34696 1.09715,0.34696 0.42198,0.009 0.80645,-0.0844 1.16279,-0.26257 0.34696,-0.17817 0.6189,-0.40322 0.83458,-0.67516 l 0.497,-0.6658 c 0.17817,-0.23443 0.2907,-0.45948 0.35634,-0.69392 0.0938,-0.33758 0.13128,-0.7033 0.11253,-1.07839 -0.009,-0.2063 -0.075,-0.44074 -0.19693,-0.7033 -0.1219,-0.26257 -0.30945,-0.48762 -0.54388,-0.66579 l -1.33159,-1.01275 z m 6.46099,3.37584 1.19092,0.9096 -3.13203,1.43473 c -0.18755,0.0844 -0.33759,0.0844 -0.45012,0 -0.075,-0.0563 -0.11252,-0.13128 -0.1219,-0.21568 -0.0188,-0.0938 0.009,-0.16879 0.0563,-0.24381 0.0281,-0.0281 0.0563,-0.0563 0.0938,-0.0938 z m 5.66391,2.47561 -1.59415,2.1099 2.83195,2.13804 0.82521,-1.08777 c 0.0656,-0.0844 0.12191,-0.13129 0.18755,-0.15004 0.0656,-0.009 0.1219,0 0.17817,0.0469 0.0563,0.0375 0.0844,0.0938 0.0938,0.15941 0,0.0656 -0.0281,0.14066 -0.0938,0.22506 l -1.10653,1.45348 -4.06039
 ,-3.07576 c -0.0844,-0.0656 -0.13128,-0.13129 -0.15003,-0.18755 -0.009,-0.0656 0.009,-0.12191 0.0469,-0.17817 0.0375,-0.0563 0.0938,-0.0844 0.15942,-0.0938 0.0656,0 0.14066,0.0281 0.22505,0.0938 l 0.497,0.37509 3.31958,-4.38859 -0.497,-0.37509 c -0.0844,-0.0656 -0.14066,-0.13129 -0.15004,-0.19693 -0.009,-0.0563 0,-0.1219 0.0469,-0.16879 0.0375,-0.0563 0.0938,-0.0938 0.15942,-0.0938 0.0563,-0.009 0.13128,0.0281 0.22505,0.0938 l 3.86346,2.92573 -0.95648,1.26594 c -0.0657,0.0844 -0.13129,0.14066 -0.18755,0.15004 -0.0656,0.009 -0.12191,0 -0.17817,-0.0469 -0.0563,-0.0375 -0.0844,-0.0938 -0.0938,-0.15941 0,-0.0563 0.0281,-0.13129 0.0938,-0.22506 l 0.68454,-0.90022 -2.6444,-1.988 -1.44411,1.91298 1.31282,0.99399 0.31883,-0.4126 c 0.0656,-0.0938 0.13129,-0.14066 0.19693,-0.15004 0.0656,-0.009 0.1219,0 0.17817,0.0469 0.0469,0.0375 0.0844,0.0938 0.0844,0.15942 0.009,0.0563 -0.0188,0.13128 -0.0844,0.22505 l -0.90961,1.2003 c -0.0656,0.0844 -0.13128,0.14066 -0.19692,0.15004 -0.0656,0.009 -0.121
 9,-0.009 -0.17817,-0.0469 -0.0469,-0.0375 -0.0844,-0.0938 -0.0844,-0.15941 -0.009,-0.0656 0.0188,-0.14066 0.0844,-0.22506 l 0.31883,-0.42198 z m 4.97936,5.57952 1.2003,0.9096 -3.13203,1.43473 c -0.18755,0.0844 -0.33758,0.0844 -0.45011,0 -0.075,-0.0563 -0.11253,-0.13128 -0.13128,-0.21568 -0.009,-0.0938 0.009,-0.16879 0.0656,-0.24381 0.0187,-0.0281 0.0563,-0.0563 0.0844,-0.0844 z m 6.90172,7.29557 -3.54463,-2.67254 0.40322,-0.53451 6.16091,-1.94111 0.0281,-0.0281 -2.45687,-1.85672 -0.81583,1.08777 c -0.0656,0.0938 -0.13128,0.14066 -0.19692,0.15004 -0.0656,0.009 -0.1219,0 -0.17817,-0.0469 -0.0563,-0.0375 -0.0844,-0.0844 -0.0938,-0.15004 0,-0.0656 0.0281,-0.14066 0.0938,-0.23444 l 1.09715,-1.45348 3.17892,2.4006 -0.39385,0.52513 -6.16091,1.94111 -0.0281,0.0375 2.8132,2.13804 0.97524,-1.2847 c 0.0656,-0.0938 0.13128,-0.15004 0.19692,-0.15941 0.0657,-0.009 0.12191,0 0.17817,0.0469 0.0563,0.0375 0.0844,0.0938 0.0844,0.15942 0.009,0.0563 -0.0188,0.14066 -0.0938,0.22505 z m 5.35446,0.62828 c
  -0.25319,0.33758 -0.56264,0.65641 -0.93773,0.96586 -0.3751,0.30946 -0.8721,0.61891 -1.491,0.92836 -0.62828,0.30945 -1.03151,0.46887 -1.21905,0.47824 -0.0657,0.009 -0.11253,0 -0.15004,-0.0281 -0.0563,-0.0469 -0.0844,-0.0938 -0.0938,-0.15941 -0.009,-0.0656 0,-0.12191 0.0375,-0.1688 0.0281,-0.0281 0.0563,-0.0563 0.10316,-0.075 0.76894,-0.34697 1.39722,-0.7033 1.90359,-1.0784 0.497,-0.37509 0.92836,-0.80645 1.29408,-1.28469 0.36571,-0.47825 0.65641,-1.01275 0.88147,-1.59415 0.22505,-0.58139 0.40322,-1.29407 0.52513,-2.12866 0.009,-0.0469 0.0188,-0.0844 0.0375,-0.11252 0.0375,-0.0563 0.0938,-0.0844 0.15941,-0.0844 0.0563,-0.009 0.11253,0 0.16879,0.0469 0.0375,0.0281 0.0656,0.075 0.075,0.13128 0.0375,0.17817 -0.009,0.60953 -0.1219,1.27532 -0.12191,0.66579 -0.28132,1.21905 -0.45949,1.65979 -0.18755,0.44073 -0.43136,0.85333 -0.71268,1.22843 z"
-         id="path3069"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none" />
-      <text
-         x="114.804"
-         y="265.14883"
-         id="text3071"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Courier New">Keys(D,E,Z)</text>
-      <path
-         d="m 260.93376,314.23453 c 0,3.35709 -2.1943,6.09527 -4.91373,6.09527 l -77.30676,0 c -2.70067,0 -4.91372,2.71943 -4.91372,6.09527 0,-3.37584 -2.1943,-6.09527 -4.91373,-6.09527 l -77.288001,0 c -2.719427,0 -4.932478,-2.73818 -4.932478,-6.09527"
-         id="path3073"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="137.67361"
-         y="346.35568"
-         id="text3075"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Redistribution</text>
-      <text
-         x="149.67661"
-         y="359.85904"
-         id="text3077"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[</text>
-      <text
-         x="154.77788"
-         y="359.85904"
-         id="text3079"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-      <text
-         x="188.9864"
-         y="359.85904"
-         id="text3081"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()]</text>
-    </g>
-  </g>
-</svg>


[62/89] [abbrv] flink git commit: [FLINK-4437] [checkpoints] Properly lock the triggerCheckpoint() method

Posted by se...@apache.org.
[FLINK-4437] [checkpoints] Properly lock the triggerCheckpoint() method

This introduces a trigger lock that makes sure checkpoint trigger attempts do not overtake
each other (as may otherwise happen when periodic checkpoints and manual savepoints are triggered concurrently).

This also fixes the evaluation of the minimum delay between checkpoints.

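For context, here is a minimal, self-contained sketch of the two-lock pattern the
commit describes. All class and field names are illustrative stand-ins, not the
coordinator's actual API:

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch only: 'stateLock' plays the role of the coordinator-wide lock, while
    // 'triggerLock' serializes trigger attempts so that a blocking call (such as a
    // distributed ID counter) cannot stall acknowledge/decline handling.
    public class TriggerLockSketch {
        private final Object stateLock = new Object();
        private final Object triggerLock = new Object();
        private final AtomicLong idCounter = new AtomicLong(); // stands in for a ZooKeeper counter

        public long trigger() {
            synchronized (triggerLock) {                // trigger attempts cannot overtake each other
                long id = idCounter.getAndIncrement();  // may block in the real system
                synchronized (stateLock) {              // keep the state critical section short
                    // register the pending checkpoint 'id' here
                }
                return id;
            }
        }

        public void acknowledge(long id) {
            synchronized (stateLock) {                  // never waits on a slow trigger
                // mark checkpoint 'id' as acknowledged here
            }
        }
    }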

Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/4da40bcb
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/4da40bcb
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/4da40bcb

Branch: refs/heads/flip-6
Commit: 4da40bcb9ea01cb0c5e6fd0d7472dc09397f648e
Parents: 4e45659
Author: Stephan Ewen <se...@apache.org>
Authored: Wed Aug 24 14:02:47 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Wed Aug 24 19:56:17 2016 +0200

----------------------------------------------------------------------
 .../checkpoint/CheckpointCoordinator.java       | 219 +++++++++++--------
 .../checkpoint/CoordinatorShutdownTest.java     |   3 +-
 2 files changed, 133 insertions(+), 89 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/4da40bcb/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
index ff54bad..2c0e63b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
@@ -80,6 +80,13 @@ public class CheckpointCoordinator {
 	/** Coordinator-wide lock to safeguard the checkpoint updates */
 	private final Object lock = new Object();
 
+	/** Special lock to make sure that trigger requests do not overtake each other.
+	 * This is not done with the coordinator-wide lock, because as part of triggering,
+	 * blocking operations may happen (distributed atomic counters).
+	 * Using a dedicated lock, we avoid blocking the processing of 'acknowledge/decline'
+	 * messages during that phase. */
+	private final Object triggerLock = new Object();
+
 	/** The job whose checkpoint this coordinator coordinates */
 	private final JobID job;
 
@@ -179,6 +186,12 @@ public class CheckpointCoordinator {
 		checkArgument(minPauseBetweenCheckpoints >= 0, "minPauseBetweenCheckpoints must be >= 0");
 		checkArgument(maxConcurrentCheckpointAttempts >= 1, "maxConcurrentCheckpointAttempts must be >= 1");
 
+		// it does not make sense to schedule checkpoints more often than the
+		// desired minimum pause between checkpoints
+		if (baseInterval < minPauseBetweenCheckpoints) {
+			baseInterval = minPauseBetweenCheckpoints;
+		}
+
 		this.job = checkNotNull(job);
 		this.baseInterval = baseInterval;
 		this.checkpointTimeout = checkpointTimeout;
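
As a quick illustration of the clamp above, with hypothetical values:

    public class IntervalClampDemo {
        public static void main(String[] args) {
            long baseInterval = 500L;                  // requested trigger interval in ms (hypothetical)
            long minPauseBetweenCheckpoints = 2000L;   // required pause in ms (hypothetical)
            if (baseInterval < minPauseBetweenCheckpoints) {
                baseInterval = minPauseBetweenCheckpoints;
            }
            // prints 2000: checkpoints are never scheduled more often than the pause allows
            System.out.println(baseInterval);
        }
    }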
@@ -202,8 +215,8 @@ public class CheckpointCoordinator {
 			// Make sure the checkpoint ID enumerator is running. Possibly
 			// issues a blocking call to ZooKeeper.
 			checkpointIDCounter.start();
-		} catch (Exception e) {
-			throw new Exception("Failed to start checkpoint ID counter: " + e.getMessage(), e);
+		} catch (Throwable t) {
+			throw new Exception("Failed to start checkpoint ID counter: " + t.getMessage(), t);
 		}
 	}
 
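The widened catch matters because starting the counter may call out to ZooKeeper,
and such calls can fail with Errors as well as Exceptions. A condensed sketch of the
wrap-and-rethrow (the interface and names are illustrative, not the real API):

    public class CounterStartSketch {
        interface CheckpointIDCounter { void start() throws Exception; }

        static void startCounter(CheckpointIDCounter counter) throws Exception {
            try {
                counter.start(); // possibly a blocking ZooKeeper call
            } catch (Throwable t) {
                // catch Throwable, not just Exception, so Errors are reported with context too
                throw new Exception("Failed to start checkpoint ID counter: " + t.getMessage(), t);
            }
        }
    }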
@@ -335,7 +348,13 @@ public class CheckpointCoordinator {
 				}
 
 				// make sure the minimum interval between checkpoints has passed
-				if (lastTriggeredCheckpoint + minPauseBetweenCheckpoints > timestamp) {
+				long nextCheckpointEarliest = lastTriggeredCheckpoint + minPauseBetweenCheckpoints;
+				if (nextCheckpointEarliest < 0) {
+					// overflow
+					nextCheckpointEarliest = Long.MAX_VALUE;
+				}
+
+				if (nextCheckpointEarliest > timestamp) {
 					if (currentPeriodicTrigger != null) {
 						currentPeriodicTrigger.cancel();
 						currentPeriodicTrigger = null;
@@ -343,7 +362,8 @@ public class CheckpointCoordinator {
 					ScheduledTrigger trigger = new ScheduledTrigger();
 					// Reassign the new trigger to the currentPeriodicTrigger
 					currentPeriodicTrigger = trigger;
-					timer.scheduleAtFixedRate(trigger, minPauseBetweenCheckpoints, baseInterval);
+					long delay = nextCheckpointEarliest - timestamp;
+					timer.scheduleAtFixedRate(trigger, delay, baseInterval);
 					return new CheckpointTriggerResult(CheckpointDeclineReason.MINIMUM_TIME_BETWEEN_CHECKPOINTS);
 				}
 			}
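
Without the guard above, lastTriggeredCheckpoint + minPauseBetweenCheckpoints can wrap
around to a negative value, the "nextCheckpointEarliest > timestamp" test would then be
false, and the minimum pause would silently be skipped. A small demo with made-up values:

    public class MinPauseOverflowDemo {
        public static void main(String[] args) {
            long lastTriggeredCheckpoint = System.currentTimeMillis();
            long minPauseBetweenCheckpoints = Long.MAX_VALUE; // "practically never" (hypothetical)
            long timestamp = System.currentTimeMillis();

            long nextCheckpointEarliest = lastTriggeredCheckpoint + minPauseBetweenCheckpoints;
            // without the guard this is negative, and negative > timestamp is false
            if (nextCheckpointEarliest < 0) {
                nextCheckpointEarliest = Long.MAX_VALUE; // saturate instead of wrapping
            }
            long delay = nextCheckpointEarliest - timestamp; // remaining wait before the next trigger
            System.out.println("re-schedule trigger with delay " + delay + " ms");
        }
    }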
@@ -380,105 +400,130 @@ public class CheckpointCoordinator {
 
 		// we will actually trigger this checkpoint!
 
-		lastTriggeredCheckpoint = timestamp;
-		final long checkpointID;
-		try {
-			// this must happen outside the locked scope, because it communicates
-			// with external services (in HA mode) and may block for a while.
-			checkpointID = checkpointIdCounter.getAndIncrement();
-		}
-		catch (Throwable t) {
-			int numUnsuccessful = ++numUnsuccessfulCheckpointsTriggers;
-			LOG.warn("Failed to trigger checkpoint (" + numUnsuccessful + " consecutive failed attempts so far)", t);
-			return new CheckpointTriggerResult(CheckpointDeclineReason.EXCEPTION);
-		}
+		// we lock with a special lock to make sure that trigger requests do not overtake each other.
+		// this is not done with the coordinator-wide lock, because the 'checkpointIdCounter'
+		// may issue blocking operations. Using a different lock than the coordinator-wide lock,
+		// we avoid blocking the processing of 'acknowledge/decline' messages during that time.
+		synchronized (triggerLock) {
+			final long checkpointID;
+			try {
+				// this must happen outside the coordinator-wide lock, because it communicates
+				// with external services (in HA mode) and may block for a while.
+				checkpointID = checkpointIdCounter.getAndIncrement();
+			}
+			catch (Throwable t) {
+				int numUnsuccessful = ++numUnsuccessfulCheckpointsTriggers;
+				LOG.warn("Failed to trigger checkpoint (" + numUnsuccessful + " consecutive failed attempts so far)", t);
+				return new CheckpointTriggerResult(CheckpointDeclineReason.EXCEPTION);
+			}
 
-		LOG.info("Triggering checkpoint " + checkpointID + " @ " + timestamp);
+			final PendingCheckpoint checkpoint = props.isSavepoint() ?
+				new PendingSavepoint(job, checkpointID, timestamp, ackTasks, userClassLoader, savepointStore) :
+				new PendingCheckpoint(job, checkpointID, timestamp, ackTasks, userClassLoader);
+
+			// schedule the timer that will clean up the expired checkpoints
+			TimerTask canceller = new TimerTask() {
+				@Override
+				public void run() {
+					try {
+						synchronized (lock) {
+							// only do the work if the checkpoint is not discarded anyways
+							// note that checkpoint completion discards the pending checkpoint object
+							if (!checkpoint.isDiscarded()) {
+								LOG.info("Checkpoint " + checkpointID + " expired before completing.");
+	
+								checkpoint.abortExpired();
+								pendingCheckpoints.remove(checkpointID);
+								rememberRecentCheckpointId(checkpointID);
+	
+								triggerQueuedRequests();
+							}
+						}
+					}
+					catch (Throwable t) {
+						LOG.error("Exception while handling checkpoint timeout", t);
+					}
+				}
+			};
 
-		final PendingCheckpoint checkpoint = props.isSavepoint() ?
-			new PendingSavepoint(job, checkpointID, timestamp, ackTasks, userClassLoader, savepointStore) :
-			new PendingCheckpoint(job, checkpointID, timestamp, ackTasks, userClassLoader);
+			try {
+				// re-acquire the coordinator-wide lock
+				synchronized (lock) {
+					// since we released the lock in the meantime, we need to re-check
+					// that the conditions still hold.
+					if (shutdown) {
+						return new CheckpointTriggerResult(CheckpointDeclineReason.COORDINATOR_SHUTDOWN);
+					}
+					else if (!props.isSavepoint()) {
+						if (triggerRequestQueued) {
+							LOG.warn("Trying to trigger another checkpoint while one was queued already");
+							return new CheckpointTriggerResult(CheckpointDeclineReason.ALREADY_QUEUED);
+						}
 
-		// schedule the timer that will clean up the expired checkpoints
-		TimerTask canceller = new TimerTask() {
-			@Override
-			public void run() {
-				try {
-					synchronized (lock) {
-						// only do the work if the checkpoint is not discarded anyways
-						// note that checkpoint completion discards the pending checkpoint object
-						if (!checkpoint.isDiscarded()) {
-							LOG.info("Checkpoint " + checkpointID + " expired before completing.");
+						if (pendingCheckpoints.size() >= maxConcurrentCheckpointAttempts) {
+							triggerRequestQueued = true;
+							if (currentPeriodicTrigger != null) {
+								currentPeriodicTrigger.cancel();
+								currentPeriodicTrigger = null;
+							}
+							return new CheckpointTriggerResult(CheckpointDeclineReason.TOO_MANY_CONCURRENT_CHECKPOINTS);
+						}
 
-							checkpoint.abortExpired();
-							pendingCheckpoints.remove(checkpointID);
-							rememberRecentCheckpointId(checkpointID);
+						// make sure the minimum interval between checkpoints has passed
+						long nextCheckpointEarliest = lastTriggeredCheckpoint + minPauseBetweenCheckpoints;
+						if (nextCheckpointEarliest < 0) {
+							// overflow
+							nextCheckpointEarliest = Long.MAX_VALUE;
+						}
 
-							triggerQueuedRequests();
+						if (nextCheckpointEarliest > timestamp) {
+							if (currentPeriodicTrigger != null) {
+								currentPeriodicTrigger.cancel();
+								currentPeriodicTrigger = null;
+							}
+							ScheduledTrigger trigger = new ScheduledTrigger();
+							// Reassign the new trigger to the currentPeriodicTrigger
+							currentPeriodicTrigger = trigger;
+							long delay = nextCheckpointEarliest - timestamp;
+							timer.scheduleAtFixedRate(trigger, delay, baseInterval);
+							return new CheckpointTriggerResult(CheckpointDeclineReason.MINIMUM_TIME_BETWEEN_CHECKPOINTS);
 						}
 					}
-				}
-				catch (Throwable t) {
-					LOG.error("Exception while handling checkpoint timeout", t);
-				}
-			}
-		};
 
-		try {
-			// re-acquire the lock
-			synchronized (lock) {
-				// since we released the lock in the meantime, we need to re-check
-				// that the conditions still hold. this is clumsy, but it allows us to
-				// release the lock in the meantime while calls to external services are
-				// blocking progress, and still gives us early checks that skip work
-				// if no checkpoint can happen anyways
-				if (shutdown) {
-					return new CheckpointTriggerResult(CheckpointDeclineReason.COORDINATOR_SHUTDOWN);
+					LOG.info("Triggering checkpoint " + checkpointID + " @ " + timestamp);
+
+					lastTriggeredCheckpoint = Math.max(timestamp, lastTriggeredCheckpoint);
+					pendingCheckpoints.put(checkpointID, checkpoint);
+					timer.schedule(canceller, checkpointTimeout);
 				}
-				else if (!props.isSavepoint()) {
-					if (triggerRequestQueued) {
-						LOG.warn("Trying to trigger another checkpoint while one was queued already");
-						return new CheckpointTriggerResult(CheckpointDeclineReason.ALREADY_QUEUED);
-					}
-					else if (pendingCheckpoints.size() >= maxConcurrentCheckpointAttempts) {
-						triggerRequestQueued = true;
-						if (currentPeriodicTrigger != null) {
-							currentPeriodicTrigger.cancel();
-							currentPeriodicTrigger = null;
-						}
-						return new CheckpointTriggerResult(CheckpointDeclineReason.TOO_MANY_CONCURRENT_CHECKPOINTS);
-					}
+				// end of lock scope
+
+				// send the messages to the tasks that trigger their checkpoint
+				for (int i = 0; i < tasksToTrigger.length; i++) {
+					ExecutionAttemptID id = triggerIDs[i];
+					TriggerCheckpoint message = new TriggerCheckpoint(job, id, checkpointID, timestamp);
+					tasksToTrigger[i].sendMessageToCurrentExecution(message, id);
 				}
 
-				pendingCheckpoints.put(checkpointID, checkpoint);
-				timer.schedule(canceller, checkpointTimeout);
+				numUnsuccessfulCheckpointsTriggers = 0;
+				return new CheckpointTriggerResult(checkpoint);
 			}
-			// end of lock scope
+			catch (Throwable t) {
+				// guard the map against concurrent modifications
+				synchronized (lock) {
+					pendingCheckpoints.remove(checkpointID);
+				}
 
-			// send the messages to the tasks that trigger their checkpoint
-			for (int i = 0; i < tasksToTrigger.length; i++) {
-				ExecutionAttemptID id = triggerIDs[i];
-				TriggerCheckpoint message = new TriggerCheckpoint(job, id, checkpointID, timestamp);
-				tasksToTrigger[i].sendMessageToCurrentExecution(message, id);
-			}
+				int numUnsuccessful = ++numUnsuccessfulCheckpointsTriggers;
+				LOG.warn("Failed to trigger checkpoint (" + numUnsuccessful + " consecutive failed attempts so far)", t);
 
-			numUnsuccessfulCheckpointsTriggers = 0;
-			return new CheckpointTriggerResult(checkpoint);
-		}
-		catch (Throwable t) {
-			// guard the map against concurrent modifications
-			synchronized (lock) {
-				pendingCheckpoints.remove(checkpointID);
+				if (!checkpoint.isDiscarded()) {
+					checkpoint.abortError(new Exception("Failed to trigger checkpoint"));
+				}
+				return new CheckpointTriggerResult(CheckpointDeclineReason.EXCEPTION);
 			}
 
-			int numUnsuccessful = ++numUnsuccessfulCheckpointsTriggers;
-			LOG.warn("Failed to trigger checkpoint (" + numUnsuccessful + " consecutive failed attempts so far)", t);
-
-			if (!checkpoint.isDiscarded()) {
-				checkpoint.abortError(new Exception("Failed to trigger checkpoint"));
-			}
-			return new CheckpointTriggerResult(CheckpointDeclineReason.EXCEPTION);
-		}
+		} // end trigger lock
 	}
 
 	/**

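The canceller in the hunk above is an instance of a common expiry pattern: schedule a
timeout task that, under the coordinator lock, aborts the pending checkpoint only if it
has not already been discarded (i.e. completed) in the meantime. A condensed, hypothetical
rendering of that pattern (names are stand-ins, not Flink's actual classes):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Timer;
    import java.util.TimerTask;

    public class ExpirySketch {
        static class PendingCheckpoint {
            private boolean discarded;
            boolean isDiscarded() { return discarded; }
            void abortExpired() { discarded = true; }
        }

        private final Object lock = new Object();
        private final Map<Long, PendingCheckpoint> pendingCheckpoints = new HashMap<>();
        private final Timer timer = new Timer(true); // daemon timer thread

        void register(final long checkpointID, final PendingCheckpoint checkpoint, long timeoutMillis) {
            synchronized (lock) {
                pendingCheckpoints.put(checkpointID, checkpoint);
                timer.schedule(new TimerTask() {
                    @Override
                    public void run() {
                        synchronized (lock) {
                            // completion discards the pending object first, so this is
                            // a no-op for checkpoints that finished before the timeout
                            if (!checkpoint.isDiscarded()) {
                                checkpoint.abortExpired();
                                pendingCheckpoints.remove(checkpointID);
                            }
                        }
                    }
                }, timeoutMillis);
            }
        }
    }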
http://git-wip-us.apache.org/repos/asf/flink/blob/4da40bcb/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CoordinatorShutdownTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CoordinatorShutdownTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CoordinatorShutdownTest.java
index 91a83b2..c43cf2e 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CoordinatorShutdownTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CoordinatorShutdownTest.java
@@ -18,7 +18,6 @@
 
 package org.apache.flink.runtime.checkpoint;
 
-import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.runtime.akka.ListeningBehaviour;
@@ -29,10 +28,10 @@ import org.apache.flink.runtime.jobgraph.JobGraph;
 import org.apache.flink.runtime.jobgraph.JobVertexID;
 import org.apache.flink.runtime.jobgraph.tasks.JobSnapshottingSettings;
 import org.apache.flink.runtime.jobmanager.Tasks;
-
 import org.apache.flink.runtime.messages.JobManagerMessages;
 import org.apache.flink.runtime.minicluster.LocalFlinkMiniCluster;
 import org.apache.flink.runtime.testingUtils.TestingUtils;
+
 import org.junit.Test;
 
 import scala.concurrent.Await;


[02/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/processes.svg
----------------------------------------------------------------------
diff --git a/docs/fig/processes.svg b/docs/fig/processes.svg
new file mode 100644
index 0000000..fe83a9d
--- /dev/null
+++ b/docs/fig/processes.svg
@@ -0,0 +1,749 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   version="1.1"
+   width="851.09106"
+   height="613.16156"
+   id="svg2">
+  <defs
+     id="defs4" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     transform="translate(50.54889,-225.78139)"
+     id="layer1">
+    <g
+       transform="translate(-391.17389,218.44297)"
+       id="g2989">
+      <path
+         d="m 341.26002,269.37336 0,209.43342 264.44088,0 0,-209.43342 -264.44088,0 z"
+         id="path2991"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 341.26002,269.37336 264.44088,0 0,209.43342 -264.44088,0 z"
+         id="path2993"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="350.33728"
+         y="291.3476"
+         id="text2995"
+         xml:space="preserve"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Flink Program</text>
+      <path
+         d="m 495.68599,390.9599 0,81.43278 105.02616,0 0,-81.43278 -105.02616,0 z"
+         id="path2997"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 495.68599,390.9599 105.02616,0 0,81.43278 -105.02616,0 z"
+         id="path2999"
+         style="fill:none;stroke:#898c92;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="504.73251"
+         y="413.00705"
+         id="text3001"
+         xml:space="preserve"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Client</text>
+      <path
+         d="m 943.285,29.932457 0,251.950263 204.1258,0 0,-251.950263 -204.1258,0 z"
+         id="path3003"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 943.285,29.932457 204.1258,0 0,251.950263 -204.1258,0 z"
+         id="path3005"
+         style="fill:none;stroke:#935f1c;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="952.29791"
+         y="51.877296"
+         id="text3007"
+         xml:space="preserve"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">TaskManager</text>
+      <path
+         d="m 1018.6413,77.306759 0,88.297001 53.9009,0 0,-88.297001 -53.9009,0 z"
+         id="path3009"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 1018.6413,77.306759 53.9009,0 0,88.297001 -53.9009,0 z"
+         id="path3011"
+         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="1029.1053"
+         y="96.706863"
+         id="text3013"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
+      <text
+         x="1031.0559"
+         y="114.71135"
+         id="text3015"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
+      <path
+         d="m 1083.0073,77.306759 0,88.297001 53.9384,0 0,-88.297001 -53.9384,0 z"
+         id="path3017"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 1083.0073,77.306759 53.9384,0 0,88.297001 -53.9384,0 z"
+         id="path3019"
+         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="1093.4702"
+         y="96.706863"
+         id="text3021"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
+      <text
+         x="1095.4207"
+         y="114.71135"
+         id="text3023"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
+      <path
+         d="m 1026.5933,139.90986 c 0,-10.50262 8.5146,-19.01724 19.0172,-19.01724 10.4651,0 18.9797,8.51462 18.9797,19.01724 0,10.4651 -8.5146,18.97972 -18.9797,18.97972 -10.5026,0 -19.0172,-8.51462 -19.0172,-18.97972"
+         id="path3025"
+         style="fill:#be73f1;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 1026.5933,139.90986 c 0,-10.50262 8.5146,-19.01724 19.0172,-19.01724 10.4651,0 18.9797,8.51462 18.9797,19.01724 0,10.4651 -8.5146,18.97972 -18.9797,18.97972 -10.5026,0 -19.0172,-8.51462 -19.0172,-18.97972"
+         id="path3027"
+         style="fill:none;stroke:#724591;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="1032.5719"
+         y="144.36874"
+         id="text3029"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task</text>
+      <path
+         d="m 953.78761,77.306759 0,88.297001 53.75089,0 0,-88.297001 -53.75089,0 z"
+         id="path3031"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 953.78761,77.306759 53.75089,0 0,88.297001 -53.75089,0 z"
+         id="path3033"
+         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="964.16815"
+         y="96.706863"
+         id="text3035"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
+      <text
+         x="966.11865"
+         y="114.71135"
+         id="text3037"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
+      <path
+         d="m 961.58956,139.90986 c 0,-10.50262 8.55213,-19.01724 19.05474,-19.01724 10.54013,0 19.09226,8.51462 19.09226,19.01724 0,10.4651 -8.55213,18.97972 -19.09226,18.97972 -10.50261,0 -19.05474,-8.51462 -19.05474,-18.97972"
+         id="path3039"
+         style="fill:#be73f1;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 961.58956,139.90986 c 0,-10.50262 8.55213,-19.01724 19.05474,-19.01724 10.54013,0 19.09226,8.51462 19.09226,19.01724 0,10.4651 -8.55213,18.97972 -19.09226,18.97972 -10.50261,0 -19.05474,-8.51462 -19.05474,-18.97972"
+         id="path3041"
+         style="fill:none;stroke:#724591;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="967.63464"
+         y="144.36874"
+         id="text3043"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task</text>
+      <path
+         d="m 951.27449,206.714 0,31.7329 188.48441,0 0,-31.7329 -188.48441,0 z"
+         id="path3045"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 951.27449,206.714 188.48441,0 0,31.7329 -188.48441,0 z"
+         id="path3047"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="987.11847"
+         y="227.79962"
+         id="text3049"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Network Manager</text>
+      <path
+         d="m 951.27449,243.28561 0,32.5206 188.48441,0 0,-32.5206 -188.48441,0 z"
+         id="path3051"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 951.27449,243.28561 188.48441,0 0,32.5206 -188.48441,0 z"
+         id="path3053"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="1001.0752"
+         y="264.77148"
+         id="text3055"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Actor System</text>
+      <path
+         d="m 951.27449,170.44246 0,31.73291 188.48441,0 0,-31.73291 -188.48441,0 z"
+         id="path3057"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 951.27449,170.44246 188.48441,0 0,31.73291 -188.48441,0 z"
+         id="path3059"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="967.76367"
+         y="191.52837"
+         id="text3061"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Memory &amp; I/O Manager</text>
+      <path
+         d="m 804.98804,438.48424 0,158.1769 200.52496,0 0,-158.1769 -200.52496,0 z"
+         id="path3063"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 804.98804,438.48424 200.52496,0 0,158.1769 -200.52496,0 z"
+         id="path3065"
+         style="fill:none;stroke:#935f1c;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="814.04663"
+         y="460.43439"
+         id="text3067"
+         xml:space="preserve"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">JobManager</text>
+      <text
+         x="1006.6214"
+         y="17.8258"
+         id="text3069"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(Worker)</text>
+      <text
+         x="782.64081"
+         y="617.72314"
+         id="text3071"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(Master / YARN Application Master)</text>
+      <path
+         d="m 811.2521,517.55394 0,56.45156 89.0847,0 0,-56.45156 -89.0847,0 z"
+         id="path3073"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 811.2521,517.55394 89.0847,0 0,56.45156 -89.0847,0 z"
+         id="path3075"
+         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="816.61139"
+         y="532.03253"
+         id="text3077"
+         xml:space="preserve"
+         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Dataflow Graph</text>
+      <path
+         d="m 820.32936,554.91324 c 0.93774,-2.47561 3.67592,-3.75093 6.15154,-2.85071 2.51312,0.90023 3.78844,3.67592 2.85071,6.15154 -0.90023,2.47561 -3.63841,3.75093 -6.15154,2.85071 -2.47561,-0.90023 -3.75093,-3.67592 -2.85071,-6.15154"
+         id="path3079"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 820.32936,554.91324 c 0.93774,-2.47561 3.67592,-3.75093 6.15154,-2.85071 2.51312,0.90023 3.78844,3.67592 2.85071,6.15154 -0.90023,2.47561 -3.63841,3.75093 -6.15154,2.85071 -2.47561,-0.90023 -3.75093,-3.67592 -2.85071,-6.15154"
+         id="path3081"
+         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 847.97375,550.82472 c 0.4126,-2.58814 2.88822,-4.38859 5.51388,-3.93848 2.62565,0.41261 4.38859,2.88822 3.93848,5.51388 -0.41261,2.58814 -2.88822,4.38859 -5.51388,3.93848 -2.58814,-0.4126 -4.35108,-2.88822 -3.93848,-5.51388"
+         id="path3083"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 847.97375,550.82472 c 0.4126,-2.58814 2.88822,-4.38859 5.51388,-3.93848 2.62565,0.41261 4.38859,2.88822 3.93848,5.51388 -0.41261,2.58814 -2.88822,4.38859 -5.51388,3.93848 -2.58814,-0.4126 -4.35108,-2.88822 -3.93848,-5.51388"
+         id="path3085"
+         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 858.55139,564.47813 c 0.90022,-2.47562 3.67591,-3.75094 6.15153,-2.81321 2.47562,0.90023 3.75093,3.67592 2.85071,6.15154 -0.93773,2.47561 -3.67592,3.75093 -6.18904,2.8132 -2.47562,-0.90023 -3.75094,-3.67592 -2.8132,-6.15153"
+         id="path3087"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 858.55139,564.47813 c 0.90022,-2.47562 3.67591,-3.75094 6.15153,-2.81321 2.47562,0.90023 3.75093,3.67592 2.85071,6.15154 -0.93773,2.47561 -3.67592,3.75093 -6.18904,2.8132 -2.47562,-0.90023 -3.75094,-3.67592 -2.8132,-6.15153"
+         id="path3089"
+         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 882.2948,554.46313 c 0.82521,-2.51313 3.52588,-3.90097 6.07652,-3.07577 2.51312,0.86272 3.86346,3.56339 3.03825,6.07652 -0.8252,2.51312 -3.56339,3.86346 -6.07651,3.03826 -2.51313,-0.82521 -3.86346,-3.52588 -3.03826,-6.03901"
+         id="path3091"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 882.2948,554.46313 c 0.82521,-2.51313 3.52588,-3.90097 6.07652,-3.07577 2.51312,0.86272 3.86346,3.56339 3.03825,6.07652 -0.8252,2.51312 -3.56339,3.86346 -6.07651,3.03826 -2.51313,-0.82521 -3.86346,-3.52588 -3.03826,-6.03901"
+         id="path3093"
+         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 882.06975,538.52166 c 0.86271,-2.51313 3.6384,-3.82595 6.11402,-2.92573 2.51312,0.90022 3.78844,3.63841 2.92573,6.11402 -0.90023,2.51313 -3.63841,3.82596 -6.11403,2.92573 -2.51312,-0.90022 -3.82595,-3.6384 -2.92572,-6.11402"
+         id="path3095"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 882.06975,538.52166 c 0.86271,-2.51313 3.6384,-3.82595 6.11402,-2.92573 2.51312,0.90022 3.78844,3.63841 2.92573,6.11402 -0.90023,2.51313 -3.63841,3.82596 -6.11403,2.92573 -2.51312,-0.90022 -3.82595,-3.6384 -2.92572,-6.11402"
+         id="path3097"
+         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 829.33161,555.17581 15.07875,-2.96324 -0.22505,-1.23781 -15.07876,2.96324 0.22506,1.23781 z m 14.21604,-0.90023 4.4261,-3.41335 -5.36383,-1.50037 0.93773,4.91372 z"
+         id="path3099"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 829.48164,557.61392 25.54387,5.47636 -0.26257,1.2003 -25.54386,-5.47637 0.26256,-1.20029 z m 24.71866,3.37584 4.35109,3.48837 -5.40135,1.38784 1.05026,-4.87621 z"
+         id="path3101"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 867.59114,564.66567 11.6654,-4.65116 -0.45011,-1.16279 -11.70291,4.65116 0.48762,1.16279 z m 11.21529,-2.43811 3.71343,-4.16353 -5.58889,-0.48762 1.87546,4.65115 z"
+         id="path3103"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 857.53863,551.79997 21.11777,1.76294 -0.075,1.2378 -21.11776,-1.76293 0.075,-1.23781 z m 20.02999,-0.22506 4.8012,2.92573 -5.2138,2.06301 0.4126,-4.98874 z"
+         id="path3105"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 856.71343,549.36186 22.01798,-5.70142 -0.30007,-1.2003 -22.01799,5.70142 0.30008,1.2003 z m 21.2678,-3.56339 4.23855,-3.67591 -5.47636,-1.16279 1.23781,4.8387 z"
+         id="path3107"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 811.2521,479.59448 0,32.5206 188.48446,0 0,-32.5206 -188.48446,0 z"
+         id="path3109"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 811.2521,479.59448 188.48446,0 0,32.5206 -188.48446,0 z"
+         id="path3111"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="860.96033"
+         y="501.12317"
+         id="text3113"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Actor System</text>
+      <path
+         d="m 518.8105,428.46924 0,38.76591 76.89415,0 0,-38.76591 -76.89415,0 z"
+         id="path3115"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 518.8105,428.46924 76.89415,0 0,38.76591 -76.89415,0 z"
+         id="path3117"
+         style="fill:none;stroke:#6e7277;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="539.96814"
+         y="445.22284"
+         id="text3119"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Actor</text>
+      <text
+         x="532.46631"
+         y="460.97678"
+         id="text3121"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">System</text>
+      <path
+         d="m 986.57078,487.17137 2.43811,-2.88822 0.97524,0.8252 -2.43811,2.85071 -0.97524,-0.78769 z m 3.26331,-3.82596 2.4006,-2.85071 0.97524,0.7877 -2.4381,2.88822 -0.93774,-0.82521 z m 3.22581,-3.82595 2.4381,-2.85071 0.93774,0.82521 -2.43811,2.85071 -0.93773,-0.82521 z m 3.2258,-3.78844 2.43811,-2.88822 0.93773,0.8252 -2.4006,2.85071 -0.97524,-0.78769 z m 3.26331,-3.82596 1.23779,-1.50037 1.1628,-1.35034 0.9753,0.7877 -1.1628,1.35034 -1.2754,1.53788 -0.93769,-0.82521 z m 3.22579,-3.82595 2.4006,-2.85071 0.9753,0.7877 -2.4382,2.88822 -0.9377,-0.82521 z m 3.2258,-3.82595 1.838,-2.21305 0,0.0375 0.5626,-0.67517 0.9753,0.78769 -0.5627,0.67517 -1.8754,2.21305 -0.9378,-0.8252 z m 3.2258,-3.82596 2.4006,-2.85071 0.9378,0.7877 -2.4006,2.88822 -0.9378,-0.82521 z m 3.1883,-3.82595 2.4006,-2.85071 0,0 0,0 0.9753,0.7877 0,0 -2.4006,2.88822 -0.9753,-0.82521 z m 3.2258,-3.82595 2.4006,-2.88822 0.9378,0.8252 -2.4006,2.85071 -0.9378,-0.78769 z m 3.1883,-3.82596 2.4006,-2.88821 0.9753,0.78769
  -2.4006,2.88822 -0.9753,-0.7877 z m 3.1883,-3.86346 2.4006,-2.88822 0.9753,0.7877 -2.4006,2.88822 -0.9753,-0.7877 z m 3.1883,-3.86346 2.4006,-2.88822 0.9377,0.7877 -2.3631,2.88822 -0.9752,-0.7877 z m 3.1883,-3.86346 0.075,-0.075 0,0 2.2881,-2.8132 0.9752,0.78769 -2.3255,2.8132 -0.038,0.11253 -0.9752,-0.8252 z m 3.1508,-3.86347 2.3631,-2.92572 0.9752,0.78769 -2.3631,2.92573 -0.9752,-0.7877 z m 3.1508,-3.90097 0.3751,-0.45011 0,0 1.9504,-2.43811 0.9753,0.75019 -1.9505,2.47562 -0.3751,0.45011 -0.9752,-0.7877 z m 3.1132,-3.86346 2.3631,-2.96324 0.9753,0.7877 -2.3631,2.92573 -0.9753,-0.75019 z m 3.1508,-3.93848 0.5252,-0.67517 0,0 1.7629,-2.25056 1.0128,0.7877 -1.763,2.25056 -0.5626,0.67517 -0.9753,-0.7877 z m 3.0758,-3.90097 2.3256,-2.96324 0.9752,0.7877 -2.3256,2.92572 -0.9752,-0.75018 z m 3.0758,-3.93848 0.6376,-0.7877 0,0 1.6504,-2.17554 1.0128,0.75018 -1.6879,2.17555 -0.6002,0.8252 -1.0127,-0.78769 z m 3.0757,-3.93849 0.6002,-0.78769 0,0 1.6879,-2.21305 0.9752,0.78769 -1.6879,2.175
 55 -0.6001,0.78769 -0.9753,-0.75019 z m 3.0383,-3.97599 0.5251,-0.71267 0,0 1.7254,-2.28807 1.0128,0.75018 -1.7254,2.28807 -0.5627,0.71268 -0.9752,-0.75019 z m 3.0007,-3.97599 0.4501,-0.60015 0,0 1.8005,-2.4381 1.0127,0.75018 -1.8004,2.43811 -0.4501,0.60015 -1.0128,-0.75019 z m 3.0008,-4.0135 0.3001,-0.45011 0,0 1.9129,-2.58814 1.0128,0.75018 -1.913,2.58815 -0.3376,0.45011 -0.9752,-0.75019 z m 2.9257,-4.05101 0.1876,-0.18754 -0.038,0 2.0631,-2.85071 1.0127,0.75018 -2.063,2.81321 -0.15,0.22505 -1.0128,-0.75019 z m 2.9257,-4.051 2.1756,-3.03826 1.0127,0.71268 -2.1755,3.03825 -1.0128,-0.71267 z m 2.8883,-4.08852 2.138,-3.07577 1.0127,0.71268 -2.138,3.07576 -1.0127,-0.71267 z m 2.8507,-4.08852 1.9129,-2.85071 0,0 0.1876,-0.26257 1.0127,0.71268 -0.15,0.22506 -1.9505,2.88822 -1.0127,-0.71268 z m 2.7757,-4.16354 1.5754,-2.32558 -0.038,0 0.5252,-0.78769 1.0502,0.67516 -0.5251,0.82521 -1.5379,2.32558 -1.0502,-0.71268 z m 2.7381,-4.16354 1.1253,-1.72543 0,0 0.9002,-1.42535 1.0503,0.67517 -0.9
 002,1.42535 -1.1253,1.72543 -1.0503,-0.67517 z m 2.7007,-4.20104 0.6752,-1.08777 0,0.0375 1.2753,-2.13804 1.0878,0.67517 -1.3129,2.10052 -0.6751,1.08778 -1.0503,-0.67517 z m 2.6257,-4.23856 0.1875,-0.33758 0,0 1.7254,-2.88822 1.0878,0.63766 -1.7629,2.92573 -0.1876,0.30007 -1.0502,-0.63766 z m 2.5506,-4.31357 1.6129,-2.85071 0,0 0.2251,-0.3751 1.0877,0.60015 -0.225,0.3751 -1.6129,2.88822 -1.0878,-0.63766 z m 2.4381,-4.31358 1.0503,-1.87546 0,0 0.7502,-1.42536 1.0877,0.60015 -0.7502,1.42535 -1.0502,1.87547 -1.0878,-0.60015 z m 2.4006,-4.38859 0.4126,-0.82521 0,0.0375 1.3128,-2.55063 1.0878,0.56264 -1.3128,2.55063 -0.4126,0.7877 -1.0878,-0.56264 z m 2.2881,-4.4261 1.3878,-2.85071 0,0 0.2251,-0.52513 1.1628,0.52513 -0.2626,0.52513 -1.4254,2.85071 -1.0877,-0.52513 z m 2.1755,-4.50113 0.7127,-1.53788 0,0 0.8627,-1.83796 1.1253,0.48763 -0.8627,1.87546 -0.7127,1.53789 -1.1253,-0.52514 z m 2.063,-4.53863 0.075,-0.15003 0,0 1.3503,-3.11328 0,0.0375 0.075,-0.18755 1.1628,0.48763 -0.075,0.18754
  -1.3504,3.11328 -0.075,0.15004 -1.1628,-0.52514 z m 1.988,-4.57614 0.7502,-1.72543 -0.037,0 0.7127,-1.72543 1.1628,0.45012 -0.7127,1.72543 -0.7127,1.76294 -1.1628,-0.48763 z m 1.8755,-4.61365 0.075,-0.15003 0,0 1.1253,-3.00075 0,0 0.15,-0.33758 1.1628,0.4126 -0.1125,0.37509 -1.1628,3.00075 -0.075,0.15004 -1.1628,-0.45012 z m 1.7629,-4.65115 0.5252,-1.46287 0,0 0.7126,-2.06301 1.2003,0.4126 -0.7501,2.06301 -0.5252,1.46287 -1.1628,-0.4126 z m 1.6504,-4.72618 0.8628,-2.62566 0,0.0375 0.3,-0.93773 1.1628,0.33758 -0.2625,0.97525 -0.9003,2.62565 -1.1628,-0.4126 z m 1.5004,-4.72618 1.1253,-3.6009 1.1628,0.3751 -1.0878,3.56339 -1.2003,-0.33759 z m 1.4254,-4.80119 1.0127,-3.6009 1.2003,0.33758 -0.9752,3.6009 -1.2378,-0.33758 z m 1.3503,-4.8012 0.9002,-3.63841 1.2003,0.30008 -0.9002,3.6384 -1.2003,-0.30007 z m 1.2003,-4.83871 0.1125,-0.4126 0,0.0375 0.7502,-3.26331 1.2003,0.26256 -0.7502,3.26332 -0.075,0.4126 -1.2378,-0.30008 z m 1.1253,-4.87621 0.225,-1.01275 0,0 0.5627,-2.62566 1.2003,0.26
 257 -0.5251,2.62565 -0.2626,1.05026 -1.2003,-0.30007 z m 1.0503,-4.87622 0.3,-1.57539 0,0 0.4126,-2.10052 1.2378,0.26256 -0.4126,2.06302 -0.3375,1.6129 -1.2003,-0.26257 z m 0.9377,-4.87621 0.4126,-2.10052 0,0 0.2626,-1.61291 1.2378,0.22506 -0.2626,1.6129 -0.4126,2.10053 -1.2378,-0.22506 z m 0.9002,-4.91372 0.4876,-2.55064 0,0 0.1876,-1.16279 1.2378,0.22506 -0.2251,1.16279 -0.4501,2.55063 -1.2378,-0.22505 z m 0.8627,-4.91373 0.3751,-2.13803 1.2378,0.22505 -0.3751,2.10053 -1.2378,-0.18755 z m -2.9257,-1.42535 4.9512,-6.7892 2.4381,8.027 -7.3893,-1.2378 z"
+         id="path3123"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 1010.1642,268.942 -1.0503,3.6009 -1.2003,-0.3751 1.0503,-3.60089 1.2003,0.37509 z m -1.4254,4.8012 -1.0503,3.60089 -1.2003,-0.37509 1.0503,-3.6009 1.2003,0.3751 z m -1.4254,4.80119 -1.0502,3.56339 -1.2003,-0.33758 1.0502,-3.6009 1.2003,0.37509 z m -1.4253,4.76369 -0.4126,1.46286 -0.6377,2.13804 -1.2003,-0.33759 0.6377,-2.17554 0.4126,-1.42535 1.2003,0.33758 z m -1.3879,4.8012 -1.0877,3.60089 -1.1628,-0.33758 1.0502,-3.6009 1.2003,0.33759 z m -1.4253,4.80119 -1.0503,3.6009 -1.2003,-0.33759 1.0503,-3.60089 1.2003,0.33758 z m -1.3879,4.8012 -0.8627,2.8132 0,0 -0.225,0.78769 -1.20032,-0.33758 0.22505,-0.7877 0.86267,-2.8132 1.2003,0.33759 z m -1.4253,4.80119 -1.05027,3.6009 -1.2003,-0.33758 1.05026,-3.6009 1.20031,0.33758 z m -1.38786,4.8012 -0.30007,1.05026 0,0 -0.71268,2.55064 -1.2003,-0.33759 0.71268,-2.55063 0.30008,-1.05026 1.20029,0.33758 z m -1.35033,4.8012 -1.05026,3.6384 -1.2003,-0.37509 1.05026,-3.6009 1.2003,0.33759 z m -1.38785,4.8387 -1.01275,3.6009 -1.2003,-0
 .33759 1.01275,-3.60089 1.2003,0.33758 z m -1.31282,4.8012 -0.60015,2.10052 0,0 -0.41261,1.50037 -1.2003,-0.33758 0.41261,-1.46286 0.56264,-2.13804 1.23781,0.33759 z m -1.35034,4.80119 -0.97524,3.63841 -1.2003,-0.33758 0.97524,-3.6009 1.2003,0.30007 z m -1.31283,4.83871 -0.0375,0.15004 0,0 -0.93773,3.48837 -1.2003,-0.33759 0.93773,-3.48837 0.0375,-0.15004 1.2003,0.33759 z m -1.27532,4.8387 -0.78769,2.92573 0,-0.0375 -0.18755,0.71268 -1.2003,-0.30007 0.18755,-0.71268 0.7877,-2.92573 1.20029,0.33758 z m -1.27531,4.83871 -0.93774,3.6009 -1.2003,-0.30008 0.93774,-3.6384 1.2003,0.33758 z m -1.23781,4.83871 -0.18755,0.75018 0,-0.0375 -0.71268,2.92573 -1.2378,-0.30007 0.71267,-2.92573 0.18755,-0.75019 1.23781,0.33759 z m -1.2003,4.8387 -0.82521,3.33833 0,0 -0.075,0.30008 -1.2378,-0.30008 0.075,-0.30007 0.8252,-3.33833 1.23781,0.30007 z m -1.2003,4.83871 -0.86271,3.67591 -1.2003,-0.30007 0.86271,-3.63841 1.2003,0.26257 z m -1.16279,4.87621 -0.22506,0.93773 0,0 -0.60015,2.70068 -1.20029,-0.2
 6257 0.60015,-2.73818 0.22505,-0.93773 1.2003,0.30007 z m -1.08777,4.87622 -0.75019,3.26331 0,0 -0.075,0.37509 -1.2003,-0.26256 0.075,-0.3751 0.71268,-3.26331 1.23781,0.26257 z m -1.08777,4.87621 -0.75019,3.67592 -1.23781,-0.26257 0.7877,-3.67592 1.2003,0.26257 z m -1.01275,4.87622 -0.15004,0.63765 0,-0.0375 -0.60015,3.07577 -1.2003,-0.22506 0.60015,-3.07576 0.11253,-0.60015 1.23781,0.22506 z m -0.97525,4.91372 -0.52513,2.70067 0,-0.0375 -0.18754,1.01276 -1.23781,-0.22506 0.18754,-1.01275 0.52513,-2.66317 1.23781,0.22506 z m -0.93773,4.91372 -0.63766,3.67592 -1.23781,-0.22506 0.63766,-3.67591 1.23781,0.22505 z m -0.86272,4.91373 -0.60014,3.67591 -1.23781,-0.18754 0.60015,-3.71343 1.2378,0.22506 z m -0.78769,4.91372 -0.26257,1.46287 0,0 -0.30007,2.25056 -1.23781,-0.18755 0.30007,-2.25056 0.26257,-1.46287 1.23781,0.18755 z m -0.75019,4.95124 -0.45011,3.07576 0,0 -0.075,0.63766 -1.23781,-0.15004 0.075,-0.63766 0.45011,-3.11327 1.23781,0.18755 z m -0.67517,4.95123 -0.45011,3.71342 -1.23
 781,-0.15003 0.45011,-3.71343 1.23781,0.15004 z m -0.60015,4.95123 -0.4126,3.71343 -1.23781,-0.11253 0.41261,-3.75094 1.2378,0.15004 z m -0.52513,4.95124 -0.11253,1.01275 0,0 -0.26256,2.73818 -1.23781,-0.11253 0.26257,-2.77569 0.075,-0.97524 1.27531,0.11253 z m -0.48762,4.98874 -0.22505,2.25056 0,-0.0375 -0.11253,1.50037 -1.23781,-0.075 0.11253,-1.50037 0.22505,-2.25056 1.23781,0.11253 z m -0.4126,4.95123 -0.26257,3.37584 0,0 -0.0375,0.3751 -1.2378,-0.075 0.0375,-0.3751 0.22506,-3.37584 1.27532,0.075 z m -0.37509,4.98874 -0.22506,3.75094 -1.23781,-0.075 0.22506,-3.75093 1.23781,0.075 z m -0.30008,4.98875 -0.18755,3.75093 -1.2378,-0.075 0.18754,-3.75093 1.23781,0.075 z m -0.22506,4.98874 -0.0375,0.4126 0,0 -0.11253,3.33833 -1.23781,-0.0375 0.11253,-3.33834 0,-0.4126 1.27531,0.0375 z m -0.18754,4.98874 -0.075,1.31283 0,0 -0.0375,2.43811 -1.27532,-0.0375 0.075,-2.43811 0.0375,-1.31283 1.27532,0.0375 z m -0.15004,4.98875 -0.075,2.13803 0,0 0,1.6129 -1.23781,0 0,-1.65041 0.075,-2.13803 1
 .23781,0.0375 z m -0.11253,4.98874 -0.0375,3.75093 -1.23781,0 0.0375,-3.75093 1.23781,0 z m -0.0375,5.02625 -0.0375,3.6009 0,0 0,0.15004 -1.27531,0 0,-0.15004 0.0375,-3.63841 1.27531,0.0375 z m -0.0375,4.98874 0,3.75094 -1.27531,0 0,-3.75094 1.27531,0 z m 0,4.98875 0,3.75093 -1.2378,0 -0.0375,-3.75093 1.27531,0 z m 0,4.98874 0.0375,3.75093 -1.27531,0.0375 0,-3.75093 1.2378,-0.0375 z m 3.15079,3.75093 -3.71343,7.50187 -3.78844,-7.46436 7.50187,-0.0375 z"
+         id="path3125"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="1098.799"
+         y="347.50183"
+         id="text3127"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Deploy/Stop/</text>
+      <text
+         x="1098.799"
+         y="364.00595"
+         id="text3129"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Cancel Tasks</text>
+      <text
+         x="1077.9341"
+         y="393.53741"
+         id="text3131"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Trigger</text>
+      <text
+         x="1060.6798"
+         y="410.04153"
+         id="text3133"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Checkpoints</text>
+      <text
+         x="906.68597"
+         y="312.69434"
+         id="text3135"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task Status</text>
+      <text
+         x="905.30804"
+         y="341.08121"
+         id="text3137"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Heartbeats</text>
+      <text
+         x="912.66595"
+         y="368.71213"
+         id="text3139"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Statistics</text>
+      <text
+         x="1045.498"
+         y="439.573"
+         id="text3141"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">\u2026</text>
+      <text
+         x="936.66449"
+         y="397.45572"
+         id="text3143"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">\u2026</text>
+      <path
+         d="m 661.96491,29.932457 0,251.950263 204.27589,0 0,-251.950263 -204.27589,0 z"
+         id="path3145"
+         style="fill:#f5a030;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 661.96491,29.932457 204.27589,0 0,251.950263 -204.27589,0 z"
+         id="path3147"
+         style="fill:none;stroke:#935f1c;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="671.09534"
+         y="51.877296"
+         id="text3149"
+         xml:space="preserve"
+         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">TaskManager</text>
+      <path
+         d="m 737.47122,77.306759 0,88.297001 53.90093,0 0,-88.297001 -53.90093,0 z"
+         id="path3151"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 737.47122,77.306759 53.90093,0 0,88.297001 -53.90093,0 z"
+         id="path3153"
+         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="747.90277"
+         y="96.706863"
+         id="text3155"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
+      <text
+         x="749.85327"
+         y="114.71135"
+         id="text3157"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
+      <path
+         d="m 801.87477,77.306759 0,88.297001 53.75089,0 0,-88.297001 -53.75089,0 z"
+         id="path3159"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 801.87477,77.306759 53.75089,0 0,88.297001 -53.75089,0 z"
+         id="path3161"
+         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="812.26758"
+         y="96.706863"
+         id="text3163"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
+      <text
+         x="814.21808"
+         y="114.71135"
+         id="text3165"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
+      <path
+         d="m 745.4232,139.90986 c 0,-10.50262 8.51462,-19.01724 19.01724,-19.01724 10.46511,0 18.97973,8.51462 18.97973,19.01724 0,10.4651 -8.51462,18.97972 -18.97973,18.97972 -10.50262,0 -19.01724,-8.51462 -19.01724,-18.97972"
+         id="path3167"
+         style="fill:#be73f1;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 745.4232,139.90986 c 0,-10.50262 8.51462,-19.01724 19.01724,-19.01724 10.46511,0 18.97973,8.51462 18.97973,19.01724 0,10.4651 -8.51462,18.97972 -18.97973,18.97972 -10.50262,0 -19.01724,-8.51462 -19.01724,-18.97972"
+         id="path3169"
+         style="fill:none;stroke:#724591;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="751.36926"
+         y="144.36874"
+         id="text3171"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task</text>
+      <path
+         d="m 672.46753,77.306759 0,88.297001 53.90093,0 0,-88.297001 -53.90093,0 z"
+         id="path3173"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 672.46753,77.306759 53.90093,0 0,88.297001 -53.90093,0 z"
+         id="path3175"
+         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="682.96558"
+         y="96.706863"
+         id="text3177"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task </text>
+      <text
+         x="684.91608"
+         y="114.71135"
+         id="text3179"
+         xml:space="preserve"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Slot</text>
+      <path
+         d="m 680.41951,139.90986 c 0,-10.50262 8.51462,-19.01724 18.97973,-19.01724 10.50261,0 19.01724,8.51462 19.01724,19.01724 0,10.4651 -8.51463,18.97972 -19.01724,18.97972 -10.46511,0 -18.97973,-8.51462 -18.97973,-18.97972"
+         id="path3181"
+         style="fill:#be73f1;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 680.41951,139.90986 c 0,-10.50262 8.51462,-19.01724 18.97973,-19.01724 10.50261,0 19.01724,8.51462 19.01724,19.01724 0,10.4651 -8.51463,18.97972 -19.01724,18.97972 -10.46511,0 -18.97973,-8.51462 -18.97973,-18.97972"
+         id="path3183"
+         style="fill:none;stroke:#724591;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="686.43207"
+         y="144.36874"
+         id="text3185"
+         xml:space="preserve"
+         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task</text>
+      <path
+         d="m 670.10444,206.714 0,31.7329 188.48446,0 0,-31.7329 -188.48446,0 z"
+         id="path3187"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 670.10444,206.714 188.48446,0 0,31.7329 -188.48446,0 z"
+         id="path3189"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="705.91583"
+         y="227.79962"
+         id="text3191"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Network Manager</text>
+      <path
+         d="m 670.10444,243.28561 0,32.5206 188.48446,0 0,-32.5206 -188.48446,0 z"
+         id="path3193"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 670.10444,243.28561 188.48446,0 0,32.5206 -188.48446,0 z"
+         id="path3195"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="719.87256"
+         y="264.77148"
+         id="text3197"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Actor System</text>
+      <path
+         d="m 670.10444,170.44246 0,31.73291 188.48446,0 0,-31.73291 -188.48446,0 z"
+         id="path3199"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 670.10444,170.44246 188.48446,0 0,31.73291 -188.48446,0 z"
+         id="path3201"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="686.56104"
+         y="191.52837"
+         id="text3203"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Memory &amp; I/O Manager</text>
+      <text
+         x="725.4187"
+         y="17.8258"
+         id="text3205"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(Worker)</text>
+      <path
+         d="m 844.22282,223.29313 24.23103,-24.23104 0,12.11552 69.8424,0 0,-12.11552 24.23104,24.23104 -24.23104,24.19353 0,-12.07801 -69.8424,0 0,12.07801 z"
+         id="path3207"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 844.22282,223.29313 24.23103,-24.23104 0,12.11552 69.8424,0 0,-12.11552 24.23104,24.23104 -24.23104,24.19353 0,-12.07801 -69.8424,0 0,12.07801 z"
+         id="path3209"
+         style="fill:none;stroke:#000000;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="857.99353"
+         y="228.51564"
+         id="text3211"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Data Streams</text>
+      <path
+         d="m 961.02692,479.89455 -1.72543,-3.33833 1.12528,-0.60015 1.72543,3.33833 -1.12528,0.60015 z m -2.28807,-4.46361 -1.72543,-3.33833 1.12528,-0.56264 1.72543,3.33833 -1.12528,0.56264 z m -2.28807,-4.4261 -1.72543,-3.33833 1.12528,-0.60015 1.72543,3.33833 -1.12528,0.60015 z m -2.28807,-4.46361 -1.72543,-3.33833 1.12528,-0.56264 1.72543,3.33833 -1.12528,0.56264 z m -2.28807,-4.46361 -1.12528,-2.17555 -0.60015,-1.12528 1.12528,-0.60015 0.60015,1.16279 1.08777,2.17554 -1.08777,0.56265 z m -2.28807,-4.42611 -1.72543,-3.33833 1.08777,-0.56264 1.72543,3.33833 -1.08777,0.56264 z m -2.32558,-4.4261 -1.65041,-3.2258 -0.075,-0.11253 1.12528,-0.60015 0.075,0.15004 1.65041,3.18829 -1.12528,0.60015 z m -2.28807,-4.46361 -1.72543,-3.30082 1.12528,-0.60015 1.72543,3.33833 -1.12528,0.56264 z m -2.28807,-4.4261 -1.72543,-3.33834 1.08777,-0.56264 1.72543,3.33833 -1.08777,0.56265 z m -2.32558,-4.42611 -1.72543,-3.33833 1.08777,-0.56264 1.76294,3.30082 -1.12528,0.60015 z m -2.32558,-4.4261 -1.7
 2543,-3.33833 1.08777,-0.56264 1.76294,3.30082 -1.12528,0.60015 z m -2.32558,-4.46361 -0.30008,-0.56264 -1.42535,-2.73818 1.08777,-0.60015 1.46286,2.73818 0.30008,0.60015 -1.12528,0.56264 z m -2.32558,-4.3886 -1.72543,-3.33833 1.08777,-0.56264 1.76294,3.30083 -1.12528,0.60014 z m -2.32558,-4.4261 -0.71268,-1.35033 0,0 -1.05026,-1.988 1.12528,-0.56264 1.05026,1.95049 0.71268,1.35033 -1.12528,0.60015 z m -2.32558,-4.4261 -1.76294,-3.30082 1.08777,-0.60015 1.76294,3.30082 -1.08777,0.60015 z m -2.36309,-4.4261 -1.05026,-1.95049 0,0 -0.71268,-1.35034 1.08777,-0.60015 0.75019,1.35034 1.01275,1.988 -1.08777,0.56264 z m -2.36309,-4.3886 -1.76294,-3.30082 1.08777,-0.60015 1.76294,3.30082 -1.08777,0.60015 z m -2.36309,-4.4261 -1.27531,-2.4006 0,0 -0.52513,-0.90022 1.12528,-0.60015 0.48762,0.93773 1.27532,2.36309 -1.08778,0.60015 z m -2.36308,-4.38859 -1.38785,-2.55064 -0.4126,-0.75018 1.08777,-0.60015 0.4126,0.75018 1.38785,2.55064 -1.08777,0.60015 z m -2.4006,-4.3886 -1.46287,-2.66316 -0.337
 58,-0.63766 1.08777,-0.60015 0.33758,0.63766 1.46287,2.66316 -1.08777,0.60015 z m -2.43811,-4.38859 -1.46286,-2.70067 0,0 -0.33759,-0.56264 1.08777,-0.60015 0.33759,0.56264 1.50037,2.70067 -1.12528,0.60015 z m -2.4006,-4.35108 -1.50037,-2.70068 -0.33759,-0.60015 1.08778,-0.60015 0.33758,0.56265 1.50037,2.70067 -1.08777,0.63766 z m -2.43811,-4.3886 -1.50037,-2.62565 0,0 -0.33758,-0.63766 1.08777,-0.60015 0.33758,0.60015 1.50038,2.66316 -1.08778,0.60015 z m -2.47561,-4.35108 -1.42536,-2.51313 0,0 -0.4126,-0.75018 1.08777,-0.60015 0.4126,0.71267 1.42536,2.55064 -1.08777,0.60015 z m -2.43811,-4.35108 -1.35034,-2.32558 0,0 -0.52513,-0.90023 1.05026,-0.63766 0.52514,0.90023 1.35033,2.36309 -1.05026,0.60015 z m -2.51313,-4.31358 -1.2378,-2.10052 0,0 -0.63766,-1.12528 1.05026,-0.63766 0.67517,1.12528 1.23781,2.10052 -1.08778,0.63766 z m -2.51312,-4.31357 -1.05026,-1.80045 0,0 -0.86272,-1.42536 1.08777,-0.63766 0.82521,1.42536 1.08777,1.80045 -1.08777,0.63766 z m -2.55064,-4.31358 -0.86271,-
 1.42535 0,0 -1.05026,-1.76294 1.05026,-0.67517 1.08777,1.80045 0.86271,1.42535 -1.08777,0.63766 z m -2.58814,-4.27606 -0.60015,-0.97525 0,0 -1.35034,-2.21305 1.08777,-0.67517 1.35034,2.25056 0.60015,0.97525 -1.08777,0.63766 z m -2.58815,-4.27607 -0.30007,-0.45011 0,0 -1.68792,-2.73818 1.08777,-0.63766 1.68792,2.70067 0.26256,0.48762 -1.05026,0.63766 z m -2.62565,-4.23855 -2.02551,-3.15079 1.05027,-0.67517 2.0255,3.15079 -1.05026,0.67517 z m -2.70067,-4.20105 -1.72543,-2.70067 0,0 -0.30008,-0.45012 1.05026,-0.67516 0.30008,0.45011 1.72543,2.70067 -1.05026,0.67517 z m -2.70068,-4.20105 -1.2378,-1.87547 0,0 -0.82521,-1.2378 1.05026,-0.71268 0.82521,1.27532 1.23781,1.87546 -1.05027,0.67517 z m -2.77569,-4.16354 -0.63766,-0.97524 0,0.0375 -1.46286,-2.13803 1.05026,-0.71268 1.46286,2.13803 0.63766,0.97525 -1.05026,0.67516 z m -2.8132,-4.12602 -2.13803,-3.03826 0,0 0,0 1.01275,-0.75019 0.0375,0.0375 2.13803,3.03826 -1.05026,0.71268 z m -2.85071,-4.08852 -1.38784,-1.91298 0,0 -0.82521,-1.08
 777 1.01275,-0.75019 0.82521,1.12528 1.38784,1.91298 -1.01275,0.71268 z m -2.96324,-4.0135 -0.52513,-0.75019 0,0 -1.68792,-2.25056 1.01275,-0.75019 1.68792,2.25056 0.52514,0.75019 -1.01276,0.75019 z m -2.96324,-4.0135 -1.72543,-2.21305 0.0375,0 -0.60015,-0.75019 0.97525,-0.7877 0.60015,0.7877 1.68792,2.21305 -0.97525,0.75019 z m -3.07576,-3.93848 -0.67517,-0.90023 0,0 -1.65041,-2.06301 1.01275,-0.75019 1.6129,2.06302 0.71268,0.86271 -1.01275,0.7877 z m -3.07577,-3.90097 -1.6129,-1.988 0,0 -0.7877,-0.90022 0.97525,-0.82521 0.75018,0.93773 1.61291,1.988 -0.93774,0.7877 z m -3.18829,-3.86347 -0.45011,-0.48762 0.0375,0 -1.988,-2.32558 0,0 -0.0375,-0.0375 0.93773,-0.8252 0.0375,0.0375 1.988,2.32558 0.4126,0.52513 -0.93773,0.78769 z m -3.26332,-3.78844 -1.08777,-1.27532 0,0 -1.38784,-1.53788 0.93773,-0.82521 1.38785,1.53789 1.08777,1.27531 -0.93774,0.82521 z m -3.30082,-3.75094 -1.68792,-1.83795 0,0 -0.8252,-0.90023 0.90022,-0.86271 0.86272,0.90022 1.68792,1.87547 -0.93774,0.8252 z m -3.3
 7584,-3.67591 -0.22505,-0.22506 0,0 -2.36309,-2.47561 0.90022,-0.86272 2.36309,2.47562 0.26257,0.26256 -0.93774,0.82521 z m -3.45086,-3.6009 -0.60015,-0.63766 0.0375,0 -2.06301,-2.0255 0.90022,-0.90023 2.02551,2.06302 0.60015,0.63766 -0.90023,0.86271 z m -3.48837,-3.56339 -0.86271,-0.86271 0,0 -1.80045,-1.76294 0.86271,-0.90022 1.80045,1.76294 0.90023,0.90022 -0.90023,0.86271 z m -3.56338,-3.48836 -1.05027,-1.01276 0,0 -1.65041,-1.57539 0.86272,-0.90022 1.68792,1.57539 1.01275,1.01275 -0.86271,0.90023 z m -3.6009,-3.45086 -1.16279,-1.08778 0.0375,0 -1.6129,-1.46286 0.86271,-0.93773 1.57539,1.50037 1.16279,1.08777 -0.86271,0.90023 z m -3.63841,-3.41335 -1.2003,-1.08778 0,0 -1.57539,-1.46286 0.86272,-0.90022 1.57539,1.42535 1.16279,1.08777 -0.82521,0.93774 z m -3.67591,-3.37585 -1.16279,-1.05026 0,0 -1.65042,-1.46286 0.86272,-0.93774 1.6129,1.46287 1.16279,1.05026 -0.8252,0.93773 z m -3.71343,-3.33833 -0.11253,-0.075 0.82521,-0.93773 0.11253,0.075 -0.82521,0.93773 z m -1.27532,3.07577
  -3.07576,-7.80194 8.10202,2.21305 -5.02626,5.58889 z"
+         id="path3213"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 783.12009,269.01702 2.36309,2.92573 -0.97524,0.75018 -2.36309,-2.92572 0.97524,-0.75019 z m 3.15079,3.90097 2.32558,2.88822 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.75019 z m 3.11328,3.86346 2.36308,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.15078,3.90098 2.32558,2.92572 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.78769 z m 3.11328,3.90097 2.36308,2.92573 -0.97524,0.78769 -2.36309,-2.92573 0.97525,-0.78769 z m 3.15078,3.90097 2.32558,2.92573 -0.97524,0.78769 -2.32558,-2.92573 0.97524,-0.78769 z m 3.11328,3.90097 2.36309,2.92573 -0.97525,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.15078,3.90097 2.32558,2.92573 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.7877 z m 3.11328,3.90097 2.36309,2.92573 -0.97525,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.11327,3.90098 2.36309,2.92572 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.78769 z m 3.15079,3.90097 2.36309,2.92573 -0.97525,0.78769 -2.36309,-2.92573 0.97525,-0.78769 z m 3.11327,3.90097 2.3630
 9,2.92573 -0.97524,0.78769 -2.36309,-2.92572 0.97524,-0.7877 z m 3.15079,3.90097 2.32558,2.92573 -0.97525,0.7877 -2.32557,-2.92573 0.97524,-0.7877 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.7877 z m 3.15079,3.90098 2.32558,2.92572 -0.97525,0.7877 -2.32557,-2.92573 0.97524,-0.78769 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.78769 -2.36309,-2.92573 0.97524,-0.78769 z m 3.15079,3.90097 2.32558,2.92573 -0.97524,0.78769 -2.32558,-2.92572 0.97524,-0.7877 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.7877 z m 3.15079,3.90097 2.32558,2.92573 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.7877 z m 3.11328,3.90097 2.36308,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.15078,3.90098 2.32558,2.92573 -0.97524,0.78769 -2.32558,-2.92573 0.97524,-0.78769 z m 3.11328,3.90097 2.36308,2.92573 -0.97524,0.78769 -2.36309,-2.92573 0.97525,-0.78769 z m 3.15078,3.90097 2.32558,2.92573 -0.97524,0.78769 -2.32558,-2.925
 72 0.97524,-0.7877 z m 3.11328,3.90097 2.36309,2.92573 -0.97525,0.7877 -2.36309,-2.92573 0.97525,-0.7877 z m 3.15078,3.90097 2.32558,2.92573 -0.97524,0.7877 -2.32558,-2.92573 0.97524,-0.7877 z m 3.11328,3.90098 2.36309,2.92572 -0.97525,0.7877 -2.36309,-2.92573 0.97525,-0.78769 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.78769 -2.36309,-2.92573 0.97524,-0.78769 z m 3.15079,3.90097 2.36309,2.92573 -0.97525,0.78769 -2.36309,-2.92572 0.97525,-0.7877 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.7877 z m 3.15079,3.90097 2.32558,2.92573 -0.97525,0.7877 -2.32558,-2.92573 0.97525,-0.7877 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.7877 -2.36309,-2.92573 0.97524,-0.7877 z m 3.15079,3.90098 2.32558,2.92573 -0.97525,0.78769 -2.32557,-2.92573 0.97524,-0.78769 z m 3.11327,3.90097 2.36309,2.92573 -0.97524,0.75018 -2.36309,-2.88822 0.97524,-0.78769 z m 3.15079,3.90097 2.32558,2.92573 -0.97524,0.75019 -2.32558,-2.92573 0.97524,-0.75019 z m 3.11327,3.90097 1.
 12528,1.35034 1.23781,1.57539 -0.97524,0.75019 -1.23781,-1.53789 -1.12528,-1.38784 0.97524,-0.75019 z m 3.15079,3.90097 0.63766,0.7877 1.68792,2.13803 -0.97524,0.7877 -1.68792,-2.13803 -0.67517,-0.7877 1.01275,-0.7877 z m 3.11328,3.90098 0.11252,0.15003 2.21305,2.77569 -0.97524,0.7877 -2.21305,-2.77569 -0.11253,-0.15004 0.97525,-0.78769 z m 3.11327,3.90097 2.02551,2.58814 0.30007,0.3751 -0.97524,0.75018 -0.30008,-0.37509 -2.0255,-2.55064 0.97524,-0.78769 z m 3.11328,3.93848 1.35033,1.68792 0.97525,1.23781 -0.97525,0.78769 -1.01275,-1.2378 -1.31283,-1.72543 0.97525,-0.75019 z m 3.11327,3.90097 0.60015,0.7877 1.72543,2.17554 -0.97524,0.75019 -1.72543,-2.13804 -0.63766,-0.78769 1.01275,-0.7877 z m 3.07577,3.93848 2.13803,2.66316 0.18755,0.26257 -0.97525,0.7877 -0.22505,-0.30008 -2.10052,-2.66316 0.97524,-0.75019 z m 3.11327,3.90097 1.23781,1.5754 1.08777,1.38784 -0.97524,0.75019 -1.08777,-1.38785 -1.23781,-1.53788 0.97524,-0.7877 z m 3.11328,3.93848 0.30007,0.3751 2.02551,2.55063 -0.97
 524,0.7877 -2.02551,-2.55064 -0.30007,-0.4126 0.97524,-0.75019 z m 3.11327,3.90098 1.42536,1.83795 0.90022,1.12528 -1.01275,0.7877 -0.86271,-1.12528 -1.46287,-1.83796 1.01275,-0.78769 z m 3.07577,3.93848 2.32558,2.96324 -0.97524,0.75018 -2.32558,-2.92573 0.97524,-0.78769 z m 3.11328,3.93848 2.32558,2.92573 -1.01276,0.78769 -2.28807,-2.96323 0.97525,-0.75019 z m 3.07576,3.93848 0.18755,0.22506 2.13803,2.70067 -0.97524,0.78769 -2.13803,-2.73818 -0.18755,-0.22505 0.97524,-0.75019 z m 3.11328,3.90097 0.90022,1.16279 1.42536,1.80045 -1.01275,0.7877 -1.38785,-1.80045 -0.90022,-1.16279 0.97524,-0.7877 z m 3.07576,3.93848 1.53789,1.95049 0.78769,1.01275 -0.97524,0.75019 -0.8252,-1.01275 -1.50038,-1.91298 0.97524,-0.7877 z m 3.07577,3.93848 2.06302,2.58815 0.26256,0.37509 -0.97524,0.75019 -0.26257,-0.33759 -2.06301,-2.58814 0.97524,-0.7877 z m 3.11328,3.93849 2.32558,2.96323 -1.01276,0.75019 -2.28807,-2.92573 0.97525,-0.78769 z m 3.07576,3.93848 2.32558,2.96323 -1.01275,0.75019 -2.28807,-2.9
 6324 0.97524,-0.75018 z m 3.07577,3.93848 2.32558,2.96324 -1.01275,0.75018 -2.28807,-2.92573 0.97524,-0.78769 z m 3.07577,3.93848 0.15003,0.15004 2.17554,2.8132 -0.97524,0.75018 -2.21305,-2.77569 -0.11253,-0.15004 0.97525,-0.78769 z m 4.76368,0.97524 1.65041,8.21455 -7.57688,-3.6009 5.92647,-4.61365 z"
+         id="path3215"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <path
+         d="m 586.8712,457.53898 3.71343,0.52514 0.18755,-1.23781 -3.71343,-0.52513 -0.18755,1.2378 z m 4.95124,0.71268 3.71342,0.52513 0.18755,-1.23781 -3.71343,-0.52513 -0.18754,1.23781 z m 4.95123,0.67517 3.71343,0.52513 0.18754,-1.23781 -3.71342,-0.52513 -0.18755,1.23781 z m 4.95123,0.71268 3.71343,0.52513 0.18755,-1.23781 -3.71343,-0.52513 -0.18755,1.23781 z m 4.95124,0.67517 3.71342,0.52513 0.18755,-1.23781 -3.71343,-0.52513 -0.18754,1.23781 z m 4.95123,0.71267 3.71343,0.52513 0.18754,-1.2378 -3.71342,-0.52513 -0.18755,1.2378 z m 4.95124,0.67517 1.20029,0.18755 2.51313,0.33758 0.18755,-1.23781 -2.51313,-0.33758 -1.2003,-0.18755 -0.18754,1.23781 z m 4.95123,0.71268 3.71342,0.52513 0.18755,-1.23781 -3.71342,-0.52513 -0.18755,1.23781 z m 4.95123,0.71268 3.71343,0.52513 0.18754,-1.23781 -3.71342,-0.52513 -0.18755,1.23781 z m 4.95124,0.71267 1.76293,0.22506 0,0 1.95049,0.30008 0.18755,-1.23781 -1.95049,-0.30008 -1.76294,-0.22505 -0.18754,1.2378 z m 4.95123,0.71268 3.71342,0.56264 0
 .18755,-1.23781 -3.71342,-0.56264 -0.18755,1.23781 z m 4.95123,0.71268 3.71343,0.56264 0.18754,-1.23781 -3.71342,-0.56264 -0.18755,1.23781 z m 4.95124,0.75019 2.28807,0.33758 0,0 1.42535,0.22506 0.18755,-1.23781 -1.42536,-0.22506 -2.28807,-0.33758 -0.18754,1.23781 z m 4.95123,0.75018 3.71342,0.56264 0.18755,-1.23781 -3.71342,-0.56264 -0.18755,1.23781 z m 4.95123,0.75019 3.71343,0.56264 0.18754,-1.23781 -3.71342,-0.56264 -0.18755,1.23781 z m 4.91373,0.75019 2.8132,0.4126 0,0 0.90022,0.15004 0.18755,-1.23781 -0.90023,-0.15004 -2.77569,-0.4126 -0.22505,1.23781 z m 4.95123,0.75018 3.71343,0.60015 0.18754,-1.23781 -3.71342,-0.60014 -0.18755,1.2378 z m 4.95123,0.7877 3.71343,0.60015 0.18755,-1.23781 -3.71343,-0.60015 -0.18755,1.23781 z m 4.95124,0.7877 3.2258,0.52513 0,0 0.45011,0.075 0.22506,-1.23781 -0.48762,-0.075 -3.22581,-0.52513 -0.18754,1.23781 z m 4.91372,0.78769 3.71343,0.63766 0.18754,-1.23781 -3.71342,-0.63766 -0.18755,1.23781 z m 4.95124,0.82521 3.67591,0.63766 0.22506,-1.2378
 1 -3.71343,-0.63766 -0.18754,1.23781 z m 4.91372,0.8252 3.63841,0.60015 0,0 0.075,0.0375 0.18755,-1.23781 -0.0375,0 -3.6384,-0.63765 -0.22506,1.2378 z m 4.91372,0.82521 3.71343,0.67517 0.22505,-1.23781 -3.71342,-0.63766 -0.22506,1.2003 z m 4.95124,0.90022 3.67591,0.63766 0.22506,-1.23781 -3.71343,-0.63765 -0.18754,1.2378 z m 4.91372,0.86272 3.67592,0.63766 0.22505,-1.23781 -3.67591,-0.63766 -0.22506,1.23781 z m 4.91373,0.86271 3.67591,0.67517 0.22506,-1.2003 -3.67592,-0.71267 -0.22505,1.2378 z m 4.91372,0.90023 3.67592,0.71268 0.26256,-1.23781 -3.71342,-0.67517 -0.22506,1.2003 z m 4.91372,0.93773 3.71343,0.67517 0.22505,-1.23781 -3.71342,-0.67517 -0.22506,1.23781 z m 4.91373,0.90023 3.67591,0.75018 0.26257,-1.23781 -3.67592,-0.71267 -0.26256,1.2003 z m 4.91372,0.97524 3.67592,0.71268 0.22505,-1.2003 -3.67591,-0.75019 -0.22506,1.23781 z m 4.91373,0.97524 3.67591,0.71268 0.22506,-1.23781 -3.67592,-0.71268 -0.22505,1.23781 z m 4.87621,0.97524 3.67592,0.75019 0.26256,-1.23781 -3.67591,-
 0.75018 -0.26257,1.2378 z m 4.91372,0.97525 3.67592,0.75018 0.26257,-1.20029 -3.67592,-0.75019 -0.26257,1.2003 z m 4.91373,1.01275 3.67591,0.75019 0.22506,-1.23781 -3.67592,-0.75019 -0.22505,1.23781 z m 4.87621,1.01275 3.67592,0.75019 0.26256,-1.2003 -3.67591,-0.7877 -0.26257,1.23781 z m 4.91373,1.01275 3.67591,0.7877 0.22506,-1.23781 -3.63841,-0.75018 -0.26256,1.20029 z m 4.87621,1.05027 3.67592,0.78769 0.26256,-1.23781 -3.67591,-0.78769 -0.26257,1.23781 z m 4.87622,1.01275 3.67591,0.8252 0.26257,-1.2378 -3.63841,-0.7877 -0.30007,1.2003 z m 4.91372,1.08777 3.67592,0.7877 0.26256,-1.23781 -3.67591,-0.7877 -0.26257,1.23781 z m 4.87622,1.05026 3.67591,0.7877 0.26257,-1.2003 -3.67592,-0.82521 -0.26256,1.23781 z m 4.87621,1.05026 0.075,0.0375 0,-0.0375 3.6009,0.82521 0.26256,-1.23781 -3.60089,-0.7877 -0.075,0 -0.26257,1.2003 z m 4.91373,1.08777 3.6384,0.82521 0.26257,-1.23781 -3.63841,-0.8252 -0.26256,1.2378 z m 4.87621,1.08777 3.63841,0.7877 0.30007,-1.2003 -3.67591,-0.8252 -0.26257,1.
 2378 z m 4.87621,1.08778 0.15004,0 3.48837,0.8252 0.30008,-1.23781 -3.52588,-0.78769 -0.15004,-0.0375 -0.26257,1.23781 z m 4.87622,1.08777 3.63841,0.8252 0.30007,-1.23781 -3.67592,-0.8252 -0.26256,1.23781 z m 4.87621,1.08777 3.67592,0.8252 0.26257,-1.23781 -3.67592,-0.78769 -0.26257,1.2003 z m 4.87622,1.08777 3.67591,0.8252 0.26257,-1.20029 -3.67592,-0.82521 -0.26256,1.2003 z m 4.87621,1.12528 3.67592,0.7877 0.26257,-1.2003 -3.63841,-0.82521 -0.30008,1.23781 z m 2.06302,3.63841 8.13953,-2.02551 -6.48912,-5.28882 -1.65041,7.31433 z"
+         id="path3217"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="620.69214"
+         y="485.10345"
+         id="text3219"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Submit job</text>
+      <text
+         x="604.18805"
+         y="501.60757"
+         id="text3221"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(send dataflow)</text>
+      <text
+         x="731.01959"
+         y="505.01871"
+         id="text3223"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Cancel /</text>
+      <text
+         x="722.16742"
+         y="521.52283"
+         id="text3225"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">update job</text>
+      <path
+         d="m 819.16657,486.83378 -3.03825,-2.25056 0.75018,-0.97524 3.00075,2.21305 -0.71268,1.01275 z m -4.051,-2.96324 -3.00075,-2.25056 0.71268,-1.01275 3.03825,2.25056 -0.75018,1.01275 z m -4.0135,-2.96323 -3.03826,-2.25056 0.75019,-1.01276 3.03825,2.25056 -0.75018,1.01276 z m -4.0135,-2.96324 -3.07577,-2.21305 0.75019,-1.01276 3.03825,2.21306 -0.71267,1.01275 z m -4.08852,-2.92573 -1.53789,-1.12528 0,0 -1.50037,-1.08777 0.75019,-1.01275 1.50037,1.08777 1.53788,1.12528 -0.75018,1.01275 z m -4.05101,-2.92573 -3.03826,-2.17554 0.71268,-1.01275 3.03826,2.17554 -0.71268,1.01275 z m -4.08852,-2.88822 -3.07577,-2.10052 0.71268,-1.05026 3.07577,2.13803 -0.71268,1.01275 z m -4.08852,-2.8132 -1.23781,-0.86272 0.0375,0.0375 -1.91297,-1.27531 0.71267,-1.05027 1.87547,1.27532 1.23781,0.82521 -0.71268,1.05026 z m -4.16354,-2.8132 -3.07576,-2.06302 0,0 -0.0375,0 0.67517,-1.05026 0.0375,0 3.11327,2.06302 -0.71268,1.05026 z m -4.16353,-2.73818 -3.15079,-2.02551 0.67517,-1.05026 3.15078,2.02551
  -0.67516,1.05026 z m -4.20105,-2.70068 -0.82521,-0.48762 0.0375,0 -2.40059,-1.46286 0.63765,-1.05026 2.4006,1.42535 0.82521,0.52513 -0.67517,1.05026 z m -4.27607,-2.58814 -2.70067,-1.65041 0,0 -0.48762,-0.26257 0.63766,-1.08777 0.48762,0.26257 2.73818,1.65041 -0.67517,1.08777 z m -4.27606,-2.55064 -3.26331,-1.87546 0.63766,-1.05027 3.26331,1.83796 -0.63766,1.08777 z m -4.35108,-2.47561 -0.33759,-0.18755 0.0375,0 -2.96324,-1.57539 0.56264,-1.12528 3.00075,1.6129 0.33758,0.18755 -0.63765,1.08777 z m -4.3886,-2.36309 -2.28807,-1.2003 0.0375,0 -1.05026,-0.52513 0.56264,-1.12528 1.05026,0.52513 2.28807,1.23781 -0.60015,1.08777 z m -4.4261,-2.28807 -1.05026,-0.52513 0,0 -2.28807,-1.12528 0.52513,-1.12528 2.32558,1.12528 1.05026,0.52513 -0.56264,1.12528 z m -4.46361,-2.17554 -3.07577,-1.42536 0,0 -0.33758,-0.11253 0.48762,-1.16279 0.33758,0.15004 3.11328,1.42536 -0.52513,1.12528 z m -4.53863,-2.06302 -1.83796,-0.78769 0,0 -1.6129,-0.63766 0.48762,-1.16279 1.6129,0.67517 1.83796,0.78769 -0
 .48762,1.12528 z m -4.61365,-1.91297 -0.56264,-0.22506 0,0 -2.92573,-1.16279 0.45011,-1.16279 2.96324,1.16279 0.56264,0.22506 -0.48762,1.16279 z m -4.65116,-1.80045 -2.66316,-0.97525 0.0375,0 -0.86271,-0.30007 0.37509,-1.16279 0.90023,0.30007 2.66316,0.97525 -0.45011,1.16279 z m -4.68867,-1.65041 -1.38785,-0.48763 0,0 -2.17554,-0.67516 0.3751,-1.2003 2.21305,0.67517 1.38784,0.48762 -0.4126,1.2003 z m -4.76369,-1.53789 -0.075,-0.0375 0,0 -3.48837,-1.01275 0.33758,-1.2003 3.52588,1.01275 0.075,0.0375 -0.3751,1.2003 z m -4.76368,-1.35033 -2.36309,-0.63766 0,0 -1.27532,-0.30008 0.30008,-1.2003 1.27531,0.30008 2.36309,0.63766 -0.30007,1.2003 z m -4.83871,-1.2003 -1.08777,-0.26257 0,0 -2.55063,-0.56264 0.26256,-1.2003 2.55064,0.52513 1.12528,0.26257 -0.30008,1.23781 z m -4.87621,-1.05026 -3.52588,-0.67517 0.0375,0 -0.18755,-0.0375 0.22506,-1.2003 0.15003,0 3.56339,0.67517 -0.26256,1.23781 z m -4.87622,-0.90023 -2.32558,-0.37509 0,0 -1.38784,-0.18755 0.18754,-1.23781 1.38785,0.18755 2.3255
 8,0.37509 -0.18755,1.23781 z m -4.95123,-0.75019 -1.12528,-0.15003 0,0 -2.58815,-0.30008 0.15004,-1.23781 2.62566,0.30008 1.12528,0.15004 -0.18755,1.2378 z m -4.95123,-0.60014 -3.71343,-0.3751 0.11253,-1.23781 3.75093,0.33759 -0.15003,1.27532 z m -4.95124,-0.48763 -2.66316,-0.22505 0,0 -1.05026,-0.075 0.075,-1.23781 1.08777,0.075 2.66316,0.22506 -0.11253,1.2378 z m -4.98874,-0.37509 -1.53788,-0.075 0,0 -2.17555,-0.11253 0.0375,-1.2378 2.21305,0.075 1.5754,0.11253 -0.11253,1.23781 z m -4.95123,-0.26257 -0.52514,0 0.0375,0 -3.26331,-0.11252 0.0375,-1.23781 3.26331,0.11253 0.48762,0 -0.0375,1.2378 z m -4.98875,-0.15003 -3.45086,-0.0375 0,0 -0.30007,0 0,-1.23781 0.33758,0 3.41335,0.0375 0,1.23781 z m -4.98874,-0.0375 -2.47562,0 0.0375,0 -1.31282,0 -0.0375,-1.23781 1.31282,-0.0375 2.47562,0 0,1.27532 z m -4.98874,0.0375 -3.75094,0.075 -0.0375,-1.27532 3.75094,-0.075 0.0375,1.27532 z m -5.02626,0.075 -0.52513,0 0,0 -3.18829,0.15003 -0.075,-1.2378 3.2258,-0.15004 0.56264,0 0,1.23781 z m -4
 .95123,0.18754 -3.75093,0.15004 0,0 0,0 -0.075,-1.23781 0,0 3.75093,-0.15004 0.075,1.23781 z m -4.98874,0.22506 -3.75094,0.26256 -0.075,-1.27531 3.75093,-0.22506 0.075,1.23781 z m -4.98874,0.33758 -2.06302,0.11253 0.0375,0 -1.72543,0.11253 -0.075,-1.23781 1.68792,-0.11253 2.06302,-0.15003 0.075,1.27531 z m -4.98875,0.33759 -3.75093,0.30007 -0.075,-1.27532 3.71342,-0.26256 0.11253,1.23781 z m -4.98874,0.37509 -0.4126,0.0375 0,0 -3.30083,0.30007 -0.11252,-1.2378 3.30082,-0.30008 0.4126,-0.0375 0.11253,1.23781 z m -4.98874,0.45011 -3.71343,0.3751 -0.11253,-1.27532 3.71343,-0.33759 0.11253,1.23781 z m -4.95124,0.48762 -3.75093,0.3751 -0.11253,-1.23781 3.71343,-0.37509 0.15003,1.2378 z m -4.98874,0.52514 -2.36309,0.26256 0,0 -1.35033,0.15004 -0.15004,-1.23781 1.38784,-0.18755 2.36309,-0.22505 0.11253,1.23781 z m -4.95123,0.52513 -3.71343,0.45011 -0.15003,-1.23781 3.71342,-0.45011 0.15004,1.23781 z m -4.98875,0.60015 -0.90022,0.11252 0,0 -2.8132,0.30008 -0.15004,-1.23781 2.8132,-0.33758 0
 .93774,-0.075 0.11252,1.23781 z m -4.95123,0.56264 -3.71342,0.45011 -0.15004,-1.23781 3.71342,-0.45011 0.15004,1.23781 z m -4.95123,0.63765 -3.71343,0.45012 -0.18754,-1.23781 3.75093,-0.48762 0.15004,1.27531 z m -4.95124,0.60015 -3.75093,0.48763 -0.15004,-1.23781 3.71343,-0.48762 0.18754,1.2378 z m -4.98874,0.63766 -0.56264,0.075 -0.15004,-1.23781 0.56264,-0.075 0.15004,1.23781 z m 1.08777,3.03826 -7.91447,-2.77569 6.97674,-4.68867 0.93773,7.46436 z"
+         id="path3227"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="637.29187"
+         y="402.23114"
+         id="text3229"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Status</text>
+      <text
+         x="632.04059"
+         y="418.73523"
+         id="text3231"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">updates</text>
+      <text
+         x="724.87628"
+         y="415.9035"
+         id="text3233"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Statistics &amp;</text>
+      <text
+         x="740.78027"
+         y="432.40762"
+         id="text3235"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">results</text>
+      <path
+         d="m 421.75507,306.2638 116.27897,0 0,47.24302 c -58.13949,0 -58.13949,18.00448 -116.27897,7.78319 z"
+         id="path3237"
+         style="fill:#e6526e;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 421.75507,306.2638 116.27897,0 0,47.24302 c -58.13949,0 -58.13949,18.00448 -116.27897,7.78319 z"
+         id="path3239"
+         style="fill:none;stroke:#8a3142;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="451.71744"
+         y="327.26007"
+         id="text3241"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Program</text>
+      <text
+         x="464.47064"
+         y="343.01398"
+         id="text3243"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">code</text>
+      <path
+         d="m 904.23777,517.74148 0,32.48309 95.49879,0 0,-32.48309 -95.49879,0 z"
+         id="path3245"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 904.23777,517.74148 95.49879,0 0,32.48309 -95.49879,0 z"
+         id="path3247"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="919.12915"
+         y="539.22919"
+         id="text3249"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Scheduler</text>
+      <path
+         d="m 904.23777,553.82547 0,32.37057 95.49879,0 0,-32.37057 -95.49879,0 z"
+         id="path3251"
+         style="fill:#b8bec6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 904.23777,553.82547 95.49879,0 0,32.37057 -95.49879,0 z"
+         id="path3253"
+         style="fill:none;stroke:#6e7277;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="915.22498"
+         y="567.3382"
+         id="text3255"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Checkpoint</text>
+      <text
+         x="913.12445"
+         y="583.0921"
+         id="text3257"
+         xml:space="preserve"
+         style="font-size:13.20328903px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Coordinator</text>
+      <path
+         d="m 352.98169,391.89763 0,43.30454 107.83936,0 0,-43.30454 -107.83936,0 z"
+         id="path3259"
+         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+      <path
+         d="m 352.98169,391.89763 107.83936,0 0,43.30454 -107.83936,0 z"
+         id="path3261"
+         style="fill:none;stroke:#898c92;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="367.77243"
+         y="410.74432"
+         id="text3263"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Optimizer /</text>
+      <text
+         x="359.37033"
+         y="427.24844"
+         id="text3265"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Graph Builder</text>
+      <path
+         d="m 400.61855,432.22018 -0.75018,1.36909 0.0187,0 -0.71268,1.36909 0,-0.0188 -0.28132,0.5814 -1.12528,-0.54389 0.28132,-0.58139 0.73144,-1.38785 0.75018,-1.38784 1.08777,0.60015 z m -2.2318,4.38859 -0.4126,1.01275 0.0188,-0.0375 -0.43136,1.31283 0.0188,-0.075 -0.28132,1.25656 -1.21906,-0.26256 0.28132,-1.29407 0.45012,-1.36909 0.43135,-1.01276 1.14404,0.46887 z m -1.16279,4.6324 0,0 0,-0.075 0.0375,0.60015 -0.0188,-0.0563 0.0938,0.5814 0,-0.0563 0.15003,0.56264 -0.0188,-0.0563 0.22506,0.56264 -0.0375,-0.075 0.30007,0.54388 -0.0375,-0.0563 0.37509,0.52513 -0.0375,-0.0563 0.0375,0.0563 -0.93773,0.82521 -0.075,-0.075 -0.39385,-0.5814 -0.33758,-0.6189 -0.26257,-0.6189 -0.16879,-0.63766 -0.11253,-0.65642 -0.0375,-0.67517 0,-0.0563 1.25656,0.0938 z m 1.89422,3.86347 0.0563,0.0375 -0.0375,-0.0375 0.6189,0.48762 -0.0563,-0.0375 0.71267,0.45011 -0.0375,-0.0188 0.80645,0.43135 -0.0563,-0.0188 0.90022,0.4126 -0.0375,-0.0188 0.16879,0.0563 -0.45011,1.18155 -0.1688,-0.075 -0.95649,-0.4
 3136 -0.8252,-0.45011 -0.75019,-0.48762 -0.65641,-0.50638 -0.075,-0.075 0.84396,-0.90022 z m 4.18229,2.17554 0.7877,0.26256 -0.0375,0 1.21905,0.33759 -0.0187,0 1.31283,0.31883 -0.0188,-0.0188 0.30008,0.075 -0.26257,1.21906 -0.30007,-0.075 -1.33159,-0.31883 -1.25656,-0.33759 -0.80645,-0.28132 0.4126,-1.18154 z m 4.74493,1.21905 1.50038,0.24381 -0.0188,0 1.70668,0.22506 -0.0188,0 0.50638,0.075 -0.15004,1.23781 -0.50637,-0.0563 -1.72543,-0.24381 -1.50038,-0.26256 0.2063,-1.21906 z m 4.91373,0.67517 0.0563,0.0188 0,0 1.93173,0.18754 -0.0188,0 1.74418,0.13128 -0.0938,1.23781 -1.74418,-0.13128 -1.95049,-0.18755 -0.075,0 0.15004,-1.25656 z m 4.95123,0.4126 1.21906,0.0938 -0.0188,0 2.26932,0.11252 -0.0188,0 0.28132,0 -0.0563,1.25657 -0.28132,-0.0188 -2.25056,-0.11252 -1.21906,-0.075 0.075,-1.25657 z m 4.98874,0.26257 0.82521,0.0375 0,0 2.45686,0.075 0,0 0.45011,0 -0.0188,1.25656 -0.45011,-0.0188 -2.47562,-0.075 -0.84396,-0.0375 0.0563,-1.23781 z m 4.96999,0.13128 0.86272,0.0188 0,0 2.64441,
 0.0375 0,0 0.24381,0 0,1.25656 -0.26257,0 -2.64441,-0.0375 -0.86271,-0.0188 0.0188,-1.25656 z m 5.0075,0.0563 1.23781,0.0188 -0.0188,0 2.51313,-0.0188 0.0188,1.25656 -2.53188,0 -1.2378,0 0.0188,-1.25656 z m 4.98874,0 1.93173,-0.0188 0,0 1.81921,-0.0187 0.0188,1.25656 -1.8192,0.0188 -1.95049,0 0,-1.2378 z m 4.98875,-0.0563 2.94448,-0.0563 0.80645,-0.0188 0.0375,1.25657 -0.82521,0.0188 -2.94448,0.0563 -0.0188,-1.25656 z m 5.00749,-0.0938 1.03151,-0.0188 0,0 2.70067,-0.075 0.0375,1.23781 -2.70067,0.075 -1.05026,0.0188 -0.0188,-1.23781 z m 4.98875,-0.13128 2.4381,-0.075 0,0 1.31283,-0.0375 0.0375,1.23781 -1.31283,0.0563 -2.4381,0.075 -0.0375,-1.25656 z m 4.98874,-0.1688 3.75093,-0.11252 0.0563,1.2378 -3.75094,0.13129 -0.0563,-1.25657 z m 5.0075,-0.16879 3.75093,-0.15004 0.0375,1.25657 -3.75093,0.15003 -0.0375,-1.25656 z m 4.98874,-0.18755 0.86271,-0.0375 0,0 2.88822,-0.13129 0.0563,1.25657 -2.90698,0.13128 -0.84396,0.0375 -0.0563,-1.25657 z m 5.0075,-0.2063 2.8132,-0.13128 -0.0188,0 0.9
 3774,-0.0375 0.0563,1.23781 -0.93773,0.0563 -2.8132,0.11253 -0.0375,-1.23781 z m 4.98874,-0.22505 3.75093,-0.18755 0.0563,1.25656 -3.75094,0.1688 -0.0563,-1.23781 z m 4.98874,-0.24381 3.75094,-0.1688 0.0563,1.23781 -3.75093,0.18755 -0.0563,-1.25656 z m 4.98875,-0.22506 2.08176,-0.11253 1.66917,-0.075 0.075,1.23781 -1.66917,0.0938 -2.08177,0.0938 -0.075,-1.23781 z m 5.00749,-0.26257 3.73218,-0.18754 0.075,1.25656 -3.75093,0.18755 -0.0563,-1.25657 z m 4.98875,-0.24381 0.35633,-0.0187 0.0563,1.23781 -0.35634,0.0188 -0.0563,-1.23781 z m -1.05027,-3.07576 7.67067,3.35708 -7.29557,4.12603 -0.3751,-7.48311 z"
+         id="path3267"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="363.56607"
+         y="467.15714"
+         id="text3269"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Dataflow graph</text>
+      <path
+         d="m 442.44147,350.206 -3.00074,2.2318 -0.75019,-0.994 3.0195,-2.2318 0.73143,0.994 z m -4.0135,2.98199 -3.00074,2.25056 -0.75019,-0.994 3.00075,-2.25056 0.75018,0.994 z m -3.97599,3.00075 -2.96324,2.30682 -0.76894,-0.97524 2.96324,-2.30683 0.76894,0.97525 z m -3.91972,3.09452 -2.88822,2.38184 -0.7877,-0.95649 2.88822,-2.4006 0.7877,0.97525 z m -3.8072,3.20705 -0.80645,0.69392 0.0188,0 -1.63165,1.48162 0.0188,-0.0188 -0.35634,0.33759 -0.86271,-0.90023 0.35633,-0.33758 1.63166,-1.50038 0.80645,-0.71267 0.82521,0.95649 z m -3.65716,3.35708 -0.30008,0.30008 0,0 -1.48162,1.51912 0.0188,-0.0188 -0.80645,0.88147 -0.91898,-0.84396 0.80645,-0.88147 1.50037,-1.53788 0.31883,-0.31883 0.86272,0.90022 z m -3.3946,3.6009 -1.06902,1.25656 0.0188,-0.0188 -1.2003,1.57539 0.0188,-0.0375 -0.075,0.0938 -1.01276,-0.71267 0.075,-0.0938 1.21906,-1.59414 1.08777,-1.27532 0.93773,0.80645 z m -3.0195,3.90097 -0.31883,0.46887 0.0188,-0.0188 -1.01275,1.6129 0.0188,-0.0187 -0.61891,1.10653 -1.08777,-0
 .60015 0.61891,-1.12528 1.0315,-1.65042 0.33759,-0.48762 1.01275,0.71268 z m -2.45686,4.23856 -0.52513,1.05026 0.0188,-0.0188 -0.71268,1.66917 0.0188,-0.0375 -0.28132,0.71268 -1.16279,-0.43136 0.26256,-0.73143 0.73144,-1.68792 0.52513,-1.06902 1.12528,0.54389 z m -1.89423,4.53863 -0.48762,1.50037 0.0188,-0.0188 -0.48762,1.70667 0,-0.0187 -0.0938,0.37509 -1.21905,-0.30007 0.0938,-0.39385 0.50638,-1.72543 0.48762,-1.51913 1.18154,0.39385 z m -1.35033,4.76368 -0.0375,0.13129 0.0188,-0.0375 -0.69392,3.48837 0,-0.0188 0,0.0375 -1.23781,-0.16879 0,-0.075 0.69393,-3.50712 0.0375,-0.15004 1.21906,0.30007 z m -0.90023,4.83871 -0.43135,3.11328 -1.23781,-0.1688 0.43136,-3.11327 1.2378,0.16879 z m 2.83196,2.21305 -4.57614,7.033 -2.88822,-7.87696 7.46436,0.84396 z"
+         id="path3271"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+      <text
+         x="347.75757"
+         y="363.58319"
+         id="text3273"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Program</text>
+      <text
+         x="346.85733"
+         y="380.08731"
+         id="text3275"
+         xml:space="preserve"
+         style="font-size:13.80343914px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Dataflow</text>
+    </g>
+  </g>
+</svg>


[44/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/ml/svm.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/ml/svm.md b/docs/apis/batch/libs/ml/svm.md
deleted file mode 100644
index 6d09482..0000000
--- a/docs/apis/batch/libs/ml/svm.md
+++ /dev/null
@@ -1,223 +0,0 @@
----
-mathjax: include
-title: SVM using CoCoA
-# Sub navigation
-sub-nav-group: batch
-sub-nav-parent: flinkml
-sub-nav-title: SVM (CoCoA)
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-## Description
-
-Implements a soft-margin SVM using the communication-efficient distributed dual coordinate
-ascent (CoCoA) algorithm with the hinge-loss function.
-The algorithm solves the following minimization problem:
-
-$$\min_{\mathbf{w} \in \mathbb{R}^d} \frac{\lambda}{2} \left\lVert \mathbf{w} \right\rVert^2 + \frac{1}{n} \sum_{i=1}^n l_{i}\left(\mathbf{w}^T\mathbf{x}_i\right)$$
-
-with $$\mathbf{w}$$ being the weight vector, $$\lambda$$ being the regularization constant,
-$$\mathbf{x}_i \in \mathbb{R}^d$$ being the data points and $$l_{i}$$ being the convex loss
-functions, which can also depend on the labels $$y_{i} \in \mathbb{R}$$.
-In the current implementation the regularizer is the $$\ell_2$$-norm and the loss functions are the hinge-loss functions:
-
-  $$l_{i} = \max\left(0, 1 - y_{i} \mathbf{w}^T\mathbf{x}_i \right)$$
-
-With these choices, the problem definition is equivalent to an SVM with soft margin.
-Thus, the algorithm allows us to train a soft-margin SVM.
-
-The minimization problem is solved by applying stochastic dual coordinate ascent (SDCA).
-In order to make the algorithm efficient in a distributed setting, the CoCoA algorithm calculates
-several iterations of SDCA locally on a data block before merging the local updates into a
-valid global state.
-This state is redistributed to the different data partitions where the next round of local SDCA
-iterations is then executed.
-The number of outer iterations and local SDCA iterations control the overall network costs, because
-network communication is only required for each outer iteration.
-The local SDCA iterations are embarrassingly parallel once the individual data partitions have been
-distributed across the cluster.
-
-The implementation of this algorithm is based on the work of
-[Jaggi et al.](http://arxiv.org/abs/1409.1458)
-
-## Operations
-
-`SVM` is a `Predictor`.
-As such, it supports the `fit` and `predict` operations.
-
-### Fit
-
-SVM is trained given a set of `LabeledVector`:
-
-* `fit: DataSet[LabeledVector] => Unit`
-
-### Predict
-
-For all subtypes of FlinkML's `Vector`, SVM predicts the corresponding class label:
-
-* `predict[T <: Vector]: DataSet[T] => DataSet[(T, Double)]`, where the `(T, Double)` tuple
-  corresponds to (original_features, predicted_label)
-
-If we call evaluate with a `DataSet[(Vector, Double)]`, we make a prediction on the class label
-for each example, and return a `DataSet[(Double, Double)]`. In each tuple the first element
-is the true value, as provided by the input `DataSet[(Vector, Double)]`, and the second element
-is the predicted value. You can then use these `(truth, prediction)` tuples to evaluate
-the algorithm's performance.
-
-* `evaluate: DataSet[(Vector, Double)] => DataSet[(Double, Double)]`
-
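-A minimal sketch of the evaluation call (assuming a trained `svm` instance; the mapping from
-`LabeledVector` to `(vector, label)` pairs is illustrative):
-
-{% highlight scala %}
-val testLV: DataSet[LabeledVector] = ???
-// turn the labeled data into (features, true label) pairs and evaluate
-val truthAndPrediction: DataSet[(Double, Double)] =
-  svm.evaluate(testLV.map(lv => (lv.vector, lv.label)))
-{% endhighlight %}
-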
-## Parameters
-
-The SVM implementation can be controlled by the following parameters:
-
-<table class="table table-bordered">
-<thead>
-  <tr>
-    <th class="text-left" style="width: 20%">Parameters</th>
-    <th class="text-center">Description</th>
-  </tr>
-</thead>
-
-<tbody>
-  <tr>
-    <td><strong>Blocks</strong></td>
-    <td>
-      <p>
-        Sets the number of blocks into which the input data will be split.
-        On each block the local stochastic dual coordinate ascent method is executed.
-        This number should be set at least to the degree of parallelism.
-        If no value is specified, then the parallelism of the input DataSet is used as the number of blocks.
-        (Default value: <strong>None</strong>)
-      </p>
-    </td>
-  </tr>
-  <tr>
-    <td><strong>Iterations</strong></td>
-    <td>
-      <p>
-        Defines the maximum number of iterations of the outer loop method.
-        In other words, it defines how often the SDCA method is applied to the blocked data.
-        After each iteration, the locally computed weight vector updates have to be reduced to update the global weight vector value.
-        The new weight vector is broadcast to all SDCA tasks at the beginning of each iteration.
-        (Default value: <strong>10</strong>)
-      </p>
-    </td>
-  </tr>
-  <tr>
-    <td><strong>LocalIterations</strong></td>
-    <td>
-      <p>
-        Defines the maximum number of SDCA iterations.
-        In other words, it defines how many data points are drawn from each local data block to calculate the stochastic dual coordinate ascent.
-        (Default value: <strong>10</strong>)
-      </p>
-    </td>
-  </tr>
-  <tr>
-    <td><strong>Regularization</strong></td>
-    <td>
-      <p>
-        Defines the regularization constant of the SVM algorithm.
-        The higher the value, the smaller the 2-norm of the weight vector will be.
-        In the case of an SVM with hinge loss this means that the SVM margin will be wider even though it might contain some false classifications.
-        (Default value: <strong>1.0</strong>)
-      </p>
-    </td>
-  </tr>
-  <tr>
-    <td><strong>Stepsize</strong></td>
-    <td>
-      <p>
-        Defines the initial step size for the updates of the weight vector.
-        The larger the step size, the larger the contribution of the weight vector updates to the next weight vector value will be.
-        The effective scaling of the updates is $$\frac{stepsize}{blocks}$$.
-        This value has to be tuned in case the algorithm becomes unstable.
-        (Default value: <strong>1.0</strong>)
-      </p>
-    </td>
-  </tr>
-  <tr>
-    <td><strong>ThresholdValue</strong></td>
-    <td>
-      <p>
-        Defines the limiting value for the decision function above which examples are labeled as
-        positive (+1.0). Examples with a decision function value below this value are classified
-        as negative (-1.0). To obtain the raw decision function values instead, use the
-        OutputDecisionFunction parameter. (Default value: <strong>0.0</strong>)
-      </p>
-    </td>
-  </tr>
-  <tr>
-    <td><strong>OutputDecisionFunction</strong></td>
-    <td>
-      <p>
-        Determines whether the predict and evaluate functions of the SVM should return the distance
-        to the separating hyperplane, or binary class labels. Setting this to true will
-        return the raw distance to the hyperplane for each example. Setting it to false will
-        return the binary class label (+1.0, -1.0). (Default value: <strong>false</strong>)
-      </p>
-    </td>
-  </tr>
-  <tr>
-  <td><strong>Seed</strong></td>
-  <td>
-    <p>
-      Defines the seed to initialize the random number generator.
-      The seed directly controls which data points are chosen for the SDCA method.
-      (Default value: <strong>Random Long Integer</strong>)
-    </p>
-  </td>
-</tr>
-</tbody>
-</table>
-
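-The parameters are set via fluent setters on the `SVM` instance before calling `fit`. A minimal
-sketch, assuming setter names that mirror the parameter names above (check the FlinkML API docs
-for the exact signatures):
-
-{% highlight scala %}
-val svm = SVM()
-  .setBlocks(10)           // number of data blocks for local SDCA
-  .setIterations(100)      // outer iterations (one network round each)
-  .setLocalIterations(50)  // SDCA iterations per block and outer round
-  .setRegularization(0.001)
-  .setStepsize(0.1)
-  .setSeed(42)
-{% endhighlight %}
-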
-## Examples
-
-{% highlight scala %}
-import org.apache.flink.api.scala._
-import org.apache.flink.ml.math.Vector
-import org.apache.flink.ml.common.LabeledVector
-import org.apache.flink.ml.classification.SVM
-import org.apache.flink.ml.RichExecutionEnvironment
-
-val pathToTrainingFile: String = ???
-val pathToTestingFile: String = ???
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-// Read the training data set, from a LibSVM formatted file
-val trainingDS: DataSet[LabeledVector] = env.readLibSVM(pathToTrainingFile)
-
-// Create the SVM learner
-val svm = SVM()
-  .setBlocks(10)
-
-// Learn the SVM model
-svm.fit(trainingDS)
-
-// Read the testing data set
-val testingDS: DataSet[Vector] = env.readLibSVM(pathToTestingFile).map(_.vector)
-
-// Calculate the predictions for the testing data set
-val predictionDS: DataSet[(Vector, Double)] = svm.predict(testingDS)
-
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/libs/table.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/libs/table.md b/docs/apis/batch/libs/table.md
deleted file mode 100644
index c37b952..0000000
--- a/docs/apis/batch/libs/table.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: "Table API and SQL"
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-
-<meta http-equiv="refresh" content="1; url={{ site.baseurl }}/apis/table.html" />
-
-The *Table API guide* has been moved. Redirecting to [{{ site.baseurl }}/apis/table.html]({{ site.baseurl }}/apis/table.html) in 1 second.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/python.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/python.md b/docs/apis/batch/python.md
deleted file mode 100644
index b5e81c5..0000000
--- a/docs/apis/batch/python.md
+++ /dev/null
@@ -1,638 +0,0 @@
----
-title: "Python Programming Guide"
-is_beta: true
-
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-id: python_api
-sub-nav-pos: 4
-sub-nav-title: Python API
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Analysis programs in Flink are regular programs that implement transformations on data sets
-(e.g., filtering, mapping, joining, grouping). The data sets are initially created from certain
-sources (e.g., by reading files, or from collections). Results are returned via sinks, which may for
-example write the data to (distributed) files, or to standard output (for example the command line
-terminal). Flink programs run in a variety of contexts: standalone, or embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
-
-In order to create your own Flink program, we encourage you to start with the
-[program skeleton](#program-skeleton) and gradually add your own
-[transformations](#transformations). The remaining sections act as references for additional
-operations and advanced features.
-
-* This will be replaced by the TOC
-{:toc}
-
-Example Program
----------------
-
-The following program is a complete, working example of WordCount. You can copy &amp; paste the code
-to run it locally.
-
-{% highlight python %}
-from flink.plan.Environment import get_environment
-from flink.functions.GroupReduceFunction import GroupReduceFunction
-
-class Adder(GroupReduceFunction):
-  def reduce(self, iterator, collector):
-    count, word = iterator.next()
-    count += sum([x[0] for x in iterator])
-    collector.collect((count, word))
-
-env = get_environment()
-data = env.from_elements("Who's there?",
- "I think I hear them. Stand, ho! Who's there?")
-
-data \
-  .flat_map(lambda x, c: [(1, word) for word in x.lower().split()]) \
-  .group_by(1) \
-  .reduce_group(Adder(), combinable=True) \
-  .output()
-
-env.execute(local=True)
-{% endhighlight %}
-
-{% top %}
-
-Program Skeleton
-----------------
-
-As we already saw in the example, Flink programs look like regular Python programs.
-Each program consists of the same basic parts:
-
-1. Obtain an `Environment`,
-2. Load/create the initial data,
-3. Specify transformations on this data,
-4. Specify where to put the results of your computations, and
-5. Execute your program.
-
-We will now give an overview of each of those steps, but please refer to the respective sections for
-more details.
-
-
-The `Environment` is the basis for all Flink programs. You can
-obtain one using the `get_environment()` method from `flink.plan.Environment`:
-
-{% highlight python %}
-get_environment()
-{% endhighlight %}
-
-For specifying data sources the execution environment has several methods
-to read from files. To just read a text file as a sequence of lines, you can use:
-
-{% highlight python %}
-env = get_environment()
-text = env.read_text("file:///path/to/file")
-{% endhighlight %}
-
-This will give you a DataSet on which you can then apply transformations. For
-more information on data sources and input formats, please refer to
-[Data Sources](#data-sources).
-
-Once you have a DataSet you can apply transformations to create a new
-DataSet which you can then write to a file, transform again, or
-combine with other DataSets. You apply transformations by calling
-methods on DataSet with your own custom transformation function. For example,
-a map transformation looks like this:
-
-{% highlight python %}
-data.map(lambda x: x*2)
-{% endhighlight %}
-
-This will create a new DataSet by doubling every value in the original DataSet.
-For more information and a list of all the transformations,
-please refer to [Transformations](#transformations).
-
-Once you have a DataSet that needs to be written to disk you can call one
-of these methods on DataSet:
-
-{% highlight python %}
-data.write_text("<file-path>", write_mode=Constants.NO_OVERWRITE)
-data.write_csv("<file-path>", line_delimiter='\n', field_delimiter=',', write_mode=Constants.NO_OVERWRITE)
-data.output()
-{% endhighlight %}
-
-The last method is only useful for developing/debugging on a local machine;
-it will output the contents of the DataSet to standard output. (Note that in
-a cluster, the result goes to the standard out stream of the cluster nodes and ends
-up in the *.out* files of the workers.)
-The first two do as their names suggest.
-Please refer to [Data Sinks](#data-sinks) for more information on writing to files.
-
-Once you have specified the complete program, you need to call `execute` on
-the `Environment`. This will either execute on your local machine or submit your program
-for execution on a cluster, depending on how Flink was started. You can force
-a local execution by using `execute(local=True)`.
-
-{% top %}
-
-Project setup
----------------
-
-Apart from setting up Flink, no additional work is required. The python package can be found in the /resource folder of your Flink distribution. The flink package, along with the plan and optional packages, is automatically distributed across the cluster via HDFS when running a job.
-
-The Python API was tested on Linux/Windows systems that have Python 2.7 or 3.4 installed.
-
-By default Flink will start python processes by calling "python" or "python3", depending on which start-script
-was used. By setting the "python.binary.python[2/3]" key in the flink-conf.yaml you can modify this behaviour to use a binary of your choice.
-
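-For example, in `flink-conf.yaml` (the binary paths are illustrative):
-
-{% highlight yaml %}
-python.binary.python2: /usr/bin/python2.7
-python.binary.python3: /usr/bin/python3.4
-{% endhighlight %}
-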
-{% top %}
-
-Lazy Evaluation
----------------
-
-All Flink programs are executed lazily: When the program's main method is executed, the data loading
-and transformations do not happen directly. Rather, each operation is created and added to the
-program's plan. The operations are actually executed when one of the `execute()` methods is invoked
-on the Environment object. Whether the program is executed locally or on a cluster depends
-on the environment of the program.
-
-The lazy evaluation lets you construct sophisticated programs that Flink executes as one
-holistically planned unit.
-
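-As a small illustration (using only methods shown above), nothing is read or computed until the
-final `execute()` call:
-
-{% highlight python %}
-env = get_environment()
-data = env.from_elements(1, 2, 3)    # nothing executes yet
-doubled = data.map(lambda x: x * 2)  # only added to the plan
-doubled.output()                     # still only part of the plan
-env.execute(local=True)              # now the whole plan runs
-{% endhighlight %}
-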
-{% top %}
-
-
-Transformations
----------------
-
-Data transformations transform one or more DataSets into a new DataSet. Programs can combine
-multiple transformations into sophisticated assemblies.
-
-This section gives a brief overview of the available transformations. The [transformations
-documentation](dataset_transformations.html) has a full description of all transformations with
-examples.
-
-<br />
-
-<table class="table table-bordered">
-  <thead>
-    <tr>
-      <th class="text-left" style="width: 20%">Transformation</th>
-      <th class="text-center">Description</th>
-    </tr>
-  </thead>
-
-  <tbody>
-    <tr>
-      <td><strong>Map</strong></td>
-      <td>
-        <p>Takes one element and produces one element.</p>
-{% highlight python %}
-data.map(lambda x: x * 2)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>FlatMap</strong></td>
-      <td>
-        <p>Takes one element and produces zero, one, or more elements. </p>
-{% highlight python %}
-data.flat_map(
-  lambda x, c: [(1, word) for word in x.lower().split()])
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>MapPartition</strong></td>
-      <td>
-        <p>Transforms a parallel partition in a single function call. The function gets the partition
-        as an `Iterator` and can produce an arbitrary number of result values. The number of
-        elements in each partition depends on the parallelism and previous operations.</p>
-{% highlight python %}
-data.map_partition(lambda x,c: [value * 2 for value in x])
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Filter</strong></td>
-      <td>
-        <p>Evaluates a boolean function for each element and retains those for which the function
-        returns true.</p>
-{% highlight python %}
-data.filter(lambda x: x > 1000)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Reduce</strong></td>
-      <td>
-        <p>Combines a group of elements into a single element by repeatedly combining two elements
-        into one. Reduce may be applied on a full data set, or on a grouped data set.</p>
-{% highlight python %}
-data.reduce(lambda x,y : x + y)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>ReduceGroup</strong></td>
-      <td>
-        <p>Combines a group of elements into one or more elements. ReduceGroup may be applied on a
-        full data set, or on a grouped data set.</p>
-{% highlight python %}
-class Adder(GroupReduceFunction):
-  def reduce(self, iterator, collector):
-    count, word = iterator.next()
-    count += sum([x[0] for x in iterator])
-    collector.collect((count, word))
-
-data.reduce_group(Adder())
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Aggregate</strong></td>
-      <td>
-        <p>Performs a built-in operation (sum, min, max) on one field of all the Tuples in a
-        data set or in each group of a data set. Aggregation can be applied on a full data set
-        or on a grouped data set.</p>
-{% highlight python %}
-# This code finds the sum of all of the values in the first field and the maximum of all of the values in the second field
-data.aggregate(Aggregation.Sum, 0).and_agg(Aggregation.Max, 1)
-
-# min(), max(), and sum() syntactic sugar functions are also available
-data.sum(0).and_agg(Aggregation.Max, 1)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Join</strong></td>
-      <td>
-        <p>Joins two data sets by creating all pairs of elements that are equal on their keys.
-        Optionally uses a JoinFunction to turn the pair of elements into a single element.
-        See <a href="#specifying-keys">keys</a> on how to define join keys.</p>
-{% highlight python %}
-# In this case tuple fields are used as keys.
-# "0" is the join field on the first tuple
-# "1" is the join field on the second tuple.
-result = input1.join(input2).where(0).equal_to(1)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>CoGroup</strong></td>
-      <td>
-        <p>The two-dimensional variant of the reduce operation. Groups each input on one or more
-        fields and then joins the groups. The transformation function is called per pair of groups.
-        See <a href="#specifying-keys">keys</a> on how to define coGroup keys.</p>
-{% highlight python %}
-data1.co_group(data2).where(0).equal_to(1)
-{% endhighlight %}
-      </td>
-    </tr>
-
-    <tr>
-      <td><strong>Cross</strong></td>
-      <td>
-        <p>Builds the Cartesian product (cross product) of two inputs, creating all pairs of
-        elements. Optionally uses a CrossFunction to turn the pair of elements into a single
-        element.</p>
-{% highlight python %}
-result = data1.cross(data2)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>Union</strong></td>
-      <td>
-        <p>Produces the union of two data sets.</p>
-{% highlight python %}
-data.union(data2)
-{% endhighlight %}
-      </td>
-    </tr>
-    <tr>
-      <td><strong>ZipWithIndex</strong></td>
-      <td>
-        <p>Assigns consecutive indexes to each element. For more information, please refer to
-        the <a href="zip_elements_guide.html#zip-with-a-dense-index">Zip Elements Guide</a>.</p>
-{% highlight python %}
-data.zip_with_index()
-{% endhighlight %}
-      </td>
-    </tr>
-  </tbody>
-</table>
-
-{% top %}
-
-
-Specifying Keys
--------------
-
-Some transformations (like Join or CoGroup) require that a key is defined on
-their argument DataSets, while other transformations (Reduce, GroupReduce) allow the DataSet
-to be grouped on a key before they are applied.
-
-A DataSet is grouped as
-{% highlight python %}
-reduced = data \
-  .group_by(<define key here>) \
-  .reduce_group(<do something>)
-{% endhighlight %}
-
-The data model of Flink is not based on key-value pairs. Therefore,
-you do not need to physically pack the data set types into keys and
-values. Keys are "virtual": they are defined as functions over the
-actual data to guide the grouping operator.
-
-### Define keys for Tuples
-{:.no_toc}
-
-The simplest case is grouping a data set of Tuples on one or more
-fields of the Tuple:
-{% highlight python %}
-reduced = data \
-  .group_by(0) \
-  .reduce_group(<do something>)
-{% endhighlight %}
-
-The data set is grouped on the first field of the tuples.
-The group-reduce function will thus receive groups of tuples with
-the same value in the first field.
-
-{% highlight python %}
-grouped = data \
-  .group_by(0,1) \
-  .reduce(<do something>)
-{% endhighlight %}
-
-The data set is grouped on the composite key consisting of the first and the
-second fields; therefore, the reduce function will receive groups
-with the same value for both fields.
-
-A note on nested Tuples: If you have a DataSet with a nested tuple,
-specifying `group_by(<index of tuple>)` will cause the system to use the full tuple as a key.
-
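-For example (an illustrative sketch), with elements whose first field is itself a tuple,
-`group_by(0)` uses the whole nested tuple as the key:
-
-{% highlight python %}
-data = env.from_elements(((1, 2), "a"), ((1, 3), "b"))
-grouped = data.group_by(0)  # the key is the entire nested tuple, e.g. (1, 2)
-{% endhighlight %}
-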
-{% top %}
-
-
-Passing Functions to Flink
---------------------------
-
-Certain operations require user-defined functions; all of them accept both lambda functions and rich functions as arguments.
-
-{% highlight python %}
-data.filter(lambda x: x > 5)
-{% endhighlight %}
-
-{% highlight python %}
-class Filter(FilterFunction):
-    def filter(self, value):
-        return value > 5
-
-data.filter(Filter())
-{% endhighlight %}
-
-Rich functions allow the use of imported functions, provide access to broadcast variables,
-can be parameterized using `__init__()`, and are the go-to option for complex functions.
-They are also the only way to define an optional `combine` function for a reduce operation.
-
-Lambda functions allow the easy insertion of one-liners. Note that a lambda function has to return
-an iterable if the operation can return multiple values (this applies to all functions receiving a collector argument).
-
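-A minimal sketch of a rich function that also defines the optional combine step (the `Summer`
-class is illustrative, and the `combine(self, iterator, collector)` signature is assumed):
-
-{% highlight python %}
-from flink.functions.GroupReduceFunction import GroupReduceFunction
-
-class Summer(GroupReduceFunction):
-    def reduce(self, iterator, collector):
-        # final reduction over a full group
-        collector.collect(sum([x for x in iterator]))
-
-    def combine(self, iterator, collector):
-        # pre-aggregation over partial groups; same logic here
-        collector.collect(sum([x for x in iterator]))
-
-data.reduce_group(Summer(), combinable=True)
-{% endhighlight %}
-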
-{% top %}
-
-Data Types
-----------
-
-Flink's Python API currently only offers native support for primitive Python types (int, float, bool, string) and byte arrays.
-
-The type support can be extended by passing a serializer, deserializer and type class to the environment.
-{% highlight python %}
-class MyObj(object):
-    def __init__(self, i):
-        self.value = i
-
-
-class MySerializer(object):
-    def serialize(self, value):
-        return struct.pack(">i", value.value)
-
-
-class MyDeserializer(object):
-    def _deserialize(self, read):
-        i = struct.unpack(">i", read(4))[0]
-        return MyObj(i)
-
-
-env.register_custom_type(MyObj, MySerializer(), MyDeserializer())
-{% endhighlight %}
-
-#### Tuples/Lists
-
-You can use tuples (or lists) for composite types. Python tuples are mapped to the Flink Tuple type, which contains
-a fixed number of fields of various types (up to 25). Every field of a tuple can be a primitive type, including further tuples, resulting in nested tuples.
-
-{% highlight python %}
-word_counts = env.from_elements(("hello", 1), ("world",2))
-
-counts = word_counts.map(lambda x: x[1])
-{% endhighlight %}
-
-When working with operators that require a key for grouping or matching records,
-Tuples let you simply specify the positions of the fields to be used as the key. You can specify more
-than one position to use composite keys (see [Section Data Transformations](#transformations)).
-
-{% highlight python %}
-wordCounts \
-    .group_by(0) \
-    .reduce(MyReduceFunction())
-{% endhighlight %}
-
-{% top %}
-
-Data Sources
-------------
-
-Data sources create the initial data sets, such as from files or from collections.
-
-File-based:
-
-- `read_text(path)` - Reads files line-wise and returns them as Strings.
-- `read_csv(path, type)` - Parses files of comma- (or other character-) delimited fields.
-  Returns a DataSet of tuples. Supports the basic Java types and their Value counterparts as field
-  types.
-
-Collection-based:
-
-- `from_elements(*args)` - Creates a data set from a Seq. All elements must be of the same type.
-- `generate_sequence(from, to)` - Generates the sequence of numbers in the given interval, in parallel.
-
-**Examples**
-
-{% highlight python %}
-env = get_environment()
-
-# read text file from local file system
-localLines = env.read_text("file:///path/to/my/textfile")
-
-# read text file from an HDFS running at nnHost:nnPort
-hdfsLines = env.read_text("hdfs://nnHost:nnPort/path/to/my/textfile")
-
-# read a CSV file with three fields, schema defined using constants defined in flink.plan.Constants
-csvInput = env.read_csv("hdfs:///the/CSV/file", (INT, STRING, DOUBLE))
-
-# create a set from some given elements
-values = env.from_elements("Foo", "bar", "foobar", "fubar")
-
-# generate a number sequence
-numbers = env.generate_sequence(1, 10000000)
-{% endhighlight %}
-
-{% top %}
-
-Data Sinks
-----------
-
-Data sinks consume DataSets and are used to store or return them:
-
-- `write_text()` - Writes elements line-wise as Strings. The Strings are
-  obtained by calling the *str()* method of each element.
-- `write_csv(...)` - Writes tuples as comma-separated value files. Row and field
-  delimiters are configurable. The value for each field comes from the *str()* method of the objects.
-- `output()` - Prints the *str()* value of each element on the
-  standard out.
-
-A DataSet can be input to multiple operations. Programs can write or print a data set and at the
-same time run additional transformations on it.
-
-**Examples**
-
-Standard data sink methods:
-
-{% highlight python %}
-# write DataSet to a file on the local file system
-textData.write_text("file:///my/result/on/localFS")
-
-# write DataSet to a file on an HDFS with a namenode running at nnHost:nnPort
-textData.write_text("hdfs://nnHost:nnPort/my/result/on/localFS")
-
-# write DataSet to a file and overwrite the file if it exists
-textData.write_text("file:///my/result/on/localFS", WriteMode.OVERWRITE)
-
-# tuples as lines with pipe as the separator "a|b|c"
-values.write_csv("file:///path/to/the/result/file", line_delimiter="\n", field_delimiter="|")
-
-# this writes tuples in the text formatting "(a, b, c)", rather than as CSV lines
-values.write_text("file:///path/to/the/result/file")
-{% endhighlight %}
-
-{% top %}
-
-Broadcast Variables
--------------------
-
-Broadcast variables allow you to make a data set available to all parallel instances of an
-operation, in addition to the regular input of the operation. This is useful for auxiliary data
-sets, or data-dependent parameterization. The data set will then be accessible at the operator as a
-Collection.
-
-- **Broadcast**: broadcast sets are registered by name via `with_broadcast_set(DataSet, String)`
-- **Access**: accessible via `self.context.get_broadcast_variable(String)` at the target operator
-
-{% highlight python %}
-class MapperBcv(MapFunction):
-    def map(self, value):
-        factor = self.context.get_broadcast_variable("bcv")[0][0]
-        return value * factor
-
-# 1. The DataSet to be broadcasted
-toBroadcast = env.from_elements(1, 2, 3)
-data = env.from_elements("a", "b")
-
-# 2. Broadcast the DataSet
-data.map(MapperBcv()).with_broadcast_set("bcv", toBroadcast) 
-{% endhighlight %}
-
-Make sure that the names (`bcv` in the previous example) match when registering and
-accessing broadcasted data sets.
-
-**Note**: As the content of broadcast variables is kept in-memory on each node, it should not become
-too large. For simpler things like scalar values you can simply parameterize the rich function.
-
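-As a hypothetical sketch of parameterizing a rich function directly instead of broadcasting a
-scalar (the `Multiplier` class is illustrative; the import path is assumed by analogy with the
-imports shown above):
-
-{% highlight python %}
-from flink.functions.MapFunction import MapFunction
-
-class Multiplier(MapFunction):
-    def __init__(self, factor):
-        super(Multiplier, self).__init__()
-        self.factor = factor
-
-    def map(self, value):
-        return value * self.factor
-
-data.map(Multiplier(3))
-{% endhighlight %}
-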
-{% top %}
-
-Parallel Execution
-------------------
-
-This section describes how the parallel execution of programs can be configured in Flink. A Flink
-program consists of multiple tasks (operators, data sources, and sinks). A task is split into
-several parallel instances for execution and each parallel instance processes a subset of the task's
-input data. The number of parallel instances of a task is called its *parallelism* or *degree of
-parallelism (DOP)*.
-
-The degree of parallelism of a task can be specified in Flink on different levels.
-
-### Execution Environment Level
-
-Flink programs are executed in the context of an [execution environment](#program-skeleton). An
-execution environment defines a default parallelism for all operators, data sources, and data sinks
-it executes. Execution environment parallelism can be overwritten by explicitly configuring the
-parallelism of an operator.
-
-The default parallelism of an execution environment can be specified by calling the
-`set_parallelism()` method. To execute all operators, data sources, and data sinks of the
-[WordCount](#example-program) example program with a parallelism of `3`, set the default parallelism of the
-execution environment as follows:
-
-{% highlight python %}
-env = get_environment()
-env.set_parallelism(3)
-
-text.flat_map(lambda x, c: [(1, word) for word in x.lower().split()]) \
-    .group_by(1) \
-    .reduce_group(Adder(), combinable=True) \
-    .output()
-
-env.execute()
-{% endhighlight %}
-
-### System Level
-
-A system-wide default parallelism for all execution environments can be defined by setting the
-`parallelism.default` property in `./conf/flink-conf.yaml`. See the
-[Configuration]({{ site.baseurl }}/setup/config.html) documentation for details.
-
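-For example (the value is illustrative), in `./conf/flink-conf.yaml`:
-
-{% highlight yaml %}
-parallelism.default: 3
-{% endhighlight %}
-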
-{% top %}
-
-Executing Plans
----------------
-
-To run the plan with Flink, go to your Flink distribution and run the pyflink.sh script from the /bin folder.
-Use pyflink2.sh for Python 2.7, and pyflink3.sh for Python 3.4. The script containing the plan has to be passed
-as the first argument, followed by a number of additional Python packages, and finally, separated by `-`, additional
-arguments that will be fed to the script.
-
-{% highlight bash %}
-./bin/pyflink<2/3>.sh <Script>[ <pathToPackage1>[ <pathToPackageX]][ - <param1>[ <paramX>]]
-{% endhighlight %}
-
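-A hypothetical invocation that runs `plan.py` with one extra package and two script arguments:
-
-{% highlight bash %}
-./bin/pyflink3.sh plan.py /path/to/extra_package.py - --input in.txt --output out.txt
-{% endhighlight %}
-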
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/zip_elements_guide.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/zip_elements_guide.md b/docs/apis/batch/zip_elements_guide.md
deleted file mode 100644
index e3e93b5..0000000
--- a/docs/apis/batch/zip_elements_guide.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-title: "Zipping Elements in a DataSet"
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-parent: dataset_api
-sub-nav-pos: 2
-sub-nav-title: Zipping Elements
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-In certain algorithms, one may need to assign unique identifiers to data set elements.
-This document shows how {% gh_link /flink-java/src/main/java/org/apache/flink/api/java/utils/DataSetUtils.java "DataSetUtils" %} can be used for that purpose.
-
-* This will be replaced by the TOC
-{:toc}
-
-### Zip with a Dense Index
-`zipWithIndex` assigns consecutive labels to the elements, receiving a data set as input and returning a new data set of `(unique id, initial value)` 2-tuples.
-This process requires two passes, first counting then labeling elements, and cannot be pipelined due to the synchronization of counts.
-The alternative `zipWithUniqueId` works in a pipelined fashion and is preferred when a unique labeling is sufficient.
-For example, the following code:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-env.setParallelism(2);
-DataSet<String> in = env.fromElements("A", "B", "C", "D", "E", "F", "G", "H");
-
-DataSet<Tuple2<Long, String>> result = DataSetUtils.zipWithIndex(in);
-
-result.writeAsCsv(resultPath, "\n", ",");
-env.execute();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-env.setParallelism(2)
-val input: DataSet[String] = env.fromElements("A", "B", "C", "D", "E", "F", "G", "H")
-
-val result: DataSet[(Long, String)] = input.zipWithIndex
-
-result.writeAsCsv(resultPath, "\n", ",")
-env.execute()
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight python %}
-from flink.plan.Environment import get_environment
-
-env = get_environment()
-env.set_parallelism(2)
-input = env.from_elements("A", "B", "C", "D", "E", "F", "G", "H")
-
-result = input.zip_with_index()
-
-result.write_text(result_path)
-env.execute()
-{% endhighlight %}
-</div>
-
-</div>
-
-may yield the tuples: (0,G), (1,H), (2,A), (3,B), (4,C), (5,D), (6,E), (7,F)
-
-[Back to top](#top)
-
-### Zip with a Unique Identifier
-In many cases one may not need to assign consecutive labels.
-`zipWithUniqueId` works in a pipelined fashion, speeding up the label assignment process. This method receives a data set as input and returns a new data set of `(unique id, initial value)` 2-tuples.
-For example, the following code:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-env.setParallelism(2);
-DataSet<String> in = env.fromElements("A", "B", "C", "D", "E", "F", "G", "H");
-
-DataSet<Tuple2<Long, String>> result = DataSetUtils.zipWithUniqueId(in);
-
-result.writeAsCsv(resultPath, "\n", ",");
-env.execute();
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-import org.apache.flink.api.scala._
-
-val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
-env.setParallelism(2)
-val input: DataSet[String] = env.fromElements("A", "B", "C", "D", "E", "F", "G", "H")
-
-val result: DataSet[(Long, String)] = input.zipWithUniqueId
-
-result.writeAsCsv(resultPath, "\n", ",")
-env.execute()
-{% endhighlight %}
-</div>
-
-</div>
-
-may yield the tuples: (0,G), (1,A), (2,H), (3,B), (5,C), (7,D), (9,E), (11,F)
-
-[Back to top](#top)

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/best_practices.md
----------------------------------------------------------------------
diff --git a/docs/apis/best_practices.md b/docs/apis/best_practices.md
deleted file mode 100644
index 7ae1b64..0000000
--- a/docs/apis/best_practices.md
+++ /dev/null
@@ -1,403 +0,0 @@
----
-title: "Best Practices"
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 5
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-This page contains a collection of best practices for Flink programmers on how to solve frequently encountered problems.
-
-
-* This will be replaced by the TOC
-{:toc}
-
-## Parsing command line arguments and passing them around in your Flink application
-
-
-Almost all Flink applications, both batch and streaming, rely on external configuration parameters.
-They are used, for example, to specify input and output sources (like paths or addresses), system parameters (parallelism, runtime configuration), and application-specific parameters (often used within the user functions).
-
-Since version 0.9, Flink provides a simple utility called `ParameterTool` that offers at least some basic tooling for solving these problems.
-
-Please note that you don't have to use the `ParameterTool` explained here. Other frameworks such as [Commons CLI](https://commons.apache.org/proper/commons-cli/),
-[argparse4j](http://argparse4j.sourceforge.net/) and others work well with Flink as well.
-
-
-### Getting your configuration values into the `ParameterTool`
-
-The `ParameterTool` provides a set of predefined static methods for reading the configuration. The tool internally expects a `Map<String, String>`, so it's very easy to integrate it with your own configuration style.
-
-
-#### From `.properties` files
-
-The following method will read a [Properties](https://docs.oracle.com/javase/tutorial/essential/environment/properties.html) file and provide the key/value pairs:
-{% highlight java %}
-String propertiesFile = "/home/sam/flink/myjob.properties";
-ParameterTool parameter = ParameterTool.fromPropertiesFile(propertiesFile);
-{% endhighlight %}
-
-
-#### From the command line arguments
-
-This allows getting arguments like `--input hdfs:///mydata --elements 42` from the command line.
-{% highlight java %}
-public static void main(String[] args) {
-	ParameterTool parameter = ParameterTool.fromArgs(args);
-	// .. regular code ..
-{% endhighlight %}
-
-
-#### From system properties
-
-When starting a JVM, you can pass system properties to it: `-Dinput=hdfs:///mydata`. You can also initialize the `ParameterTool` from these system properties:
-
-{% highlight java %}
-ParameterTool parameter = ParameterTool.fromSystemProperties();
-{% endhighlight %}
-
-
-### Using the parameters in your Flink program
-
-Now that we've got the parameters from somewhere (see above) we can use them in various ways.
-
-**Directly from the `ParameterTool`**
-
-The `ParameterTool` itself has methods for accessing the values.
-{% highlight java %}
-ParameterTool parameters = // ...
-parameters.getRequired("input");
-parameters.get("output", "myDefaultValue");
-parameters.getLong("expectedCount", -1L);
-parameters.getNumberOfParameters();
-// .. there are more methods available.
-{% endhighlight %}
-
-You can use the return values of these methods directly in the main() method (i.e., the client submitting the application).
-For example, you could set the parallelism of an operator like this:
-
-{% highlight java %}
-ParameterTool parameters = ParameterTool.fromArgs(args);
-int parallelism = parameters.getInt("mapParallelism", 2);
-DataSet<Tuple2<String, Integer>> counts = text.flatMap(new Tokenizer()).setParallelism(parallelism);
-{% endhighlight %}
-
-Since the `ParameterTool` is serializable, you can pass it to the functions themselves:
-
-{% highlight java %}
-ParameterTool parameters = ParameterTool.fromArgs(args);
-DataSet<Tuple2<String, Integer>> counts = text.flatMap(new Tokenizer(parameters));
-{% endhighlight %}
-
-and then use them inside the function for getting values from the command line.
-
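-A possible sketch (the constructor and field are illustrative, not part of the original text):
-
-{% highlight java %}
-public static final class Tokenizer extends RichFlatMapFunction<String, Tuple2<String, Integer>> {
-	private final ParameterTool parameters;
-
-	public Tokenizer(ParameterTool parameters) {
-		this.parameters = parameters;
-	}
-
-	@Override
-	public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
-		long expected = parameters.getLong("expectedCount", -1L);
-		// ... tokenize and emit, using the parameter value as needed ...
-	}
-}
-{% endhighlight %}
-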
-
-#### Passing it as a `Configuration` object to single functions
-
-The example below shows how to pass the parameters as a `Configuration` object to a user defined function.
-
-{% highlight java %}
-ParameterTool parameters = ParameterTool.fromArgs(args);
-DataSet<Tuple2<String, Integer>> counts = text.flatMap(new Tokenizer()).withParameters(parameters.getConfiguration());
-{% endhighlight %}
-
-In the `Tokenizer`, the object is now accessible in the `open(Configuration conf)` method:
-
-{% highlight java %}
-public static final class Tokenizer extends RichFlatMapFunction<String, Tuple2<String, Integer>> {
-	@Override
-	public void open(Configuration parameters) throws Exception {
-		parameters.getInteger("myInt", -1);
-		// .. do
-{% endhighlight %}
-
-
-#### Register the parameters globally
-
-Parameters registered as a [global job parameter](programming_guide.html#passing-parameters-to-functions) at the `ExecutionConfig` allow you to access the configuration values from the JobManager web interface and from all functions defined by the user.
-
-**Register the parameters globally**
-
-{% highlight java %}
-ParameterTool parameters = ParameterTool.fromArgs(args);
-
-// set up the execution environment
-final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-env.getConfig().setGlobalJobParameters(parameters);
-{% endhighlight %}
-
-Access them in any rich user function:
-
-{% highlight java %}
-public static final class Tokenizer extends RichFlatMapFunction<String, Tuple2<String, Integer>> {
-
-	@Override
-	public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
-		ParameterTool parameters = (ParameterTool) getRuntimeContext().getExecutionConfig().getGlobalJobParameters();
-		parameters.getRequired("input");
-		// .. do more ..
-{% endhighlight %}
-
-
-## Naming large TupleX types
-
-It is recommended to use POJOs (plain old Java objects) instead of `TupleX` for data types with many fields.
-Also, POJOs can be used to give large `Tuple`-types a name.
-
-**Example**
-
-Instead of using:
-
-
-~~~java
-Tuple11<String, String, ..., String> var = new ...;
-~~~
-
-
-It is much easier to create a custom type extending from the large Tuple type.
-
-~~~java
-CustomType var = new ...;
-
-public static class CustomType extends Tuple11<String, String, ..., String> {
-    // constructor matching super
-}
-~~~
-
-
-## Register a custom serializer for your Flink program
-
-If you use a custom type in your Flink program which cannot be serialized by the
-Flink type serializer, Flink falls back to using the generic Kryo
-serializer. You may register your own serializer or a serialization system like
-Google Protobuf or Apache Thrift with Kryo. To do that, simply register the type
-class and the serializer in the `ExecutionConfig` of your Flink program.
-
-
-{% highlight java %}
-final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// register the class of the serializer as serializer for a type
-env.getConfig().registerTypeWithKryoSerializer(MyCustomType.class, MyCustomSerializer.class);
-
-// register an instance as serializer for a type
-MySerializer mySerializer = new MySerializer();
-env.getConfig().registerTypeWithKryoSerializer(MyCustomType.class, mySerializer);
-{% endhighlight %}
-
-Note that your custom serializer has to extend Kryo's Serializer class. In the
-case of Google Protobuf or Apache Thrift, this has already been done for
-you:
-
-{% highlight java %}
-
-final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// register the Google Protobuf serializer with Kryo
-env.getConfig().registerTypeWithKryoSerializer(MyCustomType.class, ProtobufSerializer.class);
-
-// register the serializer included with Apache Thrift as the standard serializer
-// TBaseSerializer states it should be initialized as a default Kryo serializer
-env.getConfig().addDefaultKryoSerializer(MyCustomType.class, TBaseSerializer.class);
-
-{% endhighlight %}
-
-For the above example to work, you need to include the necessary dependencies in
-your Maven project file (pom.xml). In the dependency section, add the following
-for Apache Thrift:
-
-{% highlight xml %}
-
-<dependency>
-	<groupId>com.twitter</groupId>
-	<artifactId>chill-thrift</artifactId>
-	<version>0.5.2</version>
-</dependency>
-<!-- libthrift is required by chill-thrift -->
-<dependency>
-	<groupId>org.apache.thrift</groupId>
-	<artifactId>libthrift</artifactId>
-	<version>0.6.1</version>
-	<exclusions>
-		<exclusion>
-			<groupId>javax.servlet</groupId>
-			<artifactId>servlet-api</artifactId>
-		</exclusion>
-		<exclusion>
-			<groupId>org.apache.httpcomponents</groupId>
-			<artifactId>httpclient</artifactId>
-		</exclusion>
-	</exclusions>
-</dependency>
-
-{% endhighlight %}
-
-For Google Protobuf you need the following Maven dependency:
-
-{% highlight xml %}
-
-<dependency>
-	<groupId>com.twitter</groupId>
-	<artifactId>chill-protobuf</artifactId>
-	<version>0.5.2</version>
-</dependency>
-<!-- We need protobuf for chill-protobuf -->
-<dependency>
-	<groupId>com.google.protobuf</groupId>
-	<artifactId>protobuf-java</artifactId>
-	<version>2.5.0</version>
-</dependency>
-
-{% endhighlight %}
-
-
-Please adjust the versions of both libraries as needed.
-
-
-## Using Logback instead of Log4j
-
-**Note: This tutorial is applicable starting from Flink 0.10**
-
-Apache Flink uses [slf4j](http://www.slf4j.org/) as its logging abstraction. Users are advised to use slf4j in their user functions as well.
-
-Slf4j is a compile-time logging interface that can use different logging implementations at runtime, such as [log4j](http://logging.apache.org/log4j/2.x/) or [Logback](http://logback.qos.ch/).
-
-Flink depends on Log4j by default. This page describes how to use Flink with Logback instead. Users have reported that they were also able to set up centralized logging with Graylog using this tutorial.
-
-To get a logger instance in your code, use the following:
-
-
-{% highlight java %}
-import org.apache.flink.api.common.functions.MapFunction;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class MyClass implements MapFunction {
-	private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
-	// ...
-{% endhighlight %}
-
-
-### Use Logback when running Flink out of the IDE / from a Java application
-
-
-In all cases where classes are executed with a classpath created by a dependency manager such as Maven, Flink will pull log4j into the classpath.
-
-Therefore, you will need to exclude log4j from Flink's dependencies. The following description assumes a Maven project created from a [Flink quickstart](../quickstart/java_api_quickstart.html).
-
-Change your project's `pom.xml` file like this:
-
-{% highlight xml %}
-<dependencies>
-	<!-- Add the two required logback dependencies -->
-	<dependency>
-		<groupId>ch.qos.logback</groupId>
-		<artifactId>logback-core</artifactId>
-		<version>1.1.3</version>
-	</dependency>
-	<dependency>
-		<groupId>ch.qos.logback</groupId>
-		<artifactId>logback-classic</artifactId>
-		<version>1.1.3</version>
-	</dependency>
-
-	<!-- Add the log4j -> slf4j (-> logback) bridge to the classpath;
-	 Hadoop logs directly to log4j! -->
-	<dependency>
-		<groupId>org.slf4j</groupId>
-		<artifactId>log4j-over-slf4j</artifactId>
-		<version>1.7.7</version>
-	</dependency>
-
-	<dependency>
-		<groupId>org.apache.flink</groupId>
-		<artifactId>flink-java</artifactId>
-		<version>{{ site.version }}</version>
-		<exclusions>
-			<exclusion>
-				<groupId>log4j</groupId>
-				<artifactId>*</artifactId>
-			</exclusion>
-			<exclusion>
-				<groupId>org.slf4j</groupId>
-				<artifactId>slf4j-log4j12</artifactId>
-			</exclusion>
-		</exclusions>
-	</dependency>
-	<dependency>
-		<groupId>org.apache.flink</groupId>
-		<artifactId>flink-streaming-java{{ site.scala_version_suffix }}</artifactId>
-		<version>{{ site.version }}</version>
-		<exclusions>
-			<exclusion>
-				<groupId>log4j</groupId>
-				<artifactId>*</artifactId>
-			</exclusion>
-			<exclusion>
-				<groupId>org.slf4j</groupId>
-				<artifactId>slf4j-log4j12</artifactId>
-			</exclusion>
-		</exclusions>
-	</dependency>
-	<dependency>
-		<groupId>org.apache.flink</groupId>
-		<artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
-		<version>{{ site.version }}</version>
-		<exclusions>
-			<exclusion>
-				<groupId>log4j</groupId>
-				<artifactId>*</artifactId>
-			</exclusion>
-			<exclusion>
-				<groupId>org.slf4j</groupId>
-				<artifactId>slf4j-log4j12</artifactId>
-			</exclusion>
-		</exclusions>
-	</dependency>
-</dependencies>
-{% endhighlight %}
-
-The following changes were made in the `<dependencies>` section:
-
- * Exclude all `log4j` dependencies from all Flink dependencies: this causes Maven to ignore Flink's transitive log4j dependencies.
- * Exclude the `slf4j-log4j12` artifact from Flink's dependencies: since we are going to use the slf4j-to-logback binding, we have to remove the slf4j-to-log4j binding.
- * Add the Logback dependencies: `logback-core` and `logback-classic`.
- * Add a dependency on `log4j-over-slf4j`, a bridge that allows legacy applications calling the Log4j APIs directly to use the slf4j interface. Flink depends on Hadoop, which uses Log4j directly, so all logger calls from Log4j need to be redirected to slf4j, which in turn logs via Logback.
-
-Please note that you need to manually add the exclusions to every new Flink dependency you add to the pom file.
-
-You may also need to check whether other (non-Flink) dependencies are pulling in log4j bindings. You can analyze the dependencies of your project with `mvn dependency:tree`.
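-
-With the dependencies in place, Logback looks for a `logback.xml` file on the classpath (for
-example in `src/main/resources`). The exact configuration is up to you; a minimal sketch that
-logs everything at `INFO` level to the console could look like this:
-
-{% highlight xml %}
-<configuration>
-	<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
-		<encoder>
-			<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{60} - %msg%n</pattern>
-		</encoder>
-	</appender>
-
-	<root level="INFO">
-		<appender-ref ref="STDOUT"/>
-	</root>
-</configuration>
-{% endhighlight %}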
-
-
-
-### Use Logback when running Flink on a cluster
-
-This tutorial is applicable when running Flink on YARN or as a standalone cluster.
-
-In order to use Logback instead of Log4j with Flink, you need to remove the `log4j-1.2.xx.jar` and `slf4j-log4j12-xxx.jar` from the `lib/` directory.
-
-Next, you need to put the following jar files into the `lib/` folder:
-
- * `logback-classic.jar`
- * `logback-core.jar`
- * `log4j-over-slf4j.jar`: This bridge needs to be present in the classpath for redirecting logging calls from Hadoop (which uses Log4j) to slf4j.
-
-Note that you need to explicitly set the `lib/` directory when using a per-job YARN cluster.
-
-The command to submit Flink on YARN with a custom logger is: `./bin/flink run -yt $FLINK_HOME/lib <... remaining arguments ...>`

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/cli.md
----------------------------------------------------------------------
diff --git a/docs/apis/cli.md b/docs/apis/cli.md
deleted file mode 100644
index c272413..0000000
--- a/docs/apis/cli.md
+++ /dev/null
@@ -1,322 +0,0 @@
----
-title:  "Command-Line Interface"
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 5
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Flink provides a command-line interface to run programs that are packaged
-as JAR files, and control their execution.  The command line interface is part
-of any Flink setup, available in local single-node setups and in
-distributed setups. It is located under `<flink-home>/bin/flink`
-and connects by default to the running Flink master (JobManager) that was
-started from the same installation directory.
-
-A prerequisite to using the command line interface is that the Flink
-master (JobManager) has been started (via
-`<flink-home>/bin/start-local.sh` or
-`<flink-home>/bin/start-cluster.sh`) or that a YARN environment is
-available.
-
-The command line can be used to
-
-- submit jobs for execution,
-- cancel a running job,
-- provide information about a job, and
-- list running and waiting jobs.
-
-* This will be replaced by the TOC
-{:toc}
-
-## Examples
-
--   Run example program with no arguments.
-
-        ./bin/flink run ./examples/batch/WordCount.jar
-
--   Run example program with arguments for input and result files
-
-        ./bin/flink run ./examples/batch/WordCount.jar \
-                               file:///home/user/hamlet.txt file:///home/user/wordcount_out
-
--   Run example program with parallelism 16 and arguments for input and result files
-
-        ./bin/flink run -p 16 ./examples/batch/WordCount.jar \
-                                file:///home/user/hamlet.txt file:///home/user/wordcount_out
-
--   Run example program with flink log output disabled
-
-            ./bin/flink run -q ./examples/batch/WordCount.jar
-
--   Run example program in detached mode
-
-            ./bin/flink run -d ./examples/batch/WordCount.jar
-
--   Run example program on a specific JobManager:
-
-        ./bin/flink run -m myJMHost:6123 \
-                               ./examples/batch/WordCount.jar \
-                               file:///home/user/hamlet.txt file:///home/user/wordcount_out
-
--   Run example program with a specific class as an entry point:
-
-        ./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
-                               ./examples/batch/WordCount.jar \
-                               file:///home/user/hamlet.txt file:///home/user/wordcount_out
-
--   Run example program using a [per-job YARN cluster]({{site.baseurl}}/setup/yarn_setup.html#run-a-single-flink-job-on-hadoop-yarn) with 2 TaskManagers:
-
-        ./bin/flink run -m yarn-cluster -yn 2 \
-                               ./examples/batch/WordCount.jar \
-                               hdfs:///user/hamlet.txt hdfs:///user/wordcount_out
-
--   Display the optimized execution plan for the WordCount example program as JSON:
-
-        ./bin/flink info ./examples/batch/WordCount.jar \
-                                file:///home/user/hamlet.txt file:///home/user/wordcount_out
-
--   List scheduled and running jobs (including their JobIDs):
-
-        ./bin/flink list
-
--   List scheduled jobs (including their JobIDs):
-
-        ./bin/flink list -s
-
--   List running jobs (including their JobIDs):
-
-        ./bin/flink list -r
-
--   List running Flink jobs inside Flink YARN session:
-
-        ./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r
-
--   Cancel a job:
-
-        ./bin/flink cancel <jobID>
-
--   Stop a job (streaming jobs only):
-
-        ./bin/flink stop <jobID>
-
-
-The difference between cancelling and stopping a (streaming) job is the following:
-
-On a cancel call, the operators in a job immediately receive a `cancel()` method call to cancel them as
-soon as possible.
-If the operators do not stop after the cancel call, Flink will interrupt the thread periodically
-until it stops.
-
-A "stop" call is a more graceful way of stopping a running streaming job. Stop is only available for jobs
-which use sources that implement the `StoppableFunction` interface. When the user requests to stop a job,
-all sources will receive a `stop()` method call. The job will keep running until all sources properly shut down.
-This allows the job to finish processing all in-flight data.
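-
-For illustration, a minimal sketch of a stoppable source is shown below (the class name and the
-counting logic are made up for this example):
-
-{% highlight java %}
-import org.apache.flink.api.common.functions.StoppableFunction;
-import org.apache.flink.streaming.api.functions.source.SourceFunction;
-
-public class StoppableCounterSource implements SourceFunction<Long>, StoppableFunction {
-
-	private volatile boolean isRunning = true;
-
-	@Override
-	public void run(SourceContext<Long> ctx) throws Exception {
-		long counter = 0;
-		while (isRunning) {
-			synchronized (ctx.getCheckpointLock()) {
-				ctx.collect(counter++);
-			}
-		}
-	}
-
-	@Override
-	public void cancel() {
-		// hard cancellation: exit the run loop as soon as possible
-		isRunning = false;
-	}
-
-	@Override
-	public void stop() {
-		// graceful stop: also exits the loop, but the job keeps running
-		// until all sources have shut down, so in-flight data is processed
-		isRunning = false;
-	}
-}
-{% endhighlight %}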
-
-### Savepoints
-
-[Savepoints]({{site.baseurl}}/apis/streaming/savepoints.html) are controlled via the command line client:
-
-#### Trigger a savepoint
-
-{% highlight bash %}
-./bin/flink savepoint <jobID>
-{% endhighlight %}
-
-Returns the path of the created savepoint. You need this path to restore and dispose savepoints.
-
-#### Restore a savepoint
-
-{% highlight bash %}
-./bin/flink run -s <savepointPath> ...
-{% endhighlight %}
-
-The run command has a savepoint flag to submit a job that restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.
-
-#### Dispose a savepoint
-
-{% highlight bash %}
-./bin/flink savepoint -d <savepointPath>
-{% endhighlight %}
-
-Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.
-
-If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:
-
-{% highlight bash %}
-./bin/flink savepoint -d <savepointPath> -j <jarFile>
-{% endhighlight %}
-
-Otherwise, you will run into a `ClassNotFoundException`.
-
-## Usage
-
-The command line syntax is as follows:
-
-~~~
-./flink <ACTION> [OPTIONS] [ARGUMENTS]
-
-The following actions are available:
-
-Action "run" compiles and runs a program.
-
-  Syntax: run [OPTIONS] <jar-file> <arguments>
-  "run" action options:
-     -c,--class <classname>               Class with the program entry point
-                                          ("main" method or "getPlan()" method.
-                                          Only needed if the JAR file does not
-                                          specify the class in its manifest.
-     -C,--classpath <url>                 Adds a URL to each user code
-                                          classloader  on all nodes in the
-                                          cluster. The paths must specify a
-                                          protocol (e.g. file://) and be
-                                          accessible on all nodes (e.g. by means
-                                          of a NFS share). You can use this
-                                          option multiple times for specifying
-                                          more than one URL. The protocol must
-                                          be supported by the {@link
-                                          java.net.URLClassLoader}.
-     -d,--detached                        If present, runs the job in detached
-                                          mode
-     -m,--jobmanager <host:port>          Address of the JobManager (master) to
-                                          which to connect. Specify
-                                          'yarn-cluster' as the JobManager to
-                                          deploy a YARN cluster for the job. Use
-                                          this flag to connect to a different
-                                          JobManager than the one specified in
-                                          the configuration.
-     -p,--parallelism <parallelism>       The parallelism with which to run the
-                                          program. Optional flag to override the
-                                          default value specified in the
-                                          configuration.
-     -q,--sysoutLogging                   If present, suppress logging output to
-                                          standard out.
-     -s,--fromSavepoint <savepointPath>   Path to a savepoint to reset the job
-                                          back to (for example
-                                          file:///flink/savepoint-1537).
-  Additional arguments if -m yarn-cluster is set:
-     -yD <arg>                            Dynamic properties
-     -yd,--yarndetached                   Start detached
-     -yj,--yarnjar <arg>                  Path to Flink jar file
-     -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container [in
-                                          MB]
-     -yn,--yarncontainer <arg>            Number of YARN containers to allocate
-                                          (=Number of Task Managers)
-     -ynm,--yarnname <arg>                Set a custom name for the application
-                                          on YARN
-     -yq,--yarnquery                      Display available YARN resources
-                                          (memory, cores)
-     -yqu,--yarnqueue <arg>               Specify YARN queue.
-     -ys,--yarnslots <arg>                Number of slots per TaskManager
-     -yst,--yarnstreaming                 Start Flink in streaming mode
-     -yt,--yarnship <arg>                 Ship files in the specified directory
-                                          (t for transfer)
-     -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container [in
-                                          MB]
-
-
-Action "info" shows the optimized execution plan of the program (JSON).
-
-  Syntax: info [OPTIONS] <jar-file> <arguments>
-  "info" action options:
-     -c,--class <classname>           Class with the program entry point ("main"
-                                      method or "getPlan()" method). Only needed
-                                      if the JAR file does not specify the class
-                                      in its manifest.
-     -m,--jobmanager <host:port>      Address of the JobManager (master) to
-                                      which to connect. Specify 'yarn-cluster'
-                                      as the JobManager to deploy a YARN cluster
-                                      for the job. Use this flag to connect to a
-                                      different JobManager than the one
-                                      specified in the configuration.
-     -p,--parallelism <parallelism>   The parallelism with which to run the
-                                      program. Optional flag to override the
-                                      default value specified in the
-                                      configuration.
-
-
-Action "list" lists running and scheduled programs.
-
-  Syntax: list [OPTIONS]
-  "list" action options:
-     -m,--jobmanager <host:port>   Address of the JobManager (master) to which
-                                   to connect. Specify 'yarn-cluster' as the
-                                   JobManager to deploy a YARN cluster for the
-                                   job. Use this flag to connect to a different
-                                   JobManager than the one specified in the
-                                   configuration.
-     -r,--running                  Show only running programs and their JobIDs
-     -s,--scheduled                Show only scheduled programs and their JobIDs
-  Additional arguments if -m yarn-cluster is set:
-     -yid <yarnApplicationId>      YARN application ID of Flink YARN session to
-                                   connect to. Must not be set if JobManager HA
-                                   is used. In this case, JobManager RPC
-                                   location is automatically retrieved from
-                                   Zookeeper.
-
-
-Action "cancel" cancels a running program.
-
-  Syntax: cancel [OPTIONS] <Job ID>
-  "cancel" action options:
-     -m,--jobmanager <host:port>   Address of the JobManager (master) to which
-                                   to connect. Specify 'yarn-cluster' as the
-                                   JobManager to deploy a YARN cluster for the
-                                   job. Use this flag to connect to a different
-                                   JobManager than the one specified in the
-                                   configuration.
-  Additional arguments if -m yarn-cluster is set:
-     -yid <yarnApplicationId>      YARN application ID of Flink YARN session to
-                                   connect to. Must not be set if JobManager HA
-                                   is used. In this case, JobManager RPC
-                                   location is automatically retrieved from
-                                   Zookeeper.
-
-
-Action "stop" stops a running program (streaming jobs only). There are no strong consistency
-guarantees for a stop request.
-
-  Syntax: stop [OPTIONS] <Job ID>
-  "stop" action options:
-     -m,--jobmanager <host:port>   Address of the JobManager (master) to which
-                                   to connect. Use this flag to connect to a
-                                   different JobManager than the one specified
-                                   in the configuration.
-  Additional arguments if -m yarn-cluster is set:
-     -yid <yarnApplicationId>      YARN application ID of Flink YARN session to
-                                   connect to. Must not be set if JobManager HA
-                                   is used. In this case, JobManager RPC
-                                   location is automatically retrieved from
-                                   Zookeeper.
-
-
-Action "savepoint" triggers savepoints for a running job or disposes existing ones.
-
- Syntax: savepoint [OPTIONS] <Job ID>
- "savepoint" action options:
-    -d,--dispose <arg>            Path of savepoint to dispose.
-    -j,--jarfile <jarfile>        Flink program JAR file.
-    -m,--jobmanager <host:port>   Address of the JobManager (master) to which
-                                  to connect. Use this flag to connect to a
-                                  different JobManager than the one specified
-                                  in the configuration.
- Options for yarn-cluster mode:
-    -yid,--yarnapplicationId <arg>   Attach to running YARN session
-~~~

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/cluster_execution.md
----------------------------------------------------------------------
diff --git a/docs/apis/cluster_execution.md b/docs/apis/cluster_execution.md
deleted file mode 100644
index 79501db..0000000
--- a/docs/apis/cluster_execution.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-title:  "Cluster Execution"
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 8
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* This will be replaced by the TOC
-{:toc}
-
-Flink programs can run distributed on clusters of many machines. There
-are two ways to send a program to a cluster for execution:
-
-## Command Line Interface
-
-The command line interface lets you submit packaged programs (JARs) to a cluster
-(or single machine setup).
-
-Please refer to the [Command Line Interface](cli.html) documentation for
-details.
-
-## Remote Environment
-
-The remote environment lets you execute Flink Java programs on a cluster
-directly. The remote environment points to the cluster on which you want to
-execute the program.
-
-### Maven Dependency
-
-If you are developing your program as a Maven project, you have to add the
-`flink-clients` module using this dependency:
-
-~~~xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
-  <version>{{ site.version }}</version>
-</dependency>
-~~~
-
-### Example
-
-The following illustrates the use of the `RemoteEnvironment`:
-
-~~~java
-public static void main(String[] args) throws Exception {
-    ExecutionEnvironment env = ExecutionEnvironment
-        .createRemoteEnvironment("flink-master", 6123, "/home/user/udfs.jar");
-
-    DataSet<String> data = env.readTextFile("hdfs://path/to/file");
-
-    data
-        .filter(new FilterFunction<String>() {
-            public boolean filter(String value) {
-                return value.startsWith("http://");
-            }
-        })
-        .writeAsText("hdfs://path/to/result");
-
-    env.execute();
-}
-~~~
-
-Note that the program contains custom user code and hence requires a JAR file with
-the classes of the code attached. The constructor of the remote environment
-takes the path(s) to the JAR file(s).
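-
-Since `createRemoteEnvironment` accepts a variable number of JAR paths, multiple files can be
-attached as well (the paths below are illustrative):
-
-~~~java
-ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
-    "flink-master", 6123, "/home/user/udfs.jar", "/home/user/more-udfs.jar");
-~~~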
-
-## Linking with modules not contained in the binary distribution
-
-The binary distribution contains jar packages in the `lib` folder that are automatically
-provided to the classpath of your distributed programs. Almost all Flink classes are
-located there, with a few exceptions such as the streaming connectors and some freshly
-added modules. To run code that depends on these modules, you need to make them accessible
-at runtime, for which we suggest two options:
-
-1. Either copy the required jar files to the `lib` folder onto all of your TaskManagers.
-Note that you have to restart your TaskManagers after this.
-2. Or package them with your code.
-
-The latter approach is recommended as it respects Flink's classloader management.
-
-### Packaging dependencies with your user code with Maven
-
-To provide dependencies that are not included in Flink's distribution, we suggest two options with Maven.
-
-1. The maven-assembly-plugin builds a so-called uber-jar (executable jar) containing all your dependencies.
-The assembly configuration is straightforward, but the resulting jar might become bulky.
-See [maven-assembly-plugin](http://maven.apache.org/plugins/maven-assembly-plugin/usage.html) for further information.
-2. The unpack goal of the maven-dependency-plugin unpacks the relevant parts of the dependencies and
-then packages them with your code.
-
-To bundle the Kafka connector `flink-connector-kafka` using the latter approach,
-you would need to add the classes from both the connector and the Kafka API itself. Add
-the following to your plugins section.
-
-~~~xml
-<plugin>
-    <groupId>org.apache.maven.plugins</groupId>
-    <artifactId>maven-dependency-plugin</artifactId>
-    <version>2.9</version>
-    <executions>
-        <execution>
-            <id>unpack</id>
-            <!-- executed just before the package phase -->
-            <phase>prepare-package</phase>
-            <goals>
-                <goal>unpack</goal>
-            </goals>
-            <configuration>
-                <artifactItems>
-                    <!-- For Flink connector classes -->
-                    <artifactItem>
-                        <groupId>org.apache.flink</groupId>
-                        <artifactId>flink-connector-kafka</artifactId>
-                        <version>{{ site.version }}</version>
-                        <type>jar</type>
-                        <overWrite>false</overWrite>
-                        <outputDirectory>${project.build.directory}/classes</outputDirectory>
-                        <includes>org/apache/flink/**</includes>
-                    </artifactItem>
-                    <!-- For Kafka API classes -->
-                    <artifactItem>
-                        <groupId>org.apache.kafka</groupId>
-                        <artifactId>kafka_<YOUR_SCALA_VERSION></artifactId>
-                        <version><YOUR_KAFKA_VERSION></version>
-                        <type>jar</type>
-                        <overWrite>false</overWrite>
-                        <outputDirectory>${project.build.directory}/classes</outputDirectory>
-                        <includes>kafka/**</includes>
-                    </artifactItem>
-                </artifactItems>
-            </configuration>
-        </execution>
-    </executions>
-</plugin>
-~~~
-
-Now, when running `mvn clean package`, the produced jar includes the required dependencies.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/common/fig/plan_visualizer.png
----------------------------------------------------------------------
diff --git a/docs/apis/common/fig/plan_visualizer.png b/docs/apis/common/fig/plan_visualizer.png
deleted file mode 100644
index 85b8c55..0000000
Binary files a/docs/apis/common/fig/plan_visualizer.png and /dev/null differ


[85/89] [abbrv] flink git commit: [FLINK-4382] [rpc] Buffer rpc calls until the RpcEndpoint has been started

Posted by se...@apache.org.
[FLINK-4382] [rpc] Buffer rpc calls until the RpcEndpoint has been started

This PR allows the AkkaRpcActor to stash messages until the corresponding RpcEndpoint
has been started. When receiving a Processing.START message, the AkkaRpcActor
unstashes all messages and starts processing rpcs. When receiving a Processing.STOP
message, it will stop processing messages and stash incoming messages again.

Add test case for message stashing

This closes #2358.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/84bd3759
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/84bd3759
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/84bd3759

Branch: refs/heads/flip-6
Commit: 84bd3759efbd760d2d07b4b55ab9a3d8af36ab82
Parents: baf4a61
Author: Till Rohrmann <tr...@apache.org>
Authored: Thu Aug 11 18:13:25 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:04 2016 +0200

----------------------------------------------------------------------
 .../apache/flink/runtime/rpc/RpcEndpoint.java   |  15 ++-
 .../flink/runtime/rpc/StartStoppable.java       |  35 ++++++
 .../runtime/rpc/akka/AkkaInvocationHandler.java |  21 +++-
 .../flink/runtime/rpc/akka/AkkaRpcActor.java    |  39 ++++++-
 .../flink/runtime/rpc/akka/AkkaRpcService.java  |   8 +-
 .../runtime/rpc/akka/messages/Processing.java   |  27 +++++
 .../flink/runtime/rpc/RpcCompletenessTest.java  |  45 +++++++-
 .../runtime/rpc/akka/AkkaRpcActorTest.java      | 108 +++++++++++++++++++
 .../runtime/rpc/akka/AkkaRpcServiceTest.java    |   3 +
 .../flink/runtime/rpc/akka/AsyncCallsTest.java  |   5 +-
 .../rpc/akka/MainThreadValidationTest.java      |   4 +-
 .../rpc/akka/MessageSerializationTest.java      |   4 +
 .../rpc/taskexecutor/TaskExecutorTest.java      |  18 ++++
 13 files changed, 315 insertions(+), 17 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
index d36a283..67ac182 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
@@ -74,7 +74,7 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 
 	/** The main thread execution context to be used to execute future callbacks in the main thread
 	 * of the executing rpc server. */
-	private final MainThreadExecutionContext mainThreadExecutionContext;
+	private final ExecutionContext mainThreadExecutionContext;
 
 	/** A reference to the endpoint's main thread, if the current method is called by the main thread */
 	final AtomicReference<Thread> currentMainThread = new AtomicReference<>(null); 
@@ -106,10 +106,21 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	}
 	
 	// ------------------------------------------------------------------------
-	//  Shutdown
+	//  Start & Shutdown
 	// ------------------------------------------------------------------------
 
 	/**
+	 * Starts the rpc endpoint. This tells the underlying rpc server that the rpc endpoint is ready
+	 * to process remote procedure calls.
+	 *
+	 * IMPORTANT: Whenever you override this method, call the parent implementation to enable
+	 * rpc processing. It is advised to make the parent call last.
+	 */
+	public void start() {
+		((StartStoppable) self).start();
+	}
+
+	/**
 	 * Shuts down the underlying RPC endpoint via the RPC service.
 	 * After this method was called, the RPC endpoint will no longer be reachable, neither remotely,
 	 * nor via its {@link #getSelf() self gateway}. It will also not accept executions in main thread

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/StartStoppable.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/StartStoppable.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/StartStoppable.java
new file mode 100644
index 0000000..dd5595f
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/StartStoppable.java
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+/**
+ * Interface to start and stop the processing of rpc calls in the rpc server.
+ */
+public interface StartStoppable {
+
+	/**
+	 * Starts the processing of remote procedure calls.
+	 */
+	void start();
+
+	/**
+	 * Stops the processing of remote procedure calls.
+	 */
+	void stop();
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
index 297104b..524bf74 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
@@ -24,8 +24,10 @@ import akka.util.Timeout;
 import org.apache.flink.api.java.tuple.Tuple2;
 import org.apache.flink.runtime.rpc.MainThreadExecutor;
 import org.apache.flink.runtime.rpc.RpcTimeout;
+import org.apache.flink.runtime.rpc.StartStoppable;
 import org.apache.flink.runtime.rpc.akka.messages.CallAsync;
 import org.apache.flink.runtime.rpc.akka.messages.LocalRpcInvocation;
+import org.apache.flink.runtime.rpc.akka.messages.Processing;
 import org.apache.flink.runtime.rpc.akka.messages.RemoteRpcInvocation;
 import org.apache.flink.runtime.rpc.akka.messages.RpcInvocation;
 import org.apache.flink.runtime.rpc.akka.messages.RunAsync;
@@ -50,7 +52,7 @@ import static org.apache.flink.util.Preconditions.checkArgument;
  * rpc in a {@link LocalRpcInvocation} message and then sends it to the {@link AkkaRpcActor} where it is
  * executed.
  */
-class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThreadExecutor {
+class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThreadExecutor, StartStoppable {
 	private static final Logger LOG = Logger.getLogger(AkkaInvocationHandler.class);
 
 	private final ActorRef rpcEndpoint;
@@ -76,7 +78,8 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 
 		Object result;
 
-		if (declaringClass.equals(AkkaGateway.class) || declaringClass.equals(MainThreadExecutor.class) || declaringClass.equals(Object.class)) {
+		if (declaringClass.equals(AkkaGateway.class) || declaringClass.equals(MainThreadExecutor.class) ||
+			declaringClass.equals(Object.class) || declaringClass.equals(StartStoppable.class)) {
 			result = method.invoke(this, args);
 		} else {
 			String methodName = method.getName();
@@ -171,6 +174,20 @@ class AkkaInvocationHandler implements InvocationHandler, AkkaGateway, MainThrea
 		}
 	}
 
+	@Override
+	public void start() {
+		rpcEndpoint.tell(Processing.START, ActorRef.noSender());
+	}
+
+	@Override
+	public void stop() {
+		rpcEndpoint.tell(Processing.STOP, ActorRef.noSender());
+	}
+
+	// ------------------------------------------------------------------------
+	//  Helper methods
+	// ------------------------------------------------------------------------
+
 	/**
 	 * Extracts the {@link RpcTimeout} annotated rpc timeout value from the list of given method
 	 * arguments. If no {@link RpcTimeout} annotated parameter could be found, then the default

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
index dfcbcc3..2373be9 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActor.java
@@ -20,13 +20,15 @@ package org.apache.flink.runtime.rpc.akka;
 
 import akka.actor.ActorRef;
 import akka.actor.Status;
-import akka.actor.UntypedActor;
+import akka.actor.UntypedActorWithStash;
+import akka.japi.Procedure;
 import akka.pattern.Patterns;
 import org.apache.flink.runtime.rpc.MainThreadValidatorUtil;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.akka.messages.CallAsync;
 import org.apache.flink.runtime.rpc.akka.messages.LocalRpcInvocation;
+import org.apache.flink.runtime.rpc.akka.messages.Processing;
 import org.apache.flink.runtime.rpc.akka.messages.RpcInvocation;
 import org.apache.flink.runtime.rpc.akka.messages.RunAsync;
 
@@ -45,18 +47,23 @@ import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
  * Akka rpc actor which receives {@link LocalRpcInvocation}, {@link RunAsync} and {@link CallAsync}
- * messages.
+ * and {@link Processing} messages.
  * <p>
  * The {@link LocalRpcInvocation} designates a rpc and is dispatched to the given {@link RpcEndpoint}
  * instance.
  * <p>
  * The {@link RunAsync} and {@link CallAsync} messages contain executable code which is executed
  * in the context of the actor thread.
+ * <p>
+ * The {@link Processing} message controls the processing behaviour of the akka rpc actor. A
+ * {@link Processing#START} message unstashes all stashed messages and starts processing incoming
+ * messages. A {@link Processing#STOP} message stops processing messages and stashes incoming
+ * messages.
  *
  * @param <C> Type of the {@link RpcGateway} associated with the {@link RpcEndpoint}
  * @param <T> Type of the {@link RpcEndpoint}
  */
-class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends UntypedActor {
+class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends UntypedActorWithStash {
 	
 	private static final Logger LOG = LoggerFactory.getLogger(AkkaRpcActor.class);
 
@@ -73,6 +80,27 @@ class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends Untyp
 
 	@Override
 	public void onReceive(final Object message) {
+		if (message.equals(Processing.START)) {
+			unstashAll();
+			getContext().become(new Procedure<Object>() {
+				@Override
+				public void apply(Object message) throws Exception {
+					if (message.equals(Processing.STOP)) {
+						getContext().unbecome();
+					} else {
+						handleMessage(message);
+					}
+				}
+			});
+		} else {
+			LOG.info("The rpc endpoint {} has not been started yet. Stashing message {} until processing is started.",
+				rpcEndpoint.getClass().getName(),
+				message.getClass().getName());
+			stash();
+		}
+	}
+
+	private void handleMessage(Object message) {
 		mainThreadValidator.enterMainThread();
 		try {
 			if (message instanceof RunAsync) {
@@ -82,7 +110,10 @@ class AkkaRpcActor<C extends RpcGateway, T extends RpcEndpoint<C>> extends Untyp
 			} else if (message instanceof RpcInvocation) {
 				handleRpcInvocation((RpcInvocation) message);
 			} else {
-				LOG.warn("Received message of unknown type {}. Dropping this message!", message.getClass());
+				LOG.warn(
+					"Received message of unknown type {} with value {}. Dropping this message!",
+					message.getClass().getName(),
+					message);
 			}
 		} finally {
 			mainThreadValidator.exitMainThread();

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
index b963c53..7b33524 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -34,7 +34,7 @@ import org.apache.flink.runtime.rpc.MainThreadExecutor;
 import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcService;
-
+import org.apache.flink.runtime.rpc.StartStoppable;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -136,7 +136,11 @@ public class AkkaRpcService implements RpcService {
 		@SuppressWarnings("unchecked")
 		C self = (C) Proxy.newProxyInstance(
 			classLoader,
-			new Class<?>[]{rpcEndpoint.getSelfGatewayType(), MainThreadExecutor.class, AkkaGateway.class},
+			new Class<?>[]{
+				rpcEndpoint.getSelfGatewayType(),
+				MainThreadExecutor.class,
+				StartStoppable.class,
+				AkkaGateway.class},
 			akkaInvocationHandler);
 
 		return self;

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/Processing.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/Processing.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/Processing.java
new file mode 100644
index 0000000..5c7df5d
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/Processing.java
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka.messages;
+
+/**
+ * Controls the processing behaviour of the {@link org.apache.flink.runtime.rpc.akka.AkkaRpcActor}
+ */
+public enum Processing {
+	START, // Unstashes all stashed messages and starts processing incoming messages
+	STOP // Stops processing messages and stashes all incoming messages
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
index e50533e..97cf0cb 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
@@ -18,6 +18,8 @@
 
 package org.apache.flink.runtime.rpc;
 
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.util.ReflectionUtil;
 import org.apache.flink.util.TestLogger;
 import org.junit.Test;
@@ -140,7 +142,7 @@ public class RpcCompletenessTest extends TestLogger {
 		int rpcTimeoutParameters = 0;
 
 		for (int i = 0; i < parameterAnnotations.length; i++) {
-			if (isRpcTimeout(parameterAnnotations[i])) {
+			if (RpcCompletenessTest.isRpcTimeout(parameterAnnotations[i])) {
 				assertTrue(
 					"The rpc timeout has to be of type " + FiniteDuration.class.getName() + ".",
 					parameterTypes[i].equals(FiniteDuration.class));
@@ -185,7 +187,7 @@ public class RpcCompletenessTest extends TestLogger {
 
 		// filter out the RpcTimeout parameters
 		for (int i = 0; i < gatewayParameterTypes.length; i++) {
-			if (!isRpcTimeout(gatewayParameterAnnotations[i])) {
+			if (!RpcCompletenessTest.isRpcTimeout(gatewayParameterAnnotations[i])) {
 				filteredGatewayParameterTypes.add(gatewayParameterTypes[i]);
 			}
 		}
@@ -235,7 +237,22 @@ public class RpcCompletenessTest extends TestLogger {
 	}
 
 	private boolean checkType(Class<?> firstType, Class<?> secondType) {
-		return firstType.equals(secondType);
+		Class<?> firstResolvedType;
+		Class<?> secondResolvedType;
+
+		if (firstType.isPrimitive()) {
+			firstResolvedType = RpcCompletenessTest.resolvePrimitiveType(firstType);
+		} else {
+			firstResolvedType = firstType;
+		}
+
+		if (secondType.isPrimitive()) {
+			secondResolvedType = RpcCompletenessTest.resolvePrimitiveType(secondType);
+		} else {
+			secondResolvedType = secondType;
+		}
+
+		return firstResolvedType.equals(secondResolvedType);
 	}
 
 	/**
@@ -279,7 +296,7 @@ public class RpcCompletenessTest extends TestLogger {
 
 		for (int i = 0; i < parameterTypes.length; i++) {
 			// filter out the RpcTimeout parameters
-			if (!isRpcTimeout(parameterAnnotations[i])) {
+			if (!RpcCompletenessTest.isRpcTimeout(parameterAnnotations[i])) {
 				builder.append(parameterTypes[i].getName());
 
 				if (i < parameterTypes.length -1) {
@@ -293,7 +310,7 @@ public class RpcCompletenessTest extends TestLogger {
 		return builder.toString();
 	}
 
-	private boolean isRpcTimeout(Annotation[] annotations) {
+	private static boolean isRpcTimeout(Annotation[] annotations) {
 		for (Annotation annotation : annotations) {
 			if (annotation.annotationType().equals(RpcTimeout.class)) {
 				return true;
@@ -302,4 +319,22 @@ public class RpcCompletenessTest extends TestLogger {
 
 		return false;
 	}
+
+	/**
+	 * Returns the boxed type for a primitive type.
+	 *
+	 * @param primitiveType Primitive type to resolve
+	 * @return Boxed type for the given primitive type
+	 */
+	private static Class<?> resolvePrimitiveType(Class<?> primitiveType) {
+		assert primitiveType.isPrimitive();
+
+		TypeInformation<?> typeInformation = BasicTypeInfo.getInfoFor(primitiveType);
+
+		if (typeInformation != null) {
+			return typeInformation.getTypeClass();
+		} else {
+			throw new RuntimeException("Could not retrieve basic type information for primitive type " + primitiveType + '.');
+		}
+		}
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java
new file mode 100644
index 0000000..1653fac
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.ActorSystem;
+import akka.util.Timeout;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.RpcMethod;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.util.TestLogger;
+import org.hamcrest.core.Is;
+import org.junit.AfterClass;
+import org.junit.Test;
+import scala.concurrent.Await;
+import scala.concurrent.Future;
+
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertThat;
+
+public class AkkaRpcActorTest extends TestLogger {
+
+	// ------------------------------------------------------------------------
+	//  shared test members
+	// ------------------------------------------------------------------------
+
+	private static ActorSystem actorSystem = AkkaUtils.createDefaultActorSystem();
+
+	private static Timeout timeout = new Timeout(10000, TimeUnit.MILLISECONDS);
+
+	private static AkkaRpcService akkaRpcService =
+		new AkkaRpcService(actorSystem, timeout);
+
+	@AfterClass
+	public static void shutdown() {
+		akkaRpcService.stopService();
+		actorSystem.shutdown();
+		actorSystem.awaitTermination();
+	}
+
+	/**
+	 * Tests that the {@link AkkaRpcActor} stashes messages until the corresponding
+	 * {@link RpcEndpoint} has been started.
+	 */
+	@Test
+	public void testMessageStashing() throws Exception {
+		int expectedValue = 1337;
+
+		DummyRpcEndpoint rpcEndpoint = new DummyRpcEndpoint(akkaRpcService);
+
+		DummyRpcGateway rpcGateway = rpcEndpoint.getSelf();
+
+		// this message should not be processed until we've started the rpc endpoint
+		Future<Integer> result = rpcGateway.foobar();
+
+		// set a new value which we expect to be returned
+		rpcEndpoint.setFoobar(expectedValue);
+
+		// now process the rpc
+		rpcEndpoint.start();
+
+		Integer actualValue = Await.result(result, timeout.duration());
+
+		assertThat("The new foobar value should have been returned.", actualValue, Is.is(expectedValue));
+
+		rpcEndpoint.shutDown();
+	}
+
+	private interface DummyRpcGateway extends RpcGateway {
+		Future<Integer> foobar();
+	}
+
+	private static class DummyRpcEndpoint extends RpcEndpoint<DummyRpcGateway> {
+
+		private volatile int _foobar = 42;
+
+		protected DummyRpcEndpoint(RpcService rpcService) {
+			super(rpcService);
+		}
+
+		@RpcMethod
+		public int foobar() {
+			return _foobar;
+		}
+
+		public void setFoobar(int value) {
+			_foobar = value;
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
index f26b40b..fd55904 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
@@ -57,6 +57,9 @@ public class AkkaRpcServiceTest extends TestLogger {
 		ResourceManager resourceManager = new ResourceManager(akkaRpcService, executorService);
 		JobMaster jobMaster = new JobMaster(akkaRpcService2, executorService);
 
+		resourceManager.start();
+		jobMaster.start();
+
 		ResourceManagerGateway rm = resourceManager.getSelf();
 
 		assertTrue(rm instanceof AkkaGateway);

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java
index f2ce52d..d33987c 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AsyncCallsTest.java
@@ -28,6 +28,7 @@ import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.RpcMethod;
 import org.apache.flink.runtime.rpc.RpcService;
 
+import org.apache.flink.util.TestLogger;
 import org.junit.AfterClass;
 import org.junit.Test;
 
@@ -42,7 +43,7 @@ import java.util.concurrent.locks.ReentrantLock;
 
 import static org.junit.Assert.*;
 
-public class AsyncCallsTest {
+public class AsyncCallsTest extends TestLogger {
 
 	// ------------------------------------------------------------------------
 	//  shared test members
@@ -72,6 +73,7 @@ public class AsyncCallsTest {
 		final AtomicBoolean concurrentAccess = new AtomicBoolean(false);
 
 		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService, lock);
+		testEndpoint.start();
 		TestGateway gateway = testEndpoint.getSelf();
 
 		// a bunch of gateway calls
@@ -127,6 +129,7 @@ public class AsyncCallsTest {
 		final long delay = 200;
 
 		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService, lock);
+		testEndpoint.start();
 
 		// run something asynchronously
 		testEndpoint.runAsync(new Runnable() {

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java
index b854143..9ffafda 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MainThreadValidationTest.java
@@ -27,13 +27,14 @@ import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.RpcMethod;
 import org.apache.flink.runtime.rpc.RpcService;
 
+import org.apache.flink.util.TestLogger;
 import org.junit.Test;
 
 import java.util.concurrent.TimeUnit;
 
 import static org.junit.Assert.assertTrue;
 
-public class MainThreadValidationTest {
+public class MainThreadValidationTest extends TestLogger {
 
 	@Test
 	public void failIfNotInMainThread() {
@@ -51,6 +52,7 @@ public class MainThreadValidationTest {
 
 		try {
 			TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService);
+			testEndpoint.start();
 
 			// this works, because it is executed as an RPC call
 			testEndpoint.getSelf().someConcurrencyCriticalFunction();

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java
index ca8179c..9d2ed99 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/MessageSerializationTest.java
@@ -86,6 +86,7 @@ public class MessageSerializationTest extends TestLogger {
 	public void testNonSerializableLocalMessageTransfer() throws InterruptedException, IOException {
 		LinkedBlockingQueue<Object> linkedBlockingQueue = new LinkedBlockingQueue<>();
 		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService1, linkedBlockingQueue);
+		testEndpoint.start();
 
 		TestGateway testGateway = testEndpoint.getSelf();
 
@@ -106,6 +107,7 @@ public class MessageSerializationTest extends TestLogger {
 		LinkedBlockingQueue<Object> linkedBlockingQueue = new LinkedBlockingQueue<>();
 
 		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService1, linkedBlockingQueue);
+		testEndpoint.start();
 
 		String address = testEndpoint.getAddress();
 
@@ -126,6 +128,7 @@ public class MessageSerializationTest extends TestLogger {
 		LinkedBlockingQueue<Object> linkedBlockingQueue = new LinkedBlockingQueue<>();
 
 		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService1, linkedBlockingQueue);
+		testEndpoint.start();
 
 		String address = testEndpoint.getAddress();
 
@@ -149,6 +152,7 @@ public class MessageSerializationTest extends TestLogger {
 		LinkedBlockingQueue<Object> linkedBlockingQueue = new LinkedBlockingQueue<>();
 
 		TestEndpoint testEndpoint = new TestEndpoint(akkaRpcService1, linkedBlockingQueue);
+		testEndpoint.start();
 
 		String address = testEndpoint.getAddress();
 

http://git-wip-us.apache.org/repos/asf/flink/blob/84bd3759/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
index 33c9cb6..c96f4f6 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
@@ -28,17 +28,26 @@ import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
 import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
 import org.apache.flink.runtime.jobgraph.JobVertexID;
 import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.MainThreadExecutor;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.StartStoppable;
 import org.apache.flink.runtime.util.DirectExecutorService;
 import org.apache.flink.util.SerializedValue;
 import org.apache.flink.util.TestLogger;
 import org.junit.Test;
+import org.mockito.Matchers;
+import org.mockito.cglib.proxy.InvocationHandler;
+import org.mockito.cglib.proxy.Proxy;
+import scala.concurrent.Future;
 
 import java.net.URL;
 import java.util.Collections;
 
 import static org.junit.Assert.fail;
 import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
 
 public class TaskExecutorTest extends TestLogger {
 
@@ -48,8 +57,13 @@ public class TaskExecutorTest extends TestLogger {
 	@Test
 	public void testTaskExecution() throws Exception {
 		RpcService testingRpcService = mock(RpcService.class);
+		InvocationHandler invocationHandler = mock(InvocationHandler.class);
+		Object selfGateway = Proxy.newProxyInstance(ClassLoader.getSystemClassLoader(), new Class<?>[] {TaskExecutorGateway.class, MainThreadExecutor.class, StartStoppable.class}, invocationHandler);
+		when(testingRpcService.startServer(Matchers.any(RpcEndpoint.class))).thenReturn((RpcGateway)selfGateway);
+
 		DirectExecutorService directExecutorService = new DirectExecutorService();
 		TaskExecutor taskExecutor = new TaskExecutor(testingRpcService, directExecutorService);
+		taskExecutor.start();
 
 		TaskDeploymentDescriptor tdd = new TaskDeploymentDescriptor(
 			new JobID(),
@@ -82,8 +96,12 @@ public class TaskExecutorTest extends TestLogger {
 	@Test(expected=Exception.class)
 	public void testWrongTaskCancellation() throws Exception {
 		RpcService testingRpcService = mock(RpcService.class);
+		InvocationHandler invocationHandler = mock(InvocationHandler.class);
+		Object selfGateway = Proxy.newProxyInstance(ClassLoader.getSystemClassLoader(), new Class<?>[] {TaskExecutorGateway.class, MainThreadExecutor.class, StartStoppable.class}, invocationHandler);
+		when(testingRpcService.startServer(Matchers.any(RpcEndpoint.class))).thenReturn((RpcGateway)selfGateway);
 		DirectExecutorService directExecutorService = null;
 		TaskExecutor taskExecutor = new TaskExecutor(testingRpcService, directExecutorService);
+		taskExecutor.start();
 
 		taskExecutor.cancelTask(new ExecutionAttemptID());
 


[12/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/storm_compatibility.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/storm_compatibility.md b/docs/dev/libs/storm_compatibility.md
new file mode 100644
index 0000000..89d7706
--- /dev/null
+++ b/docs/dev/libs/storm_compatibility.md
@@ -0,0 +1,287 @@
+---
+title: "Storm Compatibility"
+is_beta: true
+nav-parent_id: libs
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+[Flink streaming]({{ site.baseurl }}/dev/datastream_api.html) is compatible with Apache Storm interfaces and therefore allows
+reusing code that was implemented for Storm.
+
+You can:
+
+- execute a whole Storm `Topology` in Flink.
+- use Storm `Spout`/`Bolt` as source/operator in Flink streaming programs.
+
+This document shows how to use existing Storm code with Flink.
+
+* This will be replaced by the TOC
+{:toc}
+
+# Project Configuration
+
+Support for Storm is contained in the `flink-storm` Maven module.
+The code resides in the `org.apache.flink.storm` package.
+
+Add the following dependency to your `pom.xml` if you want to execute Storm code in Flink.
+
+~~~xml
+<dependency>
+	<groupId>org.apache.flink</groupId>
+	<artifactId>flink-storm{{ site.scala_version_suffix }}</artifactId>
+	<version>{{site.version}}</version>
+</dependency>
+~~~
+
+**Please note**: Do not add `storm-core` as a dependency. It is already included via `flink-storm`.
+
+**Please note**: `flink-storm` is not part of the provided binary Flink distribution.
+Thus, you need to include `flink-storm` classes (and their dependencies) in your program jar (also called uber-jar or fat-jar) that is submitted to Flink's JobManager.
+See *WordCount Storm* within `flink-storm-examples/pom.xml` for an example of how to package a jar correctly.
+
+If you want to avoid large uber-jars, you can manually copy `storm-core-0.9.4.jar`, `json-simple-1.1.jar` and `flink-storm-{{site.version}}.jar` into the `lib/` folder of each cluster node (*before* the cluster is started).
+In this case, it is sufficient to include only your own Spout and Bolt classes (and their internal dependencies) in the program jar.
+
+# Execute Storm Topologies
+
+Flink provides a Storm compatible API (`org.apache.flink.storm.api`) that offers replacements for the following classes:
+
+- `StormSubmitter` replaced by `FlinkSubmitter`
+- `NimbusClient` and `Client` replaced by `FlinkClient`
+- `LocalCluster` replaced by `FlinkLocalCluster`
+
+In order to submit a Storm topology to Flink, it is sufficient to replace the used Storm classes with their Flink replacements in the Storm *client code that assembles* the topology.
+The actual runtime code, i.e., Spouts and Bolts, can be used *unmodified*.
+If a topology is executed in a remote cluster, the parameters `nimbus.host` and `nimbus.thrift.port` are used as `jobmanager.rpc.address` and `jobmanager.rpc.port`, respectively.  If a parameter is not specified, the value is taken from `flink-conf.yaml`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+~~~java
+TopologyBuilder builder = new TopologyBuilder(); // the Storm topology builder
+
+// actual topology assembling code and used Spouts/Bolts can be used as-is
+builder.setSpout("source", new FileSpout(inputFilePath));
+builder.setBolt("tokenizer", new BoltTokenizer()).shuffleGrouping("source");
+builder.setBolt("counter", new BoltCounter()).fieldsGrouping("tokenizer", new Fields("word"));
+builder.setBolt("sink", new BoltFileSink(outputFilePath)).shuffleGrouping("counter");
+
+Config conf = new Config();
+if(runLocal) { // submit to test cluster
+	// replaces: LocalCluster cluster = new LocalCluster();
+	FlinkLocalCluster cluster = new FlinkLocalCluster();
+	cluster.submitTopology("WordCount", conf, FlinkTopology.createTopology(builder));
+} else { // submit to remote cluster
+	// optional
+	// conf.put(Config.NIMBUS_HOST, "remoteHost");
+	// conf.put(Config.NIMBUS_THRIFT_PORT, 6123);
+	// replaces: StormSubmitter.submitTopology(topologyId, conf, builder.createTopology());
+	FlinkSubmitter.submitTopology("WordCount", conf, FlinkTopology.createTopology(builder));
+}
+~~~
+</div>
+</div>
+
+# Embed Storm Operators in Flink Streaming Programs
+
+As an alternative, Spouts and Bolts can be embedded into regular streaming programs.
+The Storm compatibility layer offers wrapper classes for each, namely `SpoutWrapper` and `BoltWrapper` (`org.apache.flink.storm.wrappers`).
+
+By default, both wrappers convert Storm output tuples to Flink's [Tuple]({{site.baseurl}}/dev/api_concepts.html#tuples-and-case-classes) types (i.e., `Tuple0` to `Tuple25`, according to the number of fields of the Storm tuples).
+For single-field output tuples, a conversion to the field's data type is also possible (e.g., `String` instead of `Tuple1<String>`).
+
+Because Flink cannot infer the output field types of Storm operators, it is required to specify the output type manually.
+In order to get the correct `TypeInformation` object, Flink's `TypeExtractor` can be used.
+
+## Embed Spouts
+
+In order to use a Spout as a Flink source, use `StreamExecutionEnvironment.addSource(SourceFunction, TypeInformation)`.
+The Spout object is handed to the constructor of `SpoutWrapper<OUT>`, which serves as the first argument to `addSource(...)`.
+The generic type declaration `OUT` specifies the type of the source output stream.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+~~~java
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// stream has `raw` type (single field output streams only)
+DataStream<String> rawInput = env.addSource(
+	new SpoutWrapper<String>(new FileSpout(localFilePath), new String[] { Utils.DEFAULT_STREAM_ID }), // emit default output stream as raw type
+	TypeExtractor.getForClass(String.class)); // output type
+
+// process data stream
+[...]
+~~~
+</div>
+</div>
+
+If a Spout emits a finite number of tuples, `SpoutWrapper` can be configured to terminate automatically by setting the `numberOfInvocations` parameter in its constructor.
+This allows the Flink program to shut down automatically after all data is processed.
+By default, the program will run until it is [canceled]({{site.baseurl}}/setup/cli.html) manually.
+
+
+## Embed Bolts
+
+In order to use a Bolt as a Flink operator, use `DataStream.transform(String, TypeInformation, OneInputStreamOperator)`.
+The Bolt object is handed to the constructor of `BoltWrapper<IN,OUT>`, which serves as the last argument to `transform(...)`.
+The generic type declarations `IN` and `OUT` specify the type of the operator's input and output stream, respectively.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+~~~java
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+DataStream<String> text = env.readTextFile(localFilePath);
+
+DataStream<Tuple2<String, Integer>> counts = text.transform(
+	"tokenizer", // operator name
+	TypeExtractor.getForObject(new Tuple2<String, Integer>("", 0)), // output type
+	new BoltWrapper<String, Tuple2<String, Integer>>(new BoltTokenizer())); // Bolt operator
+
+// do further processing
+[...]
+~~~
+</div>
+</div>
+
+### Named Attribute Access for Embedded Bolts
+
+Bolts can access input tuple fields by name (in addition to access by index).
+To use this feature with embedded Bolts, you need to have either a
+
+ 1. [POJO]({{site.baseurl}}/dev/api_concepts.html#pojos) type input stream or
+ 2. [Tuple]({{site.baseurl}}/dev/api_concepts.html#tuples-and-case-classes) type input stream and specify the input schema (i.e. name-to-index-mapping)
+
+For POJO input types, Flink accesses the fields via reflection.
+For this case, Flink expects either a corresponding public member variable or public getter method.
+For example, if a Bolt accesses a field via the name `sentence` (e.g., `String s = input.getStringByField("sentence");`), the input POJO class must have a member variable `public String sentence;` or a method `public String getSentence() { ... }` (pay attention to camel-case naming).
+
+For `Tuple` input types, it is required to specify the input schema using Storm's `Fields` class.
+For this case, the constructor of `BoltWrapper` takes an additional argument: `new BoltWrapper<Tuple1<String>, ...>(..., new Fields("sentence"))`.
+The input type is `Tuple1<String>`, and `Fields("sentence")` specifies that `input.getStringByField("sentence")` is equivalent to `input.getString(0)`.
+
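+A minimal sketch of wiring this up (here, `SentenceSplitterBolt` is a hypothetical Bolt that reads its input via `input.getStringByField("sentence")`; the stream types are illustrative):
+
+~~~java
+DataStream<Tuple1<String>> sentences = ...; // one-field input tuples
+
+DataStream<Tuple2<String, Integer>> tokens = sentences.transform(
+	"splitter", // operator name
+	TypeExtractor.getForObject(new Tuple2<String, Integer>("", 0)), // output type
+	// the Fields schema maps the name "sentence" to tuple index 0
+	new BoltWrapper<Tuple1<String>, Tuple2<String, Integer>>(
+		new SentenceSplitterBolt(), new Fields("sentence")));
+~~~
+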
+See [BoltTokenizerWordCountPojo](https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/wordcount/BoltTokenizerWordCountPojo.java) and [BoltTokenizerWordCountWithNames](https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/wordcount/BoltTokenizerWordCountWithNames.java) for examples.
+
+## Configuring Spouts and Bolts
+
+In Storm, Spouts and Bolts can be configured with a globally distributed `Map` object that is given to the `submitTopology(...)` method of `LocalCluster` or `StormSubmitter`.
+This `Map` is provided by the user next to the topology and gets forwarded as a parameter to the calls `Spout.open(...)` and `Bolt.prepare(...)`.
+If a whole topology is executed in Flink using `FlinkTopologyBuilder` etc., there is no special attention required &ndash; it works as in regular Storm.
+
+For embedded usage, Flink's configuration mechanism must be used.
+A global configuration can be set in a `StreamExecutionEnvironment` via `.getConfig().setGlobalJobParameters(...)`.
+Flink's regular `Configuration` class can be used to configure Spouts and Bolts.
+However, `Configuration` does not support arbitrary key data types as Storm does (only `String` keys are allowed).
+Thus, Flink additionally provides the `StormConfig` class, which can be used like a raw `Map` to provide full compatibility with Storm.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+~~~java
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+StormConfig config = new StormConfig();
+// set config values
+[...]
+
+// set global Storm configuration
+env.getConfig().setGlobalJobParameters(config);
+
+// assemble program with embedded Spouts and/or Bolts
+[...]
+~~~
+</div>
+</div>
+
+## Multiple Output Streams
+
+Flink can also handle the declaration of multiple output streams for Spouts and Bolts.
+If a whole topology is executed in Flink using `FlinkTopologyBuilder` etc., there is no special attention required &ndash; it works as in regular Storm.
+
+For embedded usage, the output stream will be of data type `SplitStreamType<T>` and must be split by using `DataStream.split(...)` and `SplitStream.select(...)`.
+Flink provides the predefined output selector `StormStreamSelector<T>` for `.split(...)` already.
+Furthermore, the wrapper type `SplitStreamType<T>` can be removed using `SplitStreamMapper<T>`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+~~~java
+[...]
+
+// get DataStream from Spout or Bolt which declares two output streams s1 and s2 with output type SomeType
+DataStream<SplitStreamType<SomeType>> multiStream = ...
+
+SplitStream<SplitStreamType<SomeType>> splitStream = multiStream.split(new StormStreamSelector<SomeType>());
+
+// remove SplitStreamType using SplitStreamMapper to get data stream of type SomeType
+DataStream<SomeType> s1 = splitStream.select("s1").map(new SplitStreamMapper<SomeType>()).returns(SomeType.class);
+DataStream<SomeType> s2 = splitStream.select("s2").map(new SplitStreamMapper<SomeType>()).returns(SomeType.class);
+
+// do further processing on s1 and s2
+[...]
+~~~
+</div>
+</div>
+
+See [SpoutSplitExample.java](https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/split/SpoutSplitExample.java) for a full example.
+
+# Flink Extensions
+
+## Finite Spouts
+
+In Flink, streaming sources can be finite, i.e., emit a finite number of records and stop after emitting the last record. However, Spouts usually emit infinite streams.
+The bridge between the two approaches is the `FiniteSpout` interface which, in addition to `IRichSpout`, contains a `reachedEnd()` method in which the user can specify a stopping condition.
+The user can create a finite Spout by implementing this interface instead of (or in addition to) `IRichSpout` and implementing the `reachedEnd()` method.
+In contrast to a `SpoutWrapper` that is configured to emit a finite number of tuples, the `FiniteSpout` interface allows implementing more complex termination criteria.
+
+Although finite Spouts are not necessary to embed Spouts into a Flink streaming program or to submit a whole Storm topology to Flink, there are cases where they may come in handy:
+
+ * to make a native Spout behave the same way as a finite Flink source, with minimal modifications
+ * the user wants to process a stream only for some time; after that, the Spout can stop automatically
+ * reading a file into a stream
+ * for testing purposes
+
+An example of a finite Spout that emits records for 10 seconds only:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+~~~java
+public class TimedFiniteSpout extends BaseRichSpout implements FiniteSpout {
+	[...] // implement open(), nextTuple(), ...
+
+	private long starttime = System.currentTimeMillis();
+
+	public boolean reachedEnd() {
+		return System.currentTimeMillis() - starttime > 10000L;
+	}
+}
+~~~
+</div>
+</div>
+
+# Storm Compatibility Examples
+
+You can find more examples in Maven module `flink-storm-examples`.
+For the different versions of WordCount, see [README.md](https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples/README.md).
+To run the examples, you need to assemble a correct jar file.
+`flink-storm-examples-{{ site.version }}.jar` is **not** a valid jar file for job execution (it is only a standard Maven artifact).
+
+There are example jars for embedded Spout and Bolt, namely `WordCount-SpoutSource.jar` and `WordCount-BoltTokenizer.jar`, respectively.
+Compare `pom.xml` to see how both jars are built.
+Furthermore, there is one example for whole Storm topologies (`WordCount-StormTopology.jar`).
+
+You can run each of those examples via `bin/flink run <jarname>.jar`. The correct entry point class is contained in each jar's manifest file.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/local_execution.md
----------------------------------------------------------------------
diff --git a/docs/dev/local_execution.md b/docs/dev/local_execution.md
new file mode 100644
index 0000000..a348951
--- /dev/null
+++ b/docs/dev/local_execution.md
@@ -0,0 +1,125 @@
+---
+title:  "Local Execution"
+nav-parent_id: dev
+nav-pos: 11
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Flink can run on a single machine, even in a single Java Virtual Machine. This allows users to test and debug Flink programs locally. This section gives an overview of the local execution mechanisms.
+
+The local environments and executors allow you to run Flink programs in a local Java Virtual Machine, or within any JVM as part of existing programs. Most examples can be launched locally by simply hitting the "Run" button of your IDE.
+
+There are two different kinds of local execution supported in Flink. The `LocalEnvironment` starts the full Flink runtime, including a JobManager and a TaskManager. This includes memory management and all the internal algorithms that are executed in cluster mode.
+
+The `CollectionEnvironment` executes the Flink program on Java collections. This mode does not start the full Flink runtime, so execution has very low overhead and is lightweight. For example, a `DataSet.map()` transformation is executed by applying the `map()` function to all elements of a Java list.
+
+* TOC
+{:toc}
+
+
+## Debugging
+
+If you are running Flink programs locally, you can also debug your program like any other Java program. You can either use `System.out.println()` to write out some internal variables or you can use the debugger. It is possible to set breakpoints within `map()`, `reduce()` and all the other methods.
+Please also refer to the [debugging section]({{ site.baseurl }}/dev/batch/index.html#debugging) in the Java API documentation for a guide to testing and local debugging utilities in the Java API.
+
+## Maven Dependency
+
+If you are developing your program in a Maven project, you have to add the `flink-clients` module using this dependency:
+
+~~~xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version}}</version>
+</dependency>
+~~~
+
+## Local Environment
+
+The `LocalEnvironment` is a handle to local execution for Flink programs. Use it to run a program within a local JVM, standalone or embedded in other programs.
+
+The local environment is instantiated via the method `ExecutionEnvironment.createLocalEnvironment()`. By default, it will use as many local threads for execution as your machine has CPU cores (hardware contexts). You can alternatively specify the desired parallelism. The local environment can be configured to log to the console using `enableLogging()`/`disableLogging()`.
+
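+For example, to create a local environment with an explicit parallelism (a minimal sketch; the value `2` is arbitrary):
+
+~~~java
+// use two local threads instead of one per hardware context
+final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(2);
+~~~
+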
+In most cases, calling `ExecutionEnvironment.getExecutionEnvironment()` is the better way to go. That method returns a `LocalEnvironment` when the program is started locally (outside of the command line interface), and it returns a pre-configured environment for cluster execution when the program is invoked by the [command line interface]({{ site.baseurl }}/setup/cli.html).
+
+~~~java
+public static void main(String[] args) throws Exception {
+    ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
+
+    DataSet<String> data = env.readTextFile("file:///path/to/file");
+
+    data
+        .filter(new FilterFunction<String>() {
+            public boolean filter(String value) {
+                return value.startsWith("http://");
+            }
+        })
+        .writeAsText("file:///path/to/result");
+
+    JobExecutionResult res = env.execute();
+}
+~~~
+
+The `JobExecutionResult` object, which is returned after the execution has finished, contains the program runtime and the accumulator results.
+
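+For example (a sketch; the accumulator name `"my-counter"` is hypothetical and must have been registered by the program):
+
+~~~java
+JobExecutionResult res = env.execute();
+long runtimeMillis = res.getNetRuntime();               // program runtime
+Integer count = res.getAccumulatorResult("my-counter"); // a user-defined accumulator
+~~~
+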
+The `LocalEnvironment` also allows passing custom configuration values to Flink.
+
+~~~java
+Configuration conf = new Configuration();
+conf.setFloat(ConfigConstants.TASK_MANAGER_MEMORY_FRACTION_KEY, 0.5f);
+final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf);
+~~~
+
+*Note:* The local execution environments do not start any web frontend to monitor the execution.
+
+## Collection Environment
+
+The execution on Java collections using the `CollectionEnvironment` is a low-overhead approach for executing Flink programs. Typical use cases for this mode are automated tests, debugging, and code reuse.
+
+Algorithms implemented for batch processing can thus also be used for cases that are more interactive. A slightly changed variant of a Flink program could be used in a Java application server for processing incoming requests.
+
+**Skeleton for Collection-based execution**
+
+~~~java
+public static void main(String[] args) throws Exception {
+    // initialize a new Collection-based execution environment
+    final ExecutionEnvironment env = new CollectionEnvironment();
+
+    DataSet<User> users = env.fromCollection( /* get elements from a Java Collection */);
+
+    /* Data Set transformations ... */
+
+    // retrieve the resulting Tuple2 elements into an ArrayList.
+    Collection<...> result = new ArrayList<...>();
+    resultDataSet.output(new LocalCollectionOutputFormat<...>(result));
+
+    // kick off execution.
+    env.execute();
+
+    // Do some work with the resulting ArrayList (=Collection).
+    for(... t : result) {
+        System.err.println("Result = "+t);
+    }
+}
+~~~
+
+The `flink-examples-batch` module contains a full example, called `CollectionExecutionExample`.
+
+Please note that the execution of collection-based Flink programs is only possible for small data sets that fit into the JVM heap. The execution on collections is not multi-threaded; only one thread is used.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/quickstarts.md
----------------------------------------------------------------------
diff --git a/docs/dev/quickstarts.md b/docs/dev/quickstarts.md
new file mode 100644
index 0000000..ef21ca6
--- /dev/null
+++ b/docs/dev/quickstarts.md
@@ -0,0 +1,24 @@
+---
+title: "Quickstarts"
+nav-id: quickstarts
+nav-parent_id: dev
+nav-pos: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/scala_api_extensions.md
----------------------------------------------------------------------
diff --git a/docs/dev/scala_api_extensions.md b/docs/dev/scala_api_extensions.md
new file mode 100644
index 0000000..ffa6145
--- /dev/null
+++ b/docs/dev/scala_api_extensions.md
@@ -0,0 +1,408 @@
+---
+title: "Scala API Extensions"
+nav-parent_id: apis
+nav-pos: 104
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+In order to keep a fair amount of consistency between the Scala and Java APIs, some
+of the features that allow a high-level of expressiveness in Scala have been left
+out from the standard APIs for both batch and streaming.
+
+If you want to _enjoy the full Scala experience_ you can choose to opt-in to
+extensions that enhance the Scala API via implicit conversions.
+
+To use all the available extensions, you can just add a simple `import` for the
+DataSet API
+
+{% highlight scala %}
+import org.apache.flink.api.scala.extensions._
+{% endhighlight %}
+
+or the DataStream API
+
+{% highlight scala %}
+import org.apache.flink.streaming.api.scala.extensions._
+{% endhighlight %}
+
+Alternatively, you can import individual extensions _à la carte_ to only use those
+you prefer.
+
+## Accept partial functions
+
+Normally, both the DataSet and DataStream APIs don't accept anonymous pattern
+matching functions to deconstruct tuples, case classes or collections, like the
+following:
+
+{% highlight scala %}
+val data: DataSet[(Int, String, Double)] = // [...]
+data.map {
+  case (id, name, temperature) => // [...]
+  // The previous line causes the following compilation error:
+  // "The argument types of an anonymous function must be fully known. (SLS 8.5)"
+}
+{% endhighlight %}
+
+This extension introduces new methods in both the DataSet and DataStream Scala API
+that have a one-to-one correspondence in the extended API. These delegating methods
+do support anonymous pattern matching functions.
+
+#### DataSet API
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Method</th>
+      <th class="text-left" style="width: 20%">Original</th>
+      <th class="text-center">Example</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>mapWith</strong></td>
+      <td><strong>map (DataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data.mapWith {
+  case (_, value) => value.toString
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>mapPartitionWith</strong></td>
+      <td><strong>mapPartition (DataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data.mapPartitionWith {
+  case head #:: _ => head
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>flatMapWith</strong></td>
+      <td><strong>flatMap (DataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data.flatMapWith {
+  case (_, name, visitTimes) => visitTimes.map(name -> _)
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>filterWith</strong></td>
+      <td><strong>filter (DataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data.filterWith {
+  case Train(_, isOnTime) => isOnTime
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>reduceWith</strong></td>
+      <td><strong>reduce (DataSet, GroupedDataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data.reduceWith {
+  case ((_, amount1), (_, amount2)) => amount1 + amount2
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>reduceGroupWith</strong></td>
+      <td><strong>reduceGroup (GroupedDataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data.reduceGroupWith {
+  case id #:: value #:: _ => id -> value
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>groupingBy</strong></td>
+      <td><strong>groupBy (DataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data.groupingBy {
+  case (id, _, _) => id
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>sortGroupWith</strong></td>
+      <td><strong>sortGroup (GroupedDataSet)</strong></td>
+      <td>
+{% highlight scala %}
+grouped.sortGroupWith(Order.ASCENDING) {
+  case House(_, value) => value
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>combineGroupWith</strong></td>
+      <td><strong>combineGroup (GroupedDataSet)</strong></td>
+      <td>
+{% highlight scala %}
+grouped.combineGroupWith {
+  case header #:: amounts => amounts.sum
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>projecting</strong></td>
+      <td><strong>apply (JoinDataSet, CrossDataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data1.join(data2).
+  whereClause(case (pk, _) => pk).
+  isEqualTo(case (_, fk) => fk).
+  projecting {
+    case ((pk, tx), (products, fk)) => tx -> products
+  }
+
+data1.cross(data2).projecting {
+  case ((a, _), (_, b)) => a -> b
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>projecting</strong></td>
+      <td><strong>apply (CoGroupDataSet)</strong></td>
+      <td>
+{% highlight scala %}
+data1.coGroup(data2).
+  whereClause(case (pk, _) => pk).
+  isEqualTo(case (_, fk) => fk).
+  projecting {
+    case (head1 #:: _, head2 #:: _) => head1 -> head2
+  }
+{% endhighlight %}
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+#### DataStream API
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Method</th>
+      <th class="text-left" style="width: 20%">Original</th>
+      <th class="text-center">Example</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>mapWith</strong></td>
+      <td><strong>map (DataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.mapWith {
+  case (_, value) => value.toString
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>mapPartitionWith</strong></td>
+      <td><strong>mapPartition (DataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.mapPartitionWith {
+  case head #:: _ => head
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>flatMapWith</strong></td>
+      <td><strong>flatMap (DataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.flatMapWith {
+  case (_, name, visits) => visits.map(name -> _)
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>filterWith</strong></td>
+      <td><strong>filter (DataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.filterWith {
+  case Train(_, isOnTime) => isOnTime
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>keyingBy</strong></td>
+      <td><strong>keyBy (DataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.keyingBy {
+  case (id, _, _) => id
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>mapWith</strong></td>
+      <td><strong>map (ConnectedDataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.mapWith(
+  map1 = case (_, value) => value.toString,
+  map2 = case (_, _, value, _) => value + 1
+)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>flatMapWith</strong></td>
+      <td><strong>flatMap (ConnectedDataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.flatMapWith(
+  flatMap1 = case (_, json) => parse(json),
+  flatMap2 = case (_, _, json, _) => parse(json)
+)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>keyingBy</strong></td>
+      <td><strong>keyBy (ConnectedDataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.keyingBy(
+  key1 = case (_, timestamp) => timestamp,
+  key2 = case (id, _, _) => id
+)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>reduceWith</strong></td>
+      <td><strong>reduce (KeyedDataStream, WindowedDataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.reduceWith {
+  case ((_, sum1), (_, sum2)) => sum1 + sum2
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>foldWith</strong></td>
+      <td><strong>fold (KeyedDataStream, WindowedDataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.foldWith(User(bought = 0)) {
+  case (User(b), (_, items)) => User(b + items.size)
+}
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>applyWith</strong></td>
+      <td><strong>apply (WindowedDataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data.applyWith(0)(
+  foldFunction = case (sum, amount) => sum + amount,
+  windowFunction = case (k, w, sum) => // [...]
+)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>projecting</strong></td>
+      <td><strong>apply (JoinedDataStream)</strong></td>
+      <td>
+{% highlight scala %}
+data1.join(data2).
+  whereClause(case (pk, _) => pk).
+  isEqualTo(case (_, fk) => fk).
+  projecting {
+    case ((pk, tx), (products, fk)) => tx -> products
+  }
+{% endhighlight %}
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+
+
+For more information on the semantics of each method, please refer to the
+[DataSet]({{ site.baseurl }}/dev/batch/index.html) and [DataStream]({{ site.baseurl }}/dev/datastream_api.html) API documentation.
+
+To use this extension exclusively, you can add the following `import`:
+
+{% highlight scala %}
+import org.apache.flink.api.scala.extensions.acceptPartialFunctions
+{% endhighlight %}
+
+for the DataSet extensions and
+
+{% highlight scala %}
+import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions
+{% endhighlight %}
+
+for the DataStream extensions.
+
+The following snippet shows a minimal example of how to use these extension
+methods together (with the DataSet API):
+
+{% highlight scala %}
+object Main {
+  import org.apache.flink.api.scala.extensions._
+  case class Point(x: Double, y: Double)
+  def main(args: Array[String]): Unit = {
+    val env = ExecutionEnvironment.getExecutionEnvironment
+    val ds = env.fromElements(Point(1, 2), Point(3, 4), Point(5, 6))
+    ds.filterWith {
+      case Point(x, _) => x > 1
+    }.reduceWith {
+      case (Point(x1, y1), Point(x2, y2)) => Point(x1 + x2, y1 + y2)
+    }.mapWith {
+      case Point(x, y) => (x, y)
+    }.flatMapWith {
+      case (x, y) => Seq("x" -> x, "y" -> y)
+    }.groupingBy {
+      case (id, value) => id
+    }
+  }
+}
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/scala_shell.md
----------------------------------------------------------------------
diff --git a/docs/dev/scala_shell.md b/docs/dev/scala_shell.md
new file mode 100644
index 0000000..0728812
--- /dev/null
+++ b/docs/dev/scala_shell.md
@@ -0,0 +1,193 @@
+---
+title: "Scala Shell"
+nav-parent_id: dev
+nav-pos: 10
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Flink comes with an integrated interactive Scala Shell.
+It can be used in a local setup as well as in a cluster setup.
+
+To use the shell with an integrated Flink cluster just execute:
+
+~~~bash
+bin/start-scala-shell.sh local
+~~~
+
+in the root directory of your Flink binary distribution. To run the shell on a
+cluster, please see the Setup section below.
+
+## Usage
+
+The shell supports batch and streaming.
+Two different `ExecutionEnvironment`s are automatically prebound after startup.
+Use `benv` and `senv` to access the batch and streaming environments, respectively.
+
+### DataSet API
+
+The following example will execute the wordcount program in the Scala shell:
+
+~~~scala
+Scala-Flink> val text = benv.fromElements(
+  "To be, or not to be,--that is the question:--",
+  "Whether 'tis nobler in the mind to suffer",
+  "The slings and arrows of outrageous fortune",
+  "Or to take arms against a sea of troubles,")
+Scala-Flink> val counts = text
+    .flatMap { _.toLowerCase.split("\\W+") }
+    .map { (_, 1) }.groupBy(0).sum(1)
+Scala-Flink> counts.print()
+~~~
+
+The `print()` command automatically sends the specified tasks to the JobManager for execution and shows the result of the computation in the terminal.
+
+It is possible to write results to a file. However, in this case you need to call `execute` to run your program:
+
+~~~scala
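+Scala-Flink> counts.writeAsText("/tmp/counts.txt") // hypothetical output path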
+Scala-Flink> benv.execute("MyProgram")
+~~~
+
+### DataStream API
+
+Similar to the batch program above, we can execute a streaming program through the DataStream API:
+
+~~~scala
+Scala-Flink> val textStreaming = senv.fromElements(
+  "To be, or not to be,--that is the question:--",
+  "Whether 'tis nobler in the mind to suffer",
+  "The slings and arrows of outrageous fortune",
+  "Or to take arms against a sea of troubles,")
+Scala-Flink> val countsStreaming = textStreaming
+    .flatMap { _.toLowerCase.split("\\W+") }
+    .map { (_, 1) }.keyBy(0).sum(1)
+Scala-Flink> countsStreaming.print()
+Scala-Flink> senv.execute("Streaming Wordcount")
+~~~
+
+Note that in the streaming case, the print operation does not trigger execution directly.
+
+The Flink Shell comes with command history and auto-completion.
+
+
+## Adding external dependencies
+
+It is possible to add external classpaths to the Scala shell. These will be sent to the JobManager automatically alongside your shell program when calling `execute`.
+
+Use the parameter `-a <path/to/jar.jar>` or `--addclasspath <path/to/jar.jar>` to load additional classes.
+
+~~~bash
+bin/start-scala-shell.sh [local | remote <host> <port> | yarn] --addclasspath <path/to/jar.jar>
+~~~
+
+
+## Setup
+
+To get an overview of what options the Scala Shell provides, please use
+
+~~~bash
+bin/start-scala-shell.sh --help
+~~~
+
+### Local
+
+To use the shell with an integrated Flink cluster just execute:
+
+~~~bash
+bin/start-scala-shell.sh local
+~~~
+
+
+### Remote
+
+To use it with a running cluster, start the Scala shell with the keyword `remote`
+and supply the host and port of the JobManager:
+
+~~~bash
+bin/start-scala-shell.sh remote <hostname> <portnumber>
+~~~
+
+### Yarn Scala Shell cluster
+
+The shell can deploy a Flink cluster to YARN, which is used exclusively by the
+shell. The number of YARN containers can be controlled by the parameter `-n <arg>`.
+The shell deploys a new Flink cluster on YARN and connects to the
+cluster. You can also specify options for the YARN cluster, such as memory for
+the JobManager, the name of the YARN application, etc.
+
+For example, to start a YARN cluster for the Scala shell with two TaskManagers,
+use the following:
+
+~~~bash
+ bin/start-scala-shell.sh yarn -n 2
+~~~
+
+For all other options, see the full reference at the bottom.
+
+
+### Yarn Session
+
+If you have previously deployed a Flink cluster using the Flink YARN session,
+the Scala shell can connect to it with the following command:
+
+~~~bash
+ bin/start-scala-shell.sh yarn
+~~~
+
+
+## Full Reference
+
+~~~bash
+Flink Scala Shell
+Usage: start-scala-shell.sh [local|remote|yarn] [options] <args>...
+
+Command: local [options]
+Starts Flink scala shell with a local Flink cluster
+  -a <path/to/jar> | --addclasspath <path/to/jar>
+        Specifies additional jars to be used in Flink
+Command: remote [options] <host> <port>
+Starts Flink scala shell connecting to a remote cluster
+  <host>
+        Remote host name as string
+  <port>
+        Remote port as integer
+
+  -a <path/to/jar> | --addclasspath <path/to/jar>
+        Specifies additional jars to be used in Flink
+Command: yarn [options]
+Starts Flink scala shell connecting to a yarn cluster
+  -n arg | --container arg
+        Number of YARN container to allocate (= Number of TaskManagers)
+  -jm arg | --jobManagerMemory arg
+        Memory for JobManager container [in MB]
+  -nm <value> | --name <value>
+        Set a custom name for the application on YARN
+  -qu <arg> | --queue <arg>
+        Specifies YARN queue
+  -s <arg> | --slots <arg>
+        Number of slots per TaskManager
+  -tm <arg> | --taskManagerMemory <arg>
+        Memory per TaskManager container [in MB]
+  -a <path/to/jar> | --addclasspath <path/to/jar>
+        Specifies additional jars to be used in Flink
+  --configDir <value>
+        The configuration directory.
+  -h | --help
+        Prints this usage text
+~~~

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/state.md
----------------------------------------------------------------------
diff --git a/docs/dev/state.md b/docs/dev/state.md
new file mode 100644
index 0000000..ec8c5eb
--- /dev/null
+++ b/docs/dev/state.md
@@ -0,0 +1,293 @@
+---
+title: "Working with State"
+nav-parent_id: dev
+nav-pos: 3
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+All transformations in Flink may look like functions (in the functional processing terminology), but
+are in fact stateful operators. You can make *every* transformation (`map`, `filter`, etc.) stateful
+by using Flink's state interface or checkpointing instance fields of your function. You can register
+any instance field
+as ***managed*** state by implementing an interface. In this case, and also in the case of using
+Flink's native state interface, Flink will automatically take consistent snapshots of your state
+periodically, and restore its value in the case of a failure.
+
+The end effect is that updates to any form of state are the same under failure-free execution and
+execution under failures.
+
+First, we look at how to make instance fields consistent under failures, and then we look at
+Flink's state interface.
+
+By default, state checkpoints will be stored in memory at the JobManager. For proper persistence of large
+state, Flink supports storing the checkpoints on file systems (HDFS, S3, or any mounted POSIX file system),
+which can be configured in the `flink-conf.yaml` or via `StreamExecutionEnvironment.setStateBackend(...)`.
+See [state backends]({{ site.baseurl }}/dev/state_backends.html) for information
+about the available state backends and how to configure them.
+
+* ToC
+{:toc}
+
+## Using the Key/Value State Interface
+
+The Key/Value state interface provides access to different types of state that are all scoped to
+the key of the current input element. This means that this type of state can only be used
+on a `KeyedStream`, which can be created via `stream.keyBy(...)`.
+
+Now, we will first look at the different types of state available and then we will see
+how they can be used in a program. The available state primitives are:
+
+* `ValueState<T>`: This keeps a value that can be updated and
+retrieved (scoped to the key of the input element, as mentioned above, so there will possibly be one value
+for each key that the operation sees). The value can be set using `update(T)` and retrieved using
+`T value()`.
+
+* `ListState<T>`: This keeps a list of elements. You can append elements and retrieve an `Iterable`
+over all currently stored elements. Elements are added using `add(T)`; the `Iterable` can
+be retrieved using `Iterable<T> get()`.
+
+* `ReducingState<T>`: This keeps a single value that represents the aggregation of all values
+added to the state. The interface is the same as for `ListState` but elements added using
+`add(T)` are reduced to an aggregate using a specified `ReduceFunction`.
+
+All types of state also have a method `clear()` that clears the state for the currently
+active key (i.e. the key of the input element).
+
+It is important to keep in mind that these state objects are only used for interfacing
+with state. The state is not necessarily stored inside but might reside on disk or somewhere else.
+The second thing to keep in mind is that the value you get from the state
+depend on the key of the input element. So the value you get in one invocation of your
+user function can be different from the one you get in another invocation if the key of
+the element is different.
+
+To get a state handle, you have to create a `StateDescriptor`. This holds the name of the state
+(as we will see later, you can create several states, and they have to have unique names so
+that you can reference them), the type of the values that the state holds, and possibly
+a user-specified function, such as a `ReduceFunction`. Depending on what type of state you
+want to retrieve, you create one of `ValueStateDescriptor`, `ListStateDescriptor` or
+`ReducingStateDescriptor`.
+
+State is accessed using the `RuntimeContext`, so it is only possible in *rich functions*.
+Please see [here]({{ site.baseurl }}/apis/common/#specifying-transformation-functions) for
+information about that but we will also see an example shortly. The `RuntimeContext` that
+is available in a `RichFunction` has these methods for accessing state:
+
+* `ValueState<T> getState(ValueStateDescriptor<T>)`
+* `ReducingState<T> getReducingState(ReducingStateDescriptor<T>)`
+* `ListState<T> getListState(ListStateDescriptor<T>)`
+
+This is an example `FlatMapFunction` that shows how all of the parts fit together:
+
+{% highlight java %}
+public class CountWindowAverage extends RichFlatMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>> {
+
+    /**
+     * The ValueState handle. The first field is the count, the second field a running sum.
+     */
+    private transient ValueState<Tuple2<Long, Long>> sum;
+
+    @Override
+    public void flatMap(Tuple2<Long, Long> input, Collector<Tuple2<Long, Long>> out) throws Exception {
+
+        // access the state value
+        Tuple2<Long, Long> currentSum = sum.value();
+
+        // update the count
+        currentSum.f0 += 1;
+
+        // add the second field of the input value
+        currentSum.f1 += input.f1;
+
+        // update the state
+        sum.update(currentSum);
+
+        // if the count reaches 2, emit the average and clear the state
+        if (currentSum.f0 >= 2) {
+            out.collect(new Tuple2<>(input.f0, currentSum.f1 / currentSum.f0));
+            sum.clear();
+        }
+    }
+
+    @Override
+    public void open(Configuration config) {
+        ValueStateDescriptor<Tuple2<Long, Long>> descriptor =
+                new ValueStateDescriptor<>(
+                        "average", // the state name
+                        TypeInformation.of(new TypeHint<Tuple2<Long, Long>>() {}), // type information
+                        Tuple2.of(0L, 0L)); // default value of the state, if nothing was set
+        sum = getRuntimeContext().getState(descriptor);
+    }
+}
+
+// this can be used in a streaming program like this (assuming we have a StreamExecutionEnvironment env)
+env.fromElements(Tuple2.of(1L, 3L), Tuple2.of(1L, 5L), Tuple2.of(1L, 7L), Tuple2.of(1L, 4L), Tuple2.of(1L, 2L))
+        .keyBy(0)
+        .flatMap(new CountWindowAverage())
+        .print();
+
+// the printed output will be (1,4) and (1,5)
+{% endhighlight %}
+
+This example implements a poor man's counting window. We key the tuples by the first field
+(in the example, all have the same key `1`). The function stores the count and a running sum in
+a `ValueState`; once the count reaches 2, it emits the average and clears the state so that
+we start over from `0`. Note that this would keep a different state value for each input
+key if we had tuples with different values in the first field.
+
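+If only the running sum were needed, it could also be kept in a `ReducingState`, which folds every added value into a single aggregate. A minimal sketch of the relevant parts, reusing the input tuples from the example above (the state name is illustrative):
+
+{% highlight java %}
+// in open(): describe a state that keeps a sum of Longs
+ReducingStateDescriptor<Long> descriptor = new ReducingStateDescriptor<>(
+        "runningSum", // the state name
+        new ReduceFunction<Long>() { // how two values are combined
+            @Override
+            public Long reduce(Long a, Long b) {
+                return a + b;
+            }
+        },
+        Long.class); // type of the state
+ReducingState<Long> sumState = getRuntimeContext().getReducingState(descriptor);
+
+// in flatMap(): fold the new value into the aggregate and read it back
+sumState.add(input.f1);
+Long sum = sumState.get();
+{% endhighlight %}
+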
+### State in the Scala DataStream API
+
+In addition to the interface described above, the Scala API has shortcuts for stateful
+`map()` or `flatMap()` functions with a single `ValueState` on `KeyedStream`. The user function
+gets the current value of the `ValueState` in an `Option` and must return an updated value that
+will be used to update the state.
+
+{% highlight scala %}
+val stream: DataStream[(String, Int)] = ...
+
+val counts: DataStream[(String, Int)] = stream
+  .keyBy(_._1)
+  .mapWithState((in: (String, Int), count: Option[Int]) =>
+    count match {
+      case Some(c) => ( (in._1, c), Some(c + in._2) )
+      case None => ( (in._1, 0), Some(in._2) )
+    })
+{% endhighlight %}
+
+## Checkpointing Instance Fields
+
+Instance fields can be checkpointed by using the `Checkpointed` interface.
+
+When the user-defined function implements the `Checkpointed` interface, the `snapshotState(...)` and `restoreState(...)`
+methods will be executed to draw and restore function state.
+
+In addition to that, user functions can also implement the `CheckpointNotifier` interface to receive notifications on
+completed checkpoints via the `notifyCheckpointComplete(long checkpointId)` method.
+Note that there is no guarantee that the user function will receive a notification if a failure happens between
+checkpoint completion and notification. The notifications should hence be treated in such a way that notifications from
+later checkpoints can subsume missing notifications.
+
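+A minimal sketch of a function reacting to completed checkpoints (the commit logic is a placeholder):
+
+{% highlight java %}
+public class CommitOnCheckpoint extends RichMapFunction<String, String>
+        implements CheckpointNotifier {
+
+    @Override
+    public String map(String value) {
+        return value;
+    }
+
+    @Override
+    public void notifyCheckpointComplete(long checkpointId) throws Exception {
+        // called once the checkpoint with the given id has completed; note that
+        // this call may be skipped if a failure occurs between completion and notification
+        commitToExternalSystem(checkpointId); // placeholder for user logic
+    }
+
+    private void commitToExternalSystem(long checkpointId) {
+        // e.g. flush buffered writes to an external store
+    }
+}
+{% endhighlight %}
+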
+The above example for `ValueState` can be implemented using instance fields like this:
+
+{% highlight java %}
+
+public class CountWindowAverage
+        extends RichFlatMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>>
+        implements Checkpointed<Tuple2<Long, Long>> {
+
+    private Tuple2<Long, Long> sum = null;
+
+    @Override
+    public void flatMap(Tuple2<Long, Long> input, Collector<Tuple2<Long, Long>> out) throws Exception {
+
+        // update the count
+        sum.f0 += 1;
+
+        // add the second field of the input value
+        sum.f1 += input.f1;
+
+
+        // if the count reaches 2, emit the average and clear the state
+        if (sum.f0 >= 2) {
+            out.collect(new Tuple2<>(input.f0, sum.f1 / sum.f0));
+            sum = Tuple2.of(0L, 0L);
+        }
+    }
+
+    @Override
+    public void open(Configuration config) {
+        if (sum == null) {
+            // only recreate if null
+            // restoreState will be called before open()
+            // so this will already set the sum to the restored value
+            sum = Tuple2.of(0L, 0L);
+        }
+    }
+
+    // regularly persists state during normal operation
+    @Override
+    public Serializable snapshotState(long checkpointId, long checkpointTimestamp) {
+        return sum;
+    }
+
+    // restores state on recovery from failure
+    @Override
+    public void restoreState(Tuple2<Long, Long> state) {
+        sum = state;
+    }
+}
+{% endhighlight %}
+
+## Stateful Source Functions
+
+Stateful sources require a bit more care compared to other operators.
+In order to make the updates to the state and output collection atomic (required for exactly-once semantics
+on failure/recovery), the user is required to get a lock from the source's context.
+
+{% highlight java %}
+public static class CounterSource
+        extends RichParallelSourceFunction<Long>
+        implements Checkpointed<Long> {
+
+    /**  current offset for exactly once semantics */
+    private long offset;
+
+    /** flag for job cancellation */
+    private volatile boolean isRunning = true;
+
+    @Override
+    public void run(SourceContext<Long> ctx) {
+        final Object lock = ctx.getCheckpointLock();
+
+        while (isRunning) {
+            // output and state update are atomic
+            synchronized (lock) {
+                ctx.collect(offset);
+                offset += 1;
+            }
+        }
+    }
+
+    @Override
+    public void cancel() {
+        isRunning = false;
+    }
+
+    @Override
+    public Long snapshotState(long checkpointId, long checkpointTimestamp) {
+        return offset;
+
+    }
+
+    @Override
+    public void restoreState(Long state) {
+        offset = state;
+    }
+}
+{% endhighlight %}
+
+Some operators might need to know when a checkpoint has been fully acknowledged by Flink in order to communicate that to the outside world. In this case, see the `org.apache.flink.streaming.api.checkpoint.CheckpointNotifier` interface.
+
+## State Checkpoints in Iterative Jobs
+
+Flink currently only provides processing guarantees for jobs without iterations. Enabling checkpointing on an iterative job causes an exception. In order to force checkpointing on an iterative program, the user needs to set a special flag when enabling checkpointing: `env.enableCheckpointing(interval, force = true)`.
+
+Please note that records in flight in the loop edges (and the state changes associated with them) will be lost during failure.
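+
+As a sketch, forcing checkpointing could look as follows (the interval value is illustrative, and the three-argument `enableCheckpointing(interval, mode, force)` overload is assumed to be available in this version):
+
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// force checkpointing despite the iteration in the job graph; records
+// in flight on the loop edges are not covered by the checkpoints
+env.enableCheckpointing(1000, CheckpointingMode.EXACTLY_ONCE, true);
+{% endhighlight %}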
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/state_backends.md
----------------------------------------------------------------------
diff --git a/docs/dev/state_backends.md b/docs/dev/state_backends.md
new file mode 100644
index 0000000..e5b9c2a
--- /dev/null
+++ b/docs/dev/state_backends.md
@@ -0,0 +1,162 @@
+---
+title: "State Backends"
+nav-parent_id: dev
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Programs written in the [Data Stream API]({{ site.baseurl }}/dev/datastream_api.html) often hold state in various forms:
+
+- Windows gather elements or aggregates until they are triggered
+- Transformation functions may use the key/value state interface to store values
+- Transformation functions may implement the `Checkpointed` interface to make their local variables fault tolerant
+
+See also [Working with State]({{ site.baseurl }}/dev/state.html) in the streaming API guide.
+
+When checkpointing is activated, such state is persisted upon checkpoints to guard against data loss and recover consistently.
+How the state is represented internally, and how and where it is persisted upon checkpoints depends on the
+chosen **State Backend**.
+
+* ToC
+{:toc}
+
+## Available State Backends
+
+Out of the box, Flink bundles these state backends:
+
+ - *MemoryStateBackend*
+ - *FsStateBackend*
+ - *RocksDBStateBackend*
+
+If nothing else is configured, the system will use the MemoryStateBackend.
+
+
+### The MemoryStateBackend
+
+The *MemoryStateBackend* holds data internally as objects on the Java heap. Key/value state and window operators hold hash tables
+that store the values, triggers, etc.
+
+Upon checkpoints, this state backend will snapshot the state and send it as part of the checkpoint acknowledgement messages to the
+JobManager (master), which stores it on its heap as well.
+
+Limitations of the MemoryStateBackend:
+
+  - The size of each individual state is by default limited to 5 MB. This value can be increased in the constructor of the MemoryStateBackend.
+  - Irrespective of the configured maximal state size, the state cannot be larger than the akka frame size (see [Configuration]({{ site.baseurl }}/setup/config.html)).
+  - The aggregate state must fit into the JobManager memory.
+
+The MemoryStateBackend is encouraged for:
+
+  - Local development and debugging
+  - Jobs that hold little state, such as jobs that consist only of record-at-a-time functions (Map, FlatMap, Filter, ...). The Kafka consumer requires very little state.
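+
+For example, the per-state size limit can be raised by constructing the backend explicitly; a small sketch (the 10 MB value is arbitrary):
+
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// raise the per-state size limit from the default 5 MB to 10 MB (given in bytes)
+env.setStateBackend(new MemoryStateBackend(10 * 1024 * 1024));
+{% endhighlight %}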
+
+
+### The FsStateBackend
+
+The *FsStateBackend* is configured with a file system URL (type, address, path), such as "hdfs://namenode:40010/flink/checkpoints" or "file:///data/flink/checkpoints".
+
+The FsStateBackend holds in-flight data in the TaskManager's memory. Upon checkpointing, it writes state snapshots into files in the configured file system and directory. Minimal metadata is stored in the JobManager's memory (or, in high-availability mode, in the metadata checkpoint).
+
+The FsStateBackend is encouraged for:
+
+  - Jobs with large state, long windows, large key/value states.
+  - All high-availability setups.
+
+### The RocksDBStateBackend
+
+The *RocksDBStateBackend* is configured with a file system URL (type, address, path), such as "hdfs://namenode:40010/flink/checkpoints" or "file:///data/flink/checkpoints".
+
+The RocksDBStateBackend holds in-flight data in a [RocksDB](http://rocksdb.org) data base
+that is (per default) stored in the TaskManager data directories. Upon checkpointing, the whole
+RocksDB data base will be checkpointed into the configured file system and directory. Minimal
+metadata is stored in the JobManager's memory (or, in high-availability mode, in the metadata checkpoint).
+
+The RocksDBStateBackend is encouraged for:
+
+  - Jobs with very large state, long windows, large key/value states.
+  - All high-availability setups.
+
+Note that the amount of state you can keep is limited only by the amount of disk space available.
+This allows keeping very large state, compared to the FsStateBackend, which holds in-flight state in memory.
+This also means, however, that the maximum throughput that can be achieved will be lower with
+this state backend.
+
+**NOTE:** To use the RocksDBStateBackend you also have to add the correct maven dependency to your
+project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-statebackend-rocksdb{{ site.scala_version_suffix }}</artifactId>
+  <version>{{ site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+The backend is currently not part of the binary distribution. See
+[here]({{ site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
+for an explanation of how to include it for cluster execution.
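+
+Once the dependency is available, the backend is set like any other; a minimal sketch (the checkpoint URI is illustrative):
+
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// snapshots of the local RocksDB instances are written to this file system URI
+env.setStateBackend(new RocksDBStateBackend("hdfs://namenode:40010/flink/checkpoints"));
+{% endhighlight %}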
+
+## Configuring a State Backend
+
+State backends can be configured per job. In addition, you can define a default state backend to be used when the
+job does not explicitly define a state backend.
+
+
+### Setting the Per-job State Backend
+
+The per-job state backend is set on the `StreamExecutionEnvironment` of the job, as shown in the example below:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.setStateBackend(new FsStateBackend("hdfs://namenode:40010/flink/checkpoints"));
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+env.setStateBackend(new FsStateBackend("hdfs://namenode:40010/flink/checkpoints"))
+{% endhighlight %}
+</div>
+</div>
+
+
+### Setting Default State Backend
+
+A default state backend can be configured in the `flink-conf.yaml`, using the configuration key `state.backend`.
+
+Possible values for the config entry are *jobmanager* (MemoryStateBackend), *filesystem* (FsStateBackend), or the fully qualified class
+name of the class that implements the state backend factory [FsStateBackendFactory](https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsStateBackendFactory.java).
+
+In the case where the default state backend is set to *filesystem*, the entry `state.backend.fs.checkpointdir` defines the directory where the checkpoint data will be stored.
+
+A sample section in the configuration file could look as follows:
+
+~~~
+# The backend that will be used to store operator state checkpoints
+
+state.backend: filesystem
+
+
+# Directory for storing checkpoints
+
+state.backend.fs.checkpointdir: hdfs://namenode:40010/flink/checkpoints
+~~~


[20/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/index.md b/docs/dev/batch/index.md
new file mode 100644
index 0000000..5cdc36d
--- /dev/null
+++ b/docs/dev/batch/index.md
@@ -0,0 +1,2267 @@
+---
+title: "Flink DataSet API Programming Guide"
+nav-id: batch
+nav-title: Batch (DataSet API)
+nav-parent_id: apis
+nav-pos: 3
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+DataSet programs in Flink are regular programs that implement transformations on data sets
+(e.g., filtering, mapping, joining, grouping). The data sets are initially created from certain
+sources (e.g., by reading files, or from local collections). Results are returned via sinks, which may for
+example write the data to (distributed) files, or to standard output (for example the command line
+terminal). Flink programs run in a variety of contexts: standalone, or embedded in other programs.
+The execution can happen in a local JVM, or on clusters of many machines.
+
+Please see [basic concepts]({{ site.baseurl }}/dev/api_concepts.html) for an introduction
+to the basic concepts of the Flink API.
+
+In order to create your own Flink DataSet program, we encourage you to start with the
+[anatomy of a Flink Program]({{ site.baseurl }}/dev/api_concepts.html#anatomy-of-a-flink-program)
+and gradually add your own
+[transformations](#dataset-transformations). The remaining sections act as references for additional
+operations and advanced features.
+
+* This will be replaced by the TOC
+{:toc}
+
+Example Program
+---------------
+
+The following program is a complete, working example of WordCount. You can copy &amp; paste the code
+to run it locally. You only have to include the correct Flink library in your project
+(see Section [Linking with Flink]({{ site.baseurl }}/dev/api_concepts.html#linking-with-flink)) and specify the imports. Then you are ready
+to go!
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+{% highlight java %}
+public class WordCountExample {
+    public static void main(String[] args) throws Exception {
+        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+        DataSet<String> text = env.fromElements(
+            "Who's there?",
+            "I think I hear them. Stand, ho! Who's there?");
+
+        DataSet<Tuple2<String, Integer>> wordCounts = text
+            .flatMap(new LineSplitter())
+            .groupBy(0)
+            .sum(1);
+
+        wordCounts.print();
+    }
+
+    public static class LineSplitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
+        @Override
+        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
+            for (String word : line.split(" ")) {
+                out.collect(new Tuple2<String, Integer>(word, 1));
+            }
+        }
+    }
+}
+{% endhighlight %}
+
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+
+object WordCount {
+  def main(args: Array[String]) {
+
+    val env = ExecutionEnvironment.getExecutionEnvironment
+    val text = env.fromElements(
+      "Who's there?",
+      "I think I hear them. Stand, ho! Who's there?")
+
+    val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
+      .map { (_, 1) }
+      .groupBy(0)
+      .sum(1)
+
+    counts.print()
+  }
+}
+{% endhighlight %}
+</div>
+
+</div>
+
+{% top %}
+
+DataSet Transformations
+-----------------------
+
+Data transformations transform one or more DataSets into a new DataSet. Programs can combine
+multiple transformations into sophisticated assemblies.
+
+This section gives a brief overview of the available transformations. The [transformations
+documentation](dataset_transformations.html) has a full description of all transformations with
+examples.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>Map</strong></td>
+      <td>
+        <p>Takes one element and produces one element.</p>
+{% highlight java %}
+data.map(new MapFunction<String, Integer>() {
+  public Integer map(String value) { return Integer.parseInt(value); }
+});
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>FlatMap</strong></td>
+      <td>
+        <p>Takes one element and produces zero, one, or more elements. </p>
+{% highlight java %}
+data.flatMap(new FlatMapFunction<String, String>() {
+  public void flatMap(String value, Collector<String> out) {
+    for (String s : value.split(" ")) {
+      out.collect(s);
+    }
+  }
+});
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>MapPartition</strong></td>
+      <td>
+        <p>Transforms a parallel partition in a single function call. The function gets the partition
+        as an <code>Iterable</code> stream and can produce an arbitrary number of result values. The number of
+        elements in each partition depends on the degree-of-parallelism and previous operations.</p>
+{% highlight java %}
+data.mapPartition(new MapPartitionFunction<String, Long>() {
+  public void mapPartition(Iterable<String> values, Collector<Long> out) {
+    long c = 0;
+    for (String s : values) {
+      c++;
+    }
+    out.collect(c);
+  }
+});
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Filter</strong></td>
+      <td>
+        <p>Evaluates a boolean function for each element and retains those for which the function
+        returns true.<br/>
+
+        <strong>IMPORTANT:</strong> The system assumes that the function does not modify the elements on which the predicate is applied. Violating this assumption
+        can lead to incorrect results.
+        </p>
+{% highlight java %}
+data.filter(new FilterFunction<Integer>() {
+  public boolean filter(Integer value) { return value > 1000; }
+});
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Reduce</strong></td>
+      <td>
+        <p>Combines a group of elements into a single element by repeatedly combining two elements
+        into one. Reduce may be applied on a full data set, or on a grouped data set.</p>
+{% highlight java %}
+data.reduce(new ReduceFunction<Integer>() {
+  public Integer reduce(Integer a, Integer b) { return a + b; }
+});
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>ReduceGroup</strong></td>
+      <td>
+        <p>Combines a group of elements into one or more elements. ReduceGroup may be applied on a
+        full data set, or on a grouped data set.</p>
+{% highlight java %}
+data.reduceGroup(new GroupReduceFunction<Integer, Integer>() {
+  public void reduce(Iterable<Integer> values, Collector<Integer> out) {
+    int prefixSum = 0;
+    for (Integer i : values) {
+      prefixSum += i;
+      out.collect(prefixSum);
+    }
+  }
+});
+{% endhighlight %}
+        <p>If the reduce was applied to a grouped data set, you can specify the way that the
+        runtime executes the combine phase of the reduce via supplying a CombineHint as a second
+        parameter. The hash-based strategy should be faster in most cases, especially if the
+        number of different keys is small compared to the number of input elements (e.g. 1/10).</p>
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Aggregate</strong></td>
+      <td>
+        <p>Aggregates a group of values into a single value. Aggregation functions can be thought of
+        as built-in reduce functions. Aggregate may be applied on a full data set, or on a grouped
+        data set.</p>
+{% highlight java %}
+DataSet<Tuple3<Integer, String, Double>> input = // [...]
+DataSet<Tuple3<Integer, String, Double>> output = input.aggregate(SUM, 0).and(MIN, 2);
+{% endhighlight %}
+        <p>You can also use short-hand syntax for minimum, maximum, and sum aggregations.</p>
+{% highlight java %}
+DataSet<Tuple3<Integer, String, Double>> input = // [...]
+DataSet<Tuple3<Integer, String, Double>> output = input.sum(0).andMin(2);
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Distinct</strong></td>
+      <td>
+        <p>Returns the distinct elements of a data set. It removes the duplicate entries
+        from the input DataSet, with respect to all fields of the elements, or a subset of fields.</p>
+    {% highlight java %}
+        data.distinct();
+    {% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Join</strong></td>
+      <td>
+        Joins two data sets by creating all pairs of elements that are equal on their keys.
+        Optionally uses a JoinFunction to turn the pair of elements into a single element, or a
+        FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)
+        elements. See the <a href="#specifying-keys">keys section</a> to learn how to define join keys.
+{% highlight java %}
+result = input1.join(input2)
+               .where(0)       // key of the first input (tuple field 0)
+               .equalTo(1);    // key of the second input (tuple field 1)
+{% endhighlight %}
+        You can specify the way that the runtime executes the join via <i>Join Hints</i>. The hints
+        describe whether the join happens through partitioning or broadcasting, and whether it uses
+        a sort-based or a hash-based algorithm. Please refer to the
+        <a href="dataset_transformations.html#join-algorithm-hints">Transformations Guide</a> for
+        a list of possible hints and an example.<br/>
+        If no hint is specified, the system will try to make an estimate of the input sizes and
+        pick the best strategy according to those estimates.
+{% highlight java %}
+// This executes a join by broadcasting the first data set
+// using a hash table for the broadcasted data
+result = input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
+               .where(0).equalTo(1);
+{% endhighlight %}
+        Note that the join transformation works only for equi-joins. Other join types need to be expressed using OuterJoin or CoGroup.
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>OuterJoin</strong></td>
+      <td>
+        Performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a <code>null</code> value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none) elements. See the <a href="#specifying-keys">keys section</a> to learn how to define join keys.
+{% highlight java %}
+input1.leftOuterJoin(input2) // rightOuterJoin or fullOuterJoin for right or full outer joins
+      .where(0)              // key of the first input (tuple field 0)
+      .equalTo(1)            // key of the second input (tuple field 1)
+      .with(new JoinFunction<String, String, String>() {
+          public String join(String v1, String v2) {
+             // NOTE:
+             // - v2 might be null for leftOuterJoin
+             // - v1 might be null for rightOuterJoin
+             // - v1 OR v2 might be null for fullOuterJoin
+          }
+      });
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>CoGroup</strong></td>
+      <td>
+        <p>The two-dimensional variant of the reduce operation. Groups each input on one or more
+        fields and then joins the groups. The transformation function is called per pair of groups.
+        See the <a href="#specifying-keys">keys section</a> to learn how to define coGroup keys.</p>
+{% highlight java %}
+data1.coGroup(data2)
+     .where(0)
+     .equalTo(1)
+     .with(new CoGroupFunction<String, String, String>() {
+         public void coGroup(Iterable<String> in1, Iterable<String> in2, Collector<String> out) {
+           out.collect(...);
+         }
+      });
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Cross</strong></td>
+      <td>
+        <p>Builds the Cartesian product (cross product) of two inputs, creating all pairs of
+        elements. Optionally uses a CrossFunction to turn the pair of elements into a single
+        element</p>
+{% highlight java %}
+DataSet<Integer> data1 = // [...]
+DataSet<String> data2 = // [...]
+DataSet<Tuple2<Integer, String>> result = data1.cross(data2);
+{% endhighlight %}
+      <p>Note: Cross is potentially a <b>very</b> compute-intensive operation which can challenge even large compute clusters! It is advised to hint the system with the DataSet sizes by using <i>crossWithTiny()</i> and <i>crossWithHuge()</i>.</p>
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Union</strong></td>
+      <td>
+        <p>Produces the union of two data sets.</p>
+{% highlight java %}
+DataSet<String> data1 = // [...]
+DataSet<String> data2 = // [...]
+DataSet<String> result = data1.union(data2);
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Rebalance</strong></td>
+      <td>
+        <p>Evenly rebalances the parallel partitions of a data set to eliminate data skew. Only Map-like transformations may follow a rebalance transformation.</p>
+{% highlight java %}
+DataSet<String> in = // [...]
+DataSet<String> result = in.rebalance()
+                           .map(new Mapper());
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Hash-Partition</strong></td>
+      <td>
+        <p>Hash-partitions a data set on a given key. Keys can be specified as position keys, expression keys, and key selector functions.</p>
+{% highlight java %}
+DataSet<Tuple2<String,Integer>> in = // [...]
+DataSet<Integer> result = in.partitionByHash(0)
+                            .mapPartition(new PartitionMapper());
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Range-Partition</strong></td>
+      <td>
+        <p>Range-partitions a data set on a given key. Keys can be specified as position keys, expression keys, and key selector functions.</p>
+{% highlight java %}
+DataSet<Tuple2<String,Integer>> in = // [...]
+DataSet<Integer> result = in.partitionByRange(0)
+                            .mapPartition(new PartitionMapper());
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Custom Partitioning</strong></td>
+      <td>
+        <p>Manually specify a partitioning over the data.
+          <br/>
+          <i>Note</i>: This method works only on single field keys.</p>
+{% highlight java %}
+DataSet<Tuple2<String,Integer>> in = // [...]
+DataSet<Integer> result = in.partitionCustom(Partitioner<K> partitioner, key)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Sort Partition</strong></td>
+      <td>
+        <p>Locally sorts all partitions of a data set on a specified field in a specified order.
+          Fields can be specified as tuple positions or field expressions.
+          Sorting on multiple fields is done by chaining sortPartition() calls.</p>
+{% highlight java %}
+DataSet<Tuple2<String,Integer>> in = // [...]
+DataSet<Integer> result = in.sortPartition(1, Order.ASCENDING)
+                            .mapPartition(new PartitionMapper());
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>First-n</strong></td>
+      <td>
+        <p>Returns the first n (arbitrary) elements of a data set. First-n can be applied on a regular data set, a grouped data set, or a grouped-sorted data set. Grouping keys can be specified as key-selector functions or field position keys.</p>
+{% highlight java %}
+DataSet<Tuple2<String,Integer>> in = // [...]
+// regular data set
+DataSet<Tuple2<String,Integer>> result1 = in.first(3);
+// grouped data set
+DataSet<Tuple2<String,Integer>> result2 = in.groupBy(0)
+                                            .first(3);
+// grouped-sorted data set
+DataSet<Tuple2<String,Integer>> result3 = in.groupBy(0)
+                                            .sortGroup(1, Order.ASCENDING)
+                                            .first(3);
+{% endhighlight %}
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+----------
+
+The following transformations are available on data sets of Tuples:
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td><strong>Project</strong></td>
+      <td>
+        <p>Selects a subset of fields from the tuples</p>
+{% highlight java %}
+DataSet<Tuple3<Integer, Double, String>> in = // [...]
+DataSet<Tuple2<String, Integer>> out = in.project(2,0);
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>MinBy / MaxBy</strong></td>
+      <td>
+        <p>Selects a tuple from a group of tuples whose values of one or more fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary tuple of these tuples is returned. MinBy (MaxBy) may be applied on a full data set or a grouped data set.</p>
+{% highlight java %}
+DataSet<Tuple3<Integer, Double, String>> in = // [...]
+// a DataSet with a single tuple with minimum values for the Integer and String fields.
+DataSet<Tuple3<Integer, Double, String>> out = in.minBy(0, 2);
+// a DataSet with one tuple for each group with the minimum value for the Double field.
+DataSet<Tuple3<Integer, Double, String>> out2 = in.groupBy(2)
+                                                  .minBy(1);
+{% endhighlight %}
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+</div>
+<div data-lang="scala" markdown="1">
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>Map</strong></td>
+      <td>
+        <p>Takes one element and produces one element.</p>
+{% highlight scala %}
+data.map { x => x.toInt }
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>FlatMap</strong></td>
+      <td>
+        <p>Takes one element and produces zero, one, or more elements. </p>
+{% highlight scala %}
+data.flatMap { str => str.split(" ") }
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>MapPartition</strong></td>
+      <td>
+        <p>Transforms a parallel partition in a single function call. The function gets the partition
+        as an `Iterator` and can produce an arbitrary number of result values. The number of
+        elements in each partition depends on the degree-of-parallelism and previous operations.</p>
+{% highlight scala %}
+data.mapPartition { in => in map { (_, 1) } }
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Filter</strong></td>
+      <td>
+        <p>Evaluates a boolean function for each element and retains those for which the function
+        returns true.<br/>
+        <strong>IMPORTANT:</strong> The system assumes that the function does not modify the element on which the predicate is applied.
+        Violating this assumption can lead to incorrect results.</p>
+{% highlight scala %}
+data.filter { _ > 1000 }
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Reduce</strong></td>
+      <td>
+        <p>Combines a group of elements into a single element by repeatedly combining two elements
+        into one. Reduce may be applied on a full data set, or on a grouped data set.</p>
+{% highlight scala %}
+data.reduce { _ + _ }
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>ReduceGroup</strong></td>
+      <td>
+        <p>Combines a group of elements into one or more elements. ReduceGroup may be applied on a
+        full data set, or on a grouped data set.</p>
+{% highlight scala %}
+data.reduceGroup { elements => elements.sum }
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Aggregate</strong></td>
+      <td>
+        <p>Aggregates a group of values into a single value. Aggregation functions can be thought of
+        as built-in reduce functions. Aggregate may be applied on a full data set, or on a grouped
+        data set.</p>
+{% highlight scala %}
+val input: DataSet[(Int, String, Double)] = // [...]
+val output: DataSet[(Int, String, Double)] = input.aggregate(SUM, 0).aggregate(MIN, 2)
+{% endhighlight %}
+  <p>You can also use short-hand syntax for minimum, maximum, and sum aggregations.</p>
+{% highlight scala %}
+val input: DataSet[(Int, String, Double)] = // [...]
+val output: DataSet[(Int, String, Double)] = input.sum(0).min(2)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Distinct</strong></td>
+      <td>
+        <p>Returns the distinct elements of a data set. It removes the duplicate entries
+        from the input DataSet, with respect to all fields of the elements, or a subset of fields.</p>
+      {% highlight scala %}
+         data.distinct()
+      {% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Join</strong></td>
+      <td>
+        Joins two data sets by creating all pairs of elements that are equal on their keys.
+        Optionally uses a JoinFunction to turn the pair of elements into a single element, or a
+        FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)
+        elements. See the <a href="#specifying-keys">keys section</a> to learn how to define join keys.
+{% highlight scala %}
+// In this case tuple fields are used as keys. "0" is the join field on the first tuple
+// "1" is the join field on the second tuple.
+val result = input1.join(input2).where(0).equalTo(1)
+{% endhighlight %}
+        You can specify the way that the runtime executes the join via <i>Join Hints</i>. The hints
+        describe whether the join happens through partitioning or broadcasting, and whether it uses
+        a sort-based or a hash-based algorithm. Please refer to the
+        <a href="dataset_transformations.html#join-algorithm-hints">Transformations Guide</a> for
+        a list of possible hints and an example.<br/>
+        If no hint is specified, the system will try to make an estimate of the input sizes and
+        pick the best strategy according to those estimates.
+{% highlight scala %}
+// This executes a join by broadcasting the first data set
+// using a hash table for the broadcasted data
+val result = input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
+                   .where(0).equalTo(1)
+{% endhighlight %}
+          Note that the join transformation works only for equi-joins. Other join types need to be expressed using OuterJoin or CoGroup.
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>OuterJoin</strong></td>
+      <td>
+        Performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a `null` value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none) elements. See the <a href="#specifying-keys">keys section</a> to learn how to define join keys.
+{% highlight scala %}
+val joined = left.leftOuterJoin(right).where(0).equalTo(1) {
+   (left, right) =>
+     val a = if (left == null) "none" else left._1
+     (a, right)
+  }
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>CoGroup</strong></td>
+      <td>
+        <p>The two-dimensional variant of the reduce operation. Groups each input on one or more
+        fields and then joins the groups. The transformation function is called per pair of groups.
+        See the <a href="#specifying-keys">keys section</a> to learn how to define coGroup keys.</p>
+{% highlight scala %}
+data1.coGroup(data2).where(0).equalTo(1)
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td><strong>Cross</strong></td>
+      <td>
+        <p>Builds the Cartesian product (cross product) of two inputs, creating all pairs of
+        elements. Optionally uses a CrossFunction to turn the pair of elements into a single
+        element</p>
+{% highlight scala %}
+val data1: DataSet[Int] = // [...]
+val data2: DataSet[String] = // [...]
+val result: DataSet[(Int, String)] = data1.cross(data2)
+{% endhighlight %}
+        <p>Note: Cross is potentially a <b>very</b> compute-intensive operation which can challenge even large compute clusters! It is advised to hint the system with the DataSet sizes by using <i>crossWithTiny()</i> and <i>crossWithHuge()</i>.</p>
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Union</strong></td>
+      <td>
+        <p>Produces the union of two data sets.</p>
+{% highlight scala %}
+data.union(data2)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Rebalance</strong></td>
+      <td>
+        <p>Evenly rebalances the parallel partitions of a data set to eliminate data skew. Only Map-like transformations may follow a rebalance transformation.</p>
+{% highlight scala %}
+val data1: DataSet[Int] = // [...]
+val result: DataSet[(Int, String)] = data1.rebalance().map(...)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Hash-Partition</strong></td>
+      <td>
+        <p>Hash-partitions a data set on a given key. Keys can be specified as position keys, expression keys, and key selector functions.</p>
+{% highlight scala %}
+val in: DataSet[(Int, String)] = // [...]
+val result = in.partitionByHash(0).mapPartition { ... }
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Range-Partition</strong></td>
+      <td>
+        <p>Range-partitions a data set on a given key. Keys can be specified as position keys, expression keys, and key selector functions.</p>
+{% highlight scala %}
+val in: DataSet[(Int, String)] = // [...]
+val result = in.partitionByRange(0).mapPartition { ... }
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Custom Partitioning</strong></td>
+      <td>
+        <p>Manually specify a partitioning over the data.
+          <br/>
+          <i>Note</i>: This method works only on single field keys.</p>
+{% highlight scala %}
+val in: DataSet[(Int, String)] = // [...]
+val result = in
+  .partitionCustom(partitioner: Partitioner[K], key)
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>Sort Partition</strong></td>
+      <td>
+        <p>Locally sorts all partitions of a data set on a specified field in a specified order.
+          Fields can be specified as tuple positions or field expressions.
+          Sorting on multiple fields is done by chaining sortPartition() calls.</p>
+{% highlight scala %}
+val in: DataSet[(Int, String)] = // [...]
+val result = in.sortPartition(1, Order.ASCENDING).mapPartition { ... }
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td><strong>First-n</strong></td>
+      <td>
+        <p>Returns the first n (arbitrary) elements of a data set. First-n can be applied on a regular data set, a grouped data set, or a grouped-sorted data set. Grouping keys can be specified as key-selector functions,
+        tuple positions or case class fields.</p>
+{% highlight scala %}
+val in: DataSet[(Int, String)] = // [...]
+// regular data set
+val result1 = in.first(3)
+// grouped data set
+val result2 = in.groupBy(0).first(3)
+// grouped-sorted data set
+val result3 = in.groupBy(0).sortGroup(1, Order.ASCENDING).first(3)
+{% endhighlight %}
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+----------
+
+The following transformations are available on data sets of Tuples:
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Transformation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td><strong>MinBy / MaxBy</strong></td>
+      <td>
+        <p>Selects a tuple from a group of tuples whose values of one or more fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary tuple of these tuples is returned. MinBy (MaxBy) may be applied on a full data set or a grouped data set.</p>
+{% highlight scala %}
+val in: DataSet[(Int, Double, String)] = // [...]
+// a data set with a single tuple with minimum values for the Int and String fields.
+val out: DataSet[(Int, Double, String)] = in.minBy(0, 2)
+// a data set with one tuple for each group with the minimum value for the Double field.
+val out2: DataSet[(Int, Double, String)] = in.groupBy(2)
+                                             .minBy(1)
+{% endhighlight %}
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+Extraction from tuples, case classes and collections via anonymous pattern matching, like the following:
+{% highlight scala %}
+val data: DataSet[(Int, String, Double)] = // [...]
+data.map {
+  case (id, name, temperature) => // [...]
+}
+{% endhighlight %}
+is not supported by the API out-of-the-box. To use this feature, you should use a <a href="../scala_api_extensions.html">Scala API extension</a>.
+
+</div>
+</div>
+
+The [parallelism]({{ site.baseurl }}/dev/api_concepts.html#parallel-execution) of a transformation can be defined by `setParallelism(int)`, while
+`name(String)` assigns a custom name to a transformation, which is helpful for debugging. The same is
+possible for [Data Sources](#data-sources) and [Data Sinks](#data-sinks).
+
+`withParameters(Configuration)` passes Configuration objects, which can be accessed from the `open()` method inside the user function.
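+
+As a short sketch (assuming a user-defined `MyMapper` function that implements `MapFunction<String, Integer>`):
+
+{% highlight java %}
+DataSet<String> input = // [...]
+
+// name the transformation for debugging and set its parallelism
+DataSet<Integer> result = input.map(new MyMapper())
+                               .name("my mapper")
+                               .setParallelism(4);
+{% endhighlight %}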
+
+{% top %}
+
+Data Sources
+------------
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+Data sources create the initial data sets, such as from files or from Java collections. The general
+mechanism of creating data sets is abstracted behind an
+{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/io/InputFormat.java "InputFormat"%}.
+Flink comes
+with several built-in formats to create data sets from common file formats. Many of them have
+shortcut methods on the *ExecutionEnvironment*.
+
+File-based:
+
+- `readTextFile(path)` / `TextInputFormat` - Reads files line wise and returns them as Strings.
+
+- `readTextFileWithValue(path)` / `TextValueInputFormat` - Reads files line wise and returns them as
+  StringValues. StringValues are mutable strings.
+
+- `readCsvFile(path)` / `CsvInputFormat` - Parses files of comma (or another char) delimited fields.
+  Returns a DataSet of tuples or POJOs. Supports the basic java types and their Value counterparts as field
+  types.
+
+- `readFileOfPrimitives(path, Class)` / `PrimitiveInputFormat` - Parses files of new-line (or another char sequence)
+  delimited primitive data types such as `String` or `Integer`.
+
+- `readFileOfPrimitives(path, delimiter, Class)` / `PrimitiveInputFormat` - Parses files of new-line (or another char sequence)
+   delimited primitive data types such as `String` or `Integer` using the given delimiter.
+
+- `readHadoopFile(FileInputFormat, Key, Value, path)` / `FileInputFormat` - Creates a JobConf and reads file from the specified
+   path with the specified FileInputFormat, Key class and Value class and returns them as Tuple2<Key, Value>.
+
+- `readSequenceFile(Key, Value, path)` / `SequenceFileInputFormat` - Creates a JobConf and reads file from the specified path with
+   type SequenceFileInputFormat, Key class and Value class and returns them as Tuple2<Key, Value>.
+
+
+Collection-based:
+
+- `fromCollection(Collection)` - Creates a data set from a `java.util.Collection`. All elements
+  in the collection must be of the same type.
+
+- `fromCollection(Iterator, Class)` - Creates a data set from an iterator. The class specifies the
+  data type of the elements returned by the iterator.
+
+- `fromElements(T ...)` - Creates a data set from the given sequence of objects. All objects must be
+  of the same type.
+
+- `fromParallelCollection(SplittableIterator, Class)` - Creates a data set from an iterator, in
+  parallel. The class specifies the data type of the elements returned by the iterator.
+
+- `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in
+  parallel.
+
+Generic:
+
+- `readFile(inputFormat, path)` / `FileInputFormat` - Accepts a file input format.
+
+- `createInput(inputFormat)` / `InputFormat` - Accepts a generic input format.
+
+**Examples**
+
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// read text file from local files system
+DataSet<String> localLines = env.readTextFile("file:///path/to/my/textfile");
+
+// read text file from a HDFS running at nnHost:nnPort
+DataSet<String> hdfsLines = env.readTextFile("hdfs://nnHost:nnPort/path/to/my/textfile");
+
+// read a CSV file with three fields
+DataSet<Tuple3<Integer, String, Double>> csvInput = env.readCsvFile("hdfs:///the/CSV/file")
+                       .types(Integer.class, String.class, Double.class);
+
+// read a CSV file with five fields, taking only two of them
+DataSet<Tuple2<String, Double>> csvInput = env.readCsvFile("hdfs:///the/CSV/file")
+                               .includeFields("10010")  // take the first and the fourth field
+                               .types(String.class, Double.class);
+
+// read a CSV file with three fields into a POJO (Person.class) with corresponding fields
+DataSet<Person> csvInput = env.readCsvFile("hdfs:///the/CSV/file")
+                         .pojoType(Person.class, "name", "age", "zipcode");
+
+
+// read a file from the specified path of type TextInputFormat
+DataSet<Tuple2<LongWritable, Text>> tuples =
+ env.readHadoopFile(new TextInputFormat(), LongWritable.class, Text.class, "hdfs://nnHost:nnPort/path/to/file");
+
+// read a file from the specified path of type SequenceFileInputFormat
+DataSet<Tuple2<IntWritable, Text>> tuples =
+ env.readSequenceFile(IntWritable.class, Text.class, "hdfs://nnHost:nnPort/path/to/file");
+
+// creates a set from some given elements
+DataSet<String> value = env.fromElements("Foo", "bar", "foobar", "fubar");
+
+// generate a number sequence
+DataSet<Long> numbers = env.generateSequence(1, 10000000);
+
+// Read data from a relational database using the JDBC input format
+DataSet<Tuple2<String, Integer>> dbData =
+    env.createInput(
+      // create and configure input format
+      JDBCInputFormat.buildJDBCInputFormat()
+                     .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
+                     .setDBUrl("jdbc:derby:memory:persons")
+                     .setQuery("select name, age from persons")
+                     .finish(),
+      // specify type information for DataSet
+      new TupleTypeInfo(Tuple2.class, STRING_TYPE_INFO, INT_TYPE_INFO)
+    );
+
+// Note: Flink's program compiler needs to infer the data types of the data items which are returned
+// by an InputFormat. If this information cannot be automatically inferred, it is necessary to
+// manually provide the type information as shown in the examples above.
+{% endhighlight %}
+
+#### Configuring CSV Parsing
+
+Flink offers a number of configuration options for CSV parsing:
+
+- `types(Class ... types)` specifies the types of the fields to parse. **It is mandatory to configure the types of the parsed fields.**
+  In case of the type class Boolean.class, "True" (case-insensitive), "False" (case-insensitive), "1" and "0" are treated as booleans.
+
+- `lineDelimiter(String del)` specifies the delimiter of individual records. The default line delimiter is the new-line character `'\n'`.
+
+- `fieldDelimiter(String del)` specifies the delimiter that separates fields of a record. The default field delimiter is the comma character `','`.
+
+- `includeFields(boolean ... flag)`, `includeFields(String mask)`, or `includeFields(long bitMask)` defines which fields to read from the input file (and which to ignore). By default the first *n* fields (as defined by the number of types in the `types()` call) are parsed.
+
+- `parseQuotedStrings(char quoteChar)` enables quoted string parsing. Strings are parsed as quoted strings if the first character of the string field is the quote character (leading or tailing whitespaces are *not* trimmed). Field delimiters within quoted strings are ignored. Quoted string parsing fails if the last character of a quoted string field is not the quote character or if the quote character appears at some point which is not the start or the end of the quoted string field (unless the quote character is escaped using '\'). If quoted string parsing is enabled and the first character of the field is *not* the quoting string, the string is parsed as unquoted string. By default, quoted string parsing is disabled.
+
+- `ignoreComments(String commentPrefix)` specifies a comment prefix. All lines that start with the specified comment prefix are not parsed and ignored. By default, no lines are ignored.
+
+- `ignoreInvalidLines()` enables lenient parsing, i.e., lines that cannot be correctly parsed are ignored. By default, lenient parsing is disabled and invalid lines raise an exception.
+
+- `ignoreFirstLine()` configures the InputFormat to ignore the first line of the input file. By default no line is ignored.
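+
+For illustration, a sketch combining several of these options (the path, delimiter, and comment prefix are made up):
+
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+DataSet<Tuple2<String, Double>> csvInput = env.readCsvFile("hdfs:///the/CSV/file")
+    .fieldDelimiter("|")    // fields are separated by '|'
+    .ignoreComments("#")    // skip lines starting with '#'
+    .ignoreFirstLine()      // skip the header line
+    .types(String.class, Double.class);
+{% endhighlight %}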
+
+
+#### Recursive Traversal of the Input Path Directory
+
+For file-based inputs, when the input path is a directory, nested files are not enumerated by default. Instead, only the files inside the base directory are read, while nested files are ignored. Recursive enumeration of nested files can be enabled through the `recursive.file.enumeration` configuration parameter, like in the following example.
+
+{% highlight java %}
+// enable recursive enumeration of nested input files
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// create a configuration object
+Configuration parameters = new Configuration();
+
+// set the recursive enumeration parameter
+parameters.setBoolean("recursive.file.enumeration", true);
+
+// pass the configuration to the data source
+DataSet<String> logs = env.readTextFile("file:///path/with.nested/files")
+			  .withParameters(parameters);
+{% endhighlight %}
+
+</div>
+<div data-lang="scala" markdown="1">
+
+Data sources create the initial data sets, such as from files or from Java collections. The general
+mechanism of creating data sets is abstracted behind an
+{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/io/InputFormat.java "InputFormat"%}.
+Flink comes
+with several built-in formats to create data sets from common file formats. Many of them have
+shortcut methods on the *ExecutionEnvironment*.
+
+File-based:
+
+- `readTextFile(path)` / `TextInputFormat` - Reads files line wise and returns them as Strings.
+
+- `readTextFileWithValue(path)` / `TextValueInputFormat` - Reads files line wise and returns them as
+  StringValues. StringValues are mutable strings.
+
+- `readCsvFile(path)` / `CsvInputFormat` - Parses files of comma (or another char) delimited fields.
+  Returns a DataSet of tuples, case class objects, or POJOs. Supports the basic java types and their Value counterparts as field
+  types.
+
+- `readFileOfPrimitives(path, delimiter)` / `PrimitiveInputFormat` - Parses files of new-line (or another char sequence)
+  delimited primitive data types such as `String` or `Integer` using the given delimiter.
+
+- `readHadoopFile(FileInputFormat, Key, Value, path)` / `FileInputFormat` - Creates a JobConf and reads file from the specified
+   path with the specified FileInputFormat, Key class and Value class and returns them as Tuple2<Key, Value>.
+
+- `readSequenceFile(Key, Value, path)` / `SequenceFileInputFormat` - Creates a JobConf and reads file from the specified path with
+   type SequenceFileInputFormat, Key class and Value class and returns them as Tuple2<Key, Value>.
+
+Collection-based:
+
+- `fromCollection(Seq)` - Creates a data set from a Seq. All elements
+  in the collection must be of the same type.
+
+- `fromCollection(Iterator)` - Creates a data set from an Iterator. The class specifies the
+  data type of the elements returned by the iterator.
+
+- `fromElements(elements: _*)` - Creates a data set from the given sequence of objects. All objects
+  must be of the same type.
+
+- `fromParallelCollection(SplittableIterator)` - Creates a data set from an iterator, in
+  parallel. The class specifies the data type of the elements returned by the iterator.
+
+- `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in
+  parallel.
+
+Generic:
+
+- `readFile(inputFormat, path)` / `FileInputFormat` - Accepts a file input format.
+
+- `createInput(inputFormat)` / `InputFormat` - Accepts a generic input format.
+
+**Examples**
+
+{% highlight scala %}
+val env  = ExecutionEnvironment.getExecutionEnvironment
+
+// read text file from local files system
+val localLines = env.readTextFile("file:///path/to/my/textfile")
+
+// read text file from a HDFS running at nnHost:nnPort
+val hdfsLines = env.readTextFile("hdfs://nnHost:nnPort/path/to/my/textfile")
+
+// read a CSV file with three fields
+val csvInput = env.readCsvFile[(Int, String, Double)]("hdfs:///the/CSV/file")
+
+// read a CSV file with five fields, taking only two of them
+val csvInput = env.readCsvFile[(String, Double)](
+  "hdfs:///the/CSV/file",
+  includedFields = Array(0, 3)) // take the first and the fourth field
+
+// CSV input can also be used with Case Classes
+case class MyCaseClass(str: String, dbl: Double)
+val csvInput = env.readCsvFile[MyCaseClass](
+  "hdfs:///the/CSV/file",
+  includedFields = Array(0, 3)) // take the first and the fourth field
+
+// read a CSV file with three fields into a POJO (Person) with corresponding fields
+val csvInput = env.readCsvFile[Person](
+  "hdfs:///the/CSV/file",
+  pojoFields = Array("name", "age", "zipcode"))
+
+// create a set from some given elements
+val values = env.fromElements("Foo", "bar", "foobar", "fubar")
+
+// generate a number sequence
+val numbers = env.generateSequence(1, 10000000);
+
+// read a file from the specified path of type TextInputFormat
+val tuples = env.readHadoopFile(new TextInputFormat, classOf[LongWritable],
+ classOf[Text], "hdfs://nnHost:nnPort/path/to/file")
+
+// read a file from the specified path of type SequenceFileInputFormat
+val tuples = env.readSequenceFile(classOf[IntWritable], classOf[Text],
+ "hdfs://nnHost:nnPort/path/to/file")
+
+{% endhighlight %}
+
+#### Configuring CSV Parsing
+
+Flink offers a number of configuration options for CSV parsing:
+
+- `lineDelimiter: String` specifies the delimiter of individual records. The default line delimiter is the new-line character `'\n'`.
+
+- `fieldDelimiter: String` specifies the delimiter that separates fields of a record. The default field delimiter is the comma character `','`.
+
+- `includeFields: Array[Int]` defines which fields to read from the input file (and which to ignore). By default the first *n* fields (as defined by the arity of the requested result type) are parsed.
+
+- `pojoFields: Array[String]` specifies the fields of a POJO that are mapped to CSV fields. Parsers for CSV fields are automatically initialized based on the type and order of the POJO fields.
+
+- `parseQuotedStrings: Character` enables quoted string parsing. Strings are parsed as quoted strings if the first character of the string field is the quote character (leading or tailing whitespaces are *not* trimmed). Field delimiters within quoted strings are ignored. Quoted string parsing fails if the last character of a quoted string field is not the quote character. If quoted string parsing is enabled and the first character of the field is *not* the quoting string, the string is parsed as unquoted string. By default, quoted string parsing is disabled.
+
+- `ignoreComments: String` specifies a comment prefix. All lines that start with the specified comment prefix are not parsed and ignored. By default, no lines are ignored.
+
+- `lenient: Boolean` enables lenient parsing, i.e., lines that cannot be correctly parsed are ignored. By default, lenient parsing is disabled and invalid lines raise an exception.
+
+- `ignoreFirstLine: Boolean` configures the InputFormat to ignore the first line of the input file. By default no line is ignored.
+
+#### Recursive Traversal of the Input Path Directory
+
+For file-based inputs, when the input path is a directory, nested files are not enumerated by default. Instead, only the files inside the base directory are read, while nested files are ignored. Recursive enumeration of nested files can be enabled through the `recursive.file.enumeration` configuration parameter, like in the following example.
+
+{% highlight scala %}
+// enable recursive enumeration of nested input files
+val env  = ExecutionEnvironment.getExecutionEnvironment
+
+// create a configuration object
+val parameters = new Configuration
+
+// set the recursive enumeration parameter
+parameters.setBoolean("recursive.file.enumeration", true)
+
+// pass the configuration to the data source
+env.readTextFile("file:///path/with.nested/files").withParameters(parameters)
+{% endhighlight %}
+
+</div>
+</div>
+
+### Read Compressed Files
+
+Flink currently supports transparent decompression of input files if they are marked with an appropriate file extension. In particular, this means that no further configuration of the input formats is necessary; any `FileInputFormat`, including custom input formats, supports compression. Please notice that compressed files might not be read in parallel, thus impacting job scalability.
+
+The following table lists the currently supported compression methods.
+
+<br />
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Compression method</th>
+      <th class="text-left">File extensions</th>
+      <th class="text-left" style="width: 20%">Parallelizable</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><strong>DEFLATE</strong></td>
+      <td><code>.deflate</code></td>
+      <td>no</td>
+    </tr>
+    <tr>
+      <td><strong>GZip</strong></td>
+      <td><code>.gz</code>, <code>.gzip</code></td>
+      <td>no</td>
+    </tr>
+  </tbody>
+</table>
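+
+For example, reading a gzipped text file needs no extra configuration, since the decompression method is chosen based on the file extension (the path is illustrative):
+
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// the .gz extension triggers transparent GZip decompression
+DataSet<String> lines = env.readTextFile("file:///path/to/logs.gz");
+{% endhighlight %}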
+
+
+{% top %}
+
+Data Sinks
+----------
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+Data sinks consume DataSets and are used to store or return them. Data sink operations are described
+using an
+{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/io/OutputFormat.java "OutputFormat" %}.
+Flink comes with a variety of built-in output formats that are encapsulated behind operations on the
+DataSet:
+
+- `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are
+  obtained by calling the *toString()* method of each element.
+- `writeAsFormattedText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings
+  are obtained by calling a user-defined *format()* method for each element.
+- `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field
+  delimiters are configurable. The value for each field comes from the *toString()* method of the objects.
+- `print()` / `printToErr()` / `print(String msg)` / `printToErr(String msg)` - Prints the *toString()* value
+of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is
+prepended to the output. This can help distinguish between different calls to *print*. If the parallelism is
+greater than 1, the output will also be prepended with the identifier of the task that produced it.
+- `write()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports
+  custom object-to-bytes conversion.
+- `output()`/ `OutputFormat` - Most generic output method, for data sinks that are not file based
+  (such as storing the result in a database).
+
+A DataSet can be input to multiple operations. Programs can write or print a data set and at the
+same time run additional transformations on it, as sketched below.
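+
+A minimal sketch of such a fan-out (the paths are hypothetical):
+
+{% highlight java %}
+DataSet<String> lines = env.readTextFile("file:///path/to/input");
+
+// the same DataSet feeds a sink and a further transformation
+lines.writeAsText("file:///path/to/copy", WriteMode.OVERWRITE);
+
+DataSet<Integer> lineLengths = lines.map(new MapFunction<String, Integer>() {
+    @Override
+    public Integer map(String line) {
+        return line.length();
+    }
+});
+{% endhighlight %}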
+
+**Examples**
+
+Standard data sink methods:
+
+{% highlight java %}
+// text data
+DataSet<String> textData = // [...]
+
+// write DataSet to a file on the local file system
+textData.writeAsText("file:///my/result/on/localFS");
+
+// write DataSet to a file on a HDFS with a namenode running at nnHost:nnPort
+textData.writeAsText("hdfs://nnHost:nnPort/my/result/on/localFS");
+
+// write DataSet to a file and overwrite the file if it exists
+textData.writeAsText("file:///my/result/on/localFS", WriteMode.OVERWRITE);
+
+// tuples as lines with pipe as the separator "a|b|c"
+DataSet<Tuple3<String, Integer, Double>> values = // [...]
+values.writeAsCsv("file:///path/to/the/result/file", "\n", "|");
+
+// this writes tuples in the text formatting "(a, b, c)", rather than as CSV lines
+values.writeAsText("file:///path/to/the/result/file");
+
+// this writes values as strings using a user-defined TextFormatter object
+values.writeAsFormattedText("file:///path/to/the/result/file",
+    new TextFormatter<Tuple3<String, Integer, Double>>() {
+        public String format(Tuple3<String, Integer, Double> value) {
+            return value.f1 + " - " + value.f0;
+        }
+    });
+{% endhighlight %}
+
+Using a custom output format:
+
+{% highlight java %}
+DataSet<Tuple3<String, Integer, Double>> myResult = // [...]
+
+// write Tuple DataSet to a relational database
+myResult.output(
+    // build and configure OutputFormat
+    JDBCOutputFormat.buildJDBCOutputFormat()
+                    .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
+                    .setDBUrl("jdbc:derby:memory:persons")
+                    .setQuery("insert into persons (name, age, height) values (?,?,?)")
+                    .finish()
+    );
+{% endhighlight %}
+
+#### Locally Sorted Output
+
+The output of a data sink can be locally sorted on specified fields in specified orders using [tuple field positions]({{ site.baseurl }}/dev/api_concepts.html#define-keys-for-tuples) or [field expressions]({{ site.baseurl }}/dev/api_concepts.html#define-keys-using-field-expressions). This works for every output format.
+
+The following examples show how to use this feature:
+
+{% highlight java %}
+
+DataSet<Tuple3<Integer, String, Double>> tData = // [...]
+DataSet<Tuple2<BookPojo, Double>> pData = // [...]
+DataSet<String> sData = // [...]
+
+// sort output on String field in ascending order
+tData.sortPartition(1, Order.ASCENDING).print();
+
+// sort output on Double field in descending and Integer field in ascending order
+tData.sortPartition(2, Order.DESCENDING).sortPartition(0, Order.ASCENDING).print();
+
+// sort output on the "author" field of nested BookPojo in descending order
+pData.sortPartition("f0.author", Order.DESCENDING).writeAsText(...);
+
+// sort output on the full tuple in ascending order
+tData.sortPartition("*", Order.ASCENDING).writeAsCsv(...);
+
+// sort atomic type (String) output in descending order
+sData.sortPartition("*", Order.DESCENDING).writeAsText(...);
+
+{% endhighlight %}
+
+Globally sorted output is not supported yet.
+
+</div>
+<div data-lang="scala" markdown="1">
+Data sinks consume DataSets and are used to store or return them. Data sink operations are described
+using an
+{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/io/OutputFormat.java "OutputFormat" %}.
+Flink comes with a variety of built-in output formats that are encapsulated behind operations on the
+DataSet:
+
+- `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are
+  obtained by calling the *toString()* method of each element.
+- `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field
+  delimiters are configurable. The value for each field comes from the *toString()* method of the objects.
+- `print()` / `printToErr()` - Prints the *toString()* value of each element on the
+  standard out / standard error stream.
+- `write()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports
+  custom object-to-bytes conversion.
+- `output()`/ `OutputFormat` - Most generic output method, for data sinks that are not file based
+  (such as storing the result in a database).
+
+A DataSet can be input to multiple operations. Programs can write or print a data set and at the
+same time run additional transformations on them.
+
+**Examples**
+
+Standard data sink methods:
+
+{% highlight scala %}
+// text data
+val textData: DataSet[String] = // [...]
+
+// write DataSet to a file on the local file system
+textData.writeAsText("file:///my/result/on/localFS")
+
+// write DataSet to a file on a HDFS with a namenode running at nnHost:nnPort
+textData.writeAsText("hdfs://nnHost:nnPort/my/result/on/localFS")
+
+// write DataSet to a file and overwrite the file if it exists
+textData.writeAsText("file:///my/result/on/localFS", WriteMode.OVERWRITE)
+
+// tuples as lines with pipe as the separator "a|b|c"
+val values: DataSet[(String, Int, Double)] = // [...]
+values.writeAsCsv("file:///path/to/the/result/file", "\n", "|")
+
+// this writes tuples in the text formatting "(a, b, c)", rather than as CSV lines
+values.writeAsText("file:///path/to/the/result/file");
+
+// this writes values as strings using a user-defined formatting
+values map { tuple => tuple._1 + " - " + tuple._2 }
+  .writeAsText("file:///path/to/the/result/file")
+{% endhighlight %}
+
+
+#### Locally Sorted Output
+
+The output of a data sink can be locally sorted on specified fields in specified orders using [tuple field positions]({{ site.baseurl }}/dev/api_concepts.html#define-keys-for-tuples) or [field expressions]({{ site.baseurl }}/dev/api_concepts.html#define-keys-using-field-expressions). This works for every output format.
+
+The following examples show how to use this feature:
+
+{% highlight scala %}
+
+val tData: DataSet[(Int, String, Double)] = // [...]
+val pData: DataSet[(BookPojo, Double)] = // [...]
+val sData: DataSet[String] = // [...]
+
+// sort output on String field in ascending order
+tData.sortPartition(1, Order.ASCENDING).print()
+
+// sort output on Double field in descending and Int field in ascending order
+tData.sortPartition(2, Order.DESCENDING).sortPartition(0, Order.ASCENDING).print()
+
+// sort output on the "author" field of nested BookPojo in descending order
+pData.sortPartition("_1.author", Order.DESCENDING).writeAsText(...)
+
+// sort output on the full tuple in ascending order
+tData.sortPartition("_", Order.ASCENDING).writeAsCsv(...)
+
+// sort atomic type (String) output in descending order
+sData.sortPartition("_", Order.DESCENDING).writeAsText(...)
+
+{% endhighlight %}
+
+Globally sorted output is not supported yet.
+
+</div>
+</div>
+
+{% top %}
+
+
+Iteration Operators
+-------------------
+
+Iterations implement loops in Flink programs. The iteration operators encapsulate a part of the
+program and execute it repeatedly, feeding back the result of one iteration (the partial solution)
+into the next iteration. There are two types of iterations in Flink: **BulkIteration** and
+**DeltaIteration**.
+
+This section provides quick examples on how to use both operators. Check out the [Introduction to
+Iterations](iterations.html) page for a more detailed introduction.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+#### Bulk Iterations
+
+To create a BulkIteration, call the `iterate(int)` method on the DataSet that the iteration should
+start from. This will return an `IterativeDataSet`, which can be transformed with the regular operators. The
+single argument to the iterate call specifies the maximum number of iterations.
+
+To specify the end of an iteration, call the `closeWith(DataSet)` method on the `IterativeDataSet` to
+specify which transformation should be fed back to the next iteration. You can optionally specify a
+termination criterion with `closeWith(DataSet, DataSet)`, which evaluates the second DataSet and
+terminates the iteration if this DataSet is empty. If no termination criterion is specified, the
+iteration terminates after the given maximum number of iterations.
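+
+A minimal sketch of such a termination criterion (the input data set and both functions are hypothetical):
+
+{% highlight java %}
+IterativeDataSet<Long> loop = input.iterate(100); // at most 100 iterations
+
+DataSet<Long> next = loop.map(new StepFunction());
+
+// hypothetical filter that retains only elements that changed in this step
+DataSet<Long> changed = next.filter(new HasChangedFilter());
+
+// feed 'next' back into the loop; terminate early once 'changed' is empty
+DataSet<Long> result = loop.closeWith(next, changed);
+{% endhighlight %}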
+
+The following example iteratively estimates the number Pi. The goal is to count the number of random
+points that fall into the unit circle. In each iteration, a random point is picked. If this point
+lies inside the unit circle, we increment the count. Pi is then estimated as four times the resulting
+count divided by the number of iterations.
+
+{% highlight java %}
+final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// Create initial IterativeDataSet
+IterativeDataSet<Integer> initial = env.fromElements(0).iterate(10000);
+
+DataSet<Integer> iteration = initial.map(new MapFunction<Integer, Integer>() {
+    @Override
+    public Integer map(Integer i) throws Exception {
+        double x = Math.random();
+        double y = Math.random();
+
+        return i + ((x * x + y * y < 1) ? 1 : 0);
+    }
+});
+
+// Iteratively transform the IterativeDataSet
+DataSet<Integer> count = initial.closeWith(iteration);
+
+count.map(new MapFunction<Integer, Double>() {
+    @Override
+    public Double map(Integer count) throws Exception {
+        return count / (double) 10000 * 4;
+    }
+}).print();
+
+env.execute("Iterative Pi Example");
+{% endhighlight %}
+
+You can also check out the
+{% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/KMeans.java "K-Means example" %},
+which uses a BulkIteration to cluster a set of unlabeled points.
+
+#### Delta Iterations
+
+Delta iterations exploit the fact that certain algorithms do not change every data point of the
+solution in each iteration.
+
+In addition to the partial solution that is fed back (called workset) in every iteration, delta
+iterations maintain state across iterations (called solution set), which can be updated through
+deltas. The result of the iterative computation is the state after the last iteration. Please refer
+to the [Introduction to Iterations](iterations.html) for an overview of the basic principle of delta
+iterations.
+
+Defining a DeltaIteration is similar to defining a BulkIteration. For delta iterations, two data
+sets form the input to each iteration (workset and solution set), and two data sets are produced as
+the result (new workset, solution set delta) in each iteration.
+
+To create a DeltaIteration, call the `iterateDelta(DataSet, int, int)` method (or
+`iterateDelta(DataSet, int, int[])` for composite keys) on the initial solution set. The arguments are the
+initial delta set, the maximum number of iterations, and the key position(s). The returned
+`DeltaIteration` object gives you access to the DataSets representing the workset and solution set
+via the methods `iteration.getWorkset()` and `iteration.getSolutionSet()`.
+
+Below is an example of the syntax of a delta iteration:
+
+{% highlight java %}
+// read the initial data sets
+DataSet<Tuple2<Long, Double>> initialSolutionSet = // [...]
+
+DataSet<Tuple2<Long, Double>> initialDeltaSet = // [...]
+
+int maxIterations = 100;
+int keyPosition = 0;
+
+DeltaIteration<Tuple2<Long, Double>, Tuple2<Long, Double>> iteration = initialSolutionSet
+    .iterateDelta(initialDeltaSet, maxIterations, keyPosition);
+
+DataSet<Tuple2<Long, Double>> candidateUpdates = iteration.getWorkset()
+    .groupBy(1)
+    .reduceGroup(new ComputeCandidateChanges());
+
+DataSet<Tuple2<Long, Double>> deltas = candidateUpdates
+    .join(iteration.getSolutionSet())
+    .where(0)
+    .equalTo(0)
+    .with(new CompareChangesToCurrent());
+
+DataSet<Tuple2<Long, Double>> nextWorkset = deltas
+    .filter(new FilterByThreshold());
+
+iteration.closeWith(deltas, nextWorkset)
+    .writeAsCsv(outputPath);
+{% endhighlight %}
+
+</div>
+<div data-lang="scala" markdown="1">
+#### Bulk Iterations
+
+To create a BulkIteration, call the `iterate(int)` method on the DataSet that the iteration should
+start from and also specify a step function. The step function gets the input DataSet for the current
+iteration and must return a new DataSet. The parameter of the iterate call is the maximum number
+of iterations after which to stop.
+
+There is also the `iterateWithTermination(int)` function, which accepts a step function returning
+two DataSets: the result of the iteration step and a termination criterion. The iterations
+are stopped once the termination criterion DataSet is empty.
+
+The following example iteratively estimates the number Pi. The goal is to count the number of random
+points that fall into the unit circle. In each iteration, a random point is picked. If this point
+lies inside the unit circle, we increment the count. Pi is then estimated as four times the resulting
+count divided by the number of iterations.
+
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment()
+
+// Create initial DataSet
+val initial = env.fromElements(0)
+
+val count = initial.iterate(10000) { iterationInput: DataSet[Int] =>
+  val result = iterationInput.map { i =>
+    val x = Math.random()
+    val y = Math.random()
+    i + (if (x * x + y * y < 1) 1 else 0)
+  }
+  result
+}
+
+val result = count map { c => c / 10000.0 * 4 }
+
+result.print()
+
+env.execute("Iterative Pi Example");
+{% endhighlight %}
+
+You can also check out the
+{% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/clustering/KMeans.scala "K-Means example" %},
+which uses a BulkIteration to cluster a set of unlabeled points.
+
+#### Delta Iterations
+
+Delta iterations exploit the fact that certain algorithms do not change every data point of the
+solution in each iteration.
+
+In addition to the partial solution that is fed back (called workset) in every iteration, delta
+iterations maintain state across iterations (called solution set), which can be updated through
+deltas. The result of the iterative computation is the state after the last iteration. Please refer
+to the [Introduction to Iterations](iterations.html) for an overview of the basic principle of delta
+iterations.
+
+Defining a DeltaIteration is similar to defining a BulkIteration. For delta iterations, two data
+sets form the input to each iteration (workset and solution set), and two data sets are produced as
+the result (new workset, solution set delta) in each iteration.
+
+To create a DeltaIteration, call the `iterateDelta(initialWorkset, maxIterations, key)` method on the
+initial solution set. The step function takes two parameters: (solutionSet, workset), and must
+return two values: (solutionSetDelta, newWorkset).
+
+Below is an example of the syntax of a delta iteration:
+
+{% highlight scala %}
+// read the initial data sets
+val initialSolutionSet: DataSet[(Long, Double)] = // [...]
+
+val initialWorkset: DataSet[(Long, Double)] = // [...]
+
+val maxIterations = 100
+val keyPosition = 0
+
+val result = initialSolutionSet.iterateDelta(initialWorkset, maxIterations, Array(keyPosition)) {
+  (solution, workset) =>
+    val candidateUpdates = workset.groupBy(1).reduceGroup(new ComputeCandidateChanges())
+    val deltas = candidateUpdates.join(solution).where(0).equalTo(0)(new CompareChangesToCurrent())
+
+    val nextWorkset = deltas.filter(new FilterByThreshold())
+
+    (deltas, nextWorkset)
+}
+
+result.writeAsCsv(outputPath)
+
+env.execute()
+{% endhighlight %}
+
+</div>
+</div>
+
+{% top %}
+
+Operating on data objects in functions
+--------------------------------------
+
+Flink's runtime exchanges data with user functions in the form of Java objects. Functions receive input objects from the runtime as method parameters and return output objects as results. Because these objects are accessed by user functions and runtime code, it is very important to understand and follow the rules about how the user code may access, i.e., read and modify, these objects.
+
+User functions receive objects from Flink's runtime either as regular method parameters (like a `MapFunction`) or through an `Iterable` parameter (like a `GroupReduceFunction`). We refer to objects that the runtime passes to a user function as *input objects*. User functions can emit objects to the Flink runtime either as a method return value (like a `MapFunction`) or through a `Collector` (like a `FlatMapFunction`). We refer to objects which have been emitted by the user function to the runtime as *output objects*.
+
+Flink's DataSet API features two modes that differ in how Flink's runtime creates or reuses input objects. This behavior affects the guarantees and constraints for how user functions may interact with input and output objects. The following sections define these rules and give coding guidelines to write safe user function code.
+
+### Object-Reuse Disabled (DEFAULT)
+
+By default, Flink operates in object-reuse disabled mode. This mode ensures that functions always receive new input objects within a function call. The object-reuse disabled mode gives better guarantees and is safer to use. However, it comes with a certain processing overhead and might cause higher Java garbage collection activity. The following table explains how user functions may access input and output objects in object-reuse disabled mode.
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Operation</th>
+      <th class="text-center">Guarantees and Restrictions</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td><strong>Reading Input Objects</strong></td>
+      <td>
+        Within a method call it is guaranteed that the value of an input object does not change. This includes objects served by an Iterable. For example, it is safe to collect input objects served by an Iterable in a List or Map. Note that objects may be modified after the method call returns. It is <strong>not safe</strong> to remember objects across function calls.
+      </td>
+   </tr>
+   <tr>
+      <td><strong>Modifying Input Objects</strong></td>
+      <td>You may modify input objects.</td>
+   </tr>
+   <tr>
+      <td><strong>Emitting Input Objects</strong></td>
+      <td>
+        You may emit input objects. The value of an input object may have changed after it was emitted. It is <strong>not safe</strong> to read an input object after it was emitted.
+      </td>
+   </tr>
+   <tr>
+      <td><strong>Reading Output Objects</strong></td>
+      <td>
+        An object that was given to a Collector or returned as method result might have changed its value. It is <strong>not safe</strong> to read an output object.
+      </td>
+   </tr>
+   <tr>
+      <td><strong>Modifying Output Objects</strong></td>
+      <td>You may modify an object after it was emitted and emit it again.</td>
+   </tr>
+  </tbody>
+</table>
+
+**Coding guidelines for the object-reuse disabled (default) mode:**
+
+- Do not remember and read input objects across method calls.
+- Do not read objects after you emitted them.
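+
+For example, in this default mode it is safe to buffer the objects served by an Iterable within a single function call, as in this sketch (the types are illustrative):
+
+{% highlight java %}
+public class BufferingReducer
+        implements GroupReduceFunction<Tuple2<String, Integer>, Tuple2<String, Integer>> {
+
+    @Override
+    public void reduce(Iterable<Tuple2<String, Integer>> values,
+                       Collector<Tuple2<String, Integer>> out) {
+
+        // safe with object reuse disabled: the Iterable serves a fresh object each time
+        List<Tuple2<String, Integer>> buffer = new ArrayList<>();
+        for (Tuple2<String, Integer> v : values) {
+            buffer.add(v);
+        }
+
+        // ... inspect or reorder the buffer, then emit;
+        // do not read the objects again after emitting them
+        for (Tuple2<String, Integer> v : buffer) {
+            out.collect(v);
+        }
+    }
+}
+{% endhighlight %}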
+
+
+### Object-Reuse Enabled
+
+In object-reuse enabled mode, Flink's runtime minimizes the number of object instantiations. This can improve the performance and can reduce the Java garbage collection pressure. The object-reuse enabled mode is activated by calling `ExecutionConfig.enableObjectReuse()`. The following table explains how user functions may access input and output objects in object-reuse enabled mode.
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Operation</th>
+      <th class="text-center">Guarantees and Restrictions</th>
+    </tr>
+  </thead>
+  <tbody>
+   <tr>
+      <td><strong>Reading input objects received as regular method parameters</strong></td>
+      <td>
+        Input objects received as regular method arguments are not modified within a function call. Objects may be modified after the method call returns. It is <strong>not safe</strong> to remember objects across function calls.
+      </td>
+   </tr>
+   <tr>
+      <td><strong>Reading input objects received from an Iterable parameter</strong></td>
+      <td>
+        Input objects received from an Iterable are only valid until the next() method is called. An Iterable or Iterator may serve the same object instance multiple times. It is <strong>not safe</strong> to remember input objects received from an Iterable, e.g., by putting them in a List or Map.
+      </td>
+   </tr>
+   <tr>
+      <td><strong>Modifying Input Objects</strong></td>
+      <td>You <strong>must not</strong> modify input objects, except for input objects of MapFunction, FlatMapFunction, MapPartitionFunction, GroupReduceFunction, GroupCombineFunction, CoGroupFunction, and InputFormat.next(reuse).</td>
+   </tr>
+   <tr>
+      <td><strong>Emitting Input Objects</strong></td>
+      <td>
+        You <strong>must not</strong> emit input objects, except for input objects of MapFunction, FlatMapFunction, MapPartitionFunction, GroupReduceFunction, GroupCombineFunction, CoGroupFunction, and InputFormat.next(reuse).
+      </td>
+   </tr>
+   <tr>
+      <td><strong>Reading Output Objects</strong></td>
+      <td>
+        An object that was given to a Collector or returned as method result might have changed its value. It is <strong>not safe</strong> to read an output object.
+      </td>
+   </tr>
+   <tr>
+      <td><strong>Modifying Output Objects</strong></td>
+      <td>You may modify an output object and emit it again.</td>
+   </tr>
+  </tbody>
+</table>
+
+**Coding guidelines for object-reuse enabled:**
+
+- Do not remember input objects received from an `Iterable`.
+- Do not remember and read input objects across method calls.
+- Do not modify or emit input objects, except for input objects of `MapFunction`, `FlatMapFunction`, `MapPartitionFunction`, `GroupReduceFunction`, `GroupCombineFunction`, `CoGroupFunction`, and `InputFormat.next(reuse)`.
+- To reduce object instantiations, you can always emit a dedicated output object which is repeatedly modified but never read.
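+
+The last guideline can be followed with a pattern like this sketch:
+
+{% highlight java %}
+public class ToPairMapper implements MapFunction<Long, Tuple2<Long, Long>> {
+
+    // dedicated output object: repeatedly modified and emitted, but never read
+    private final Tuple2<Long, Long> reuse = new Tuple2<>(0L, 0L);
+
+    @Override
+    public Tuple2<Long, Long> map(Long value) {
+        reuse.f0 = value;
+        reuse.f1 = value * 2;
+        return reuse;
+    }
+}
+{% endhighlight %}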
+
+{% top %}
+
+Debugging
+---------
+
+Before running a data analysis program on a large data set in a distributed cluster, it is a good
+idea to make sure that the implemented algorithm works as desired. Hence, implementing data analysis
+programs is usually an incremental process of checking results, debugging, and improving.
+
+Flink provides a few nice features to significantly ease the development process of data analysis
+programs by supporting local debugging from within an IDE, injection of test data, and collection of
+result data. This section gives some hints on how to ease the development of Flink programs.
+
+### Local Execution Environment
+
+A `LocalEnvironment` starts a Flink system within the same JVM process it was created in. If you
+start the LocalEnvironment from an IDE, you can set breakpoints in your code and easily debug your
+program.
+
+A LocalEnvironment is created and used as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
+
+DataSet<String> lines = env.readTextFile(pathToTextFile);
+// build your program
+
+env.execute();
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+
+{% highlight scala %}
+val env = ExecutionEnvironment.createLocalEnvironment()
+
+val lines = env.readTextFile(pathToTextFile)
+// build your program
+
+env.execute()
+{% endhighlight %}
+</div>
+</div>
+
+### Collection Data Sources and Sinks
+
+Providing input for an analysis program and checking its output is cumbersome when done by creating
+input files and reading output files. Flink features special data sources and sinks which are backed
+by Java collections to ease testing. Once a program has been tested, the sources and sinks can be
+easily replaced by sources and sinks that read from / write to external data stores such as HDFS.
+
+Collection data sources can be used as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
+
+// Create a DataSet from a list of elements
+DataSet<Integer> myInts = env.fromElements(1, 2, 3, 4, 5);
+
+// Create a DataSet from any Java collection
+List<Tuple2<String, Integer>> data = ...
+DataSet<Tuple2<String, Integer>> myTuples = env.fromCollection(data);
+
+// Create a DataSet from an Iterator
+Iterator<Long> longIt = ...
+DataSet<Long> myLongs = env.fromCollection(longIt, Long.class);
+{% endhighlight %}
+
+A collection data sink is specified as follows:
+
+{% highlight java %}
+DataSet<Tuple2<String, Integer>> myResult = ...
+
+List<Tuple2<String, Integer>> outData = new ArrayList<Tuple2<String, Integer>>();
+myResult.output(new LocalCollectionOutputFormat<>(outData));
+{% endhighlight %}
+
+**Note:** Currently, the collection data sink is restricted to local execution, as a debugging tool.
+
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.createLocalEnvironment()
+
+// Create a DataSet from a list of elements
+val myInts = env.fromElements(1, 2, 3, 4, 5)
+
+// Create a DataSet from any Collection
+val data: Seq[(String, Int)] = ...
+val myTuples = env.fromCollection(data)
+
+// Create a DataSet from an Iterator
+val longIt: Iterator[Long] = ...
+val myLongs = env.fromCollection(longIt)
+{% endhighlight %}
+</div>
+</div>
+
+**Note:** Currently, the collection data source requires that data types and iterators implement
+`Serializable`. Furthermore, collection data sources cannot be executed in parallel
+(parallelism = 1).
+
+{% top %}
+
+Semantic Annotations
+--------------------
+
+Semantic annotations can be used to give Flink hints about the behavior of a function.
+They tell the system which fields of a function's input the function reads and evaluates and
+which fields it forwards unmodified from its input to its output.
+Semantic annotations are a powerful means to speed up execution, because they
+allow the system to reason about reusing sort orders or partitions across multiple operations. Using
+semantic annotations may eventually save the program from unnecessary data shuffling or unnecessary
+sorts and significantly improve the performance of a program.
+
+**Note:** The use of semantic annotations is optional. However, it is absolutely crucial to
+be conservative when providing semantic annotations!
+Incorrect semantic annotations will cause Flink to make incorrect assumptions about your program and
+might eventually lead to incorrect results.
+If the behavior of an operator is not clearly predictable, no annotation should be provided.
+Please read the documentation carefully.
+
+The following semantic annotations are currently supported.
+
+#### Forwarded Fields Annotation
+
+Forwarded fields information declares input fields which are forwarded unmodified by a function to the same position or to another position in the output.
+This information is used by the optimizer to infer whether a data property such as sorting or
+partitioning is preserved by a function.
+For functions that operate on groups of input elements such as `GroupReduce`, `GroupCombine`, `CoGroup`, and `MapPartition`, all fields that are defined as forwarded fields must always be jointly forwarded from the same input element. The forwarded fields of each element that is emitted by a group-wise function may originate from a different element of the function's input group.
+
+Field forward information is specified using [field expressions]({{ site.baseurl }}/dev/api_concepts.html#define-keys-using-field-expressions).
+Fields that are forwarded to the same position in the output can be specified by their position.
+The specified position must be valid for the input and output data type and have the same type.
+For example, the String `"f2"` declares that the third field of a Java input tuple is always equal to the third field in the output tuple.
+
+Fields which are forwarded unmodified to another position in the output are declared by specifying the
+source field in the input and the target field in the output as field expressions.
+The String `"f0->f2"` denotes that the first field of the Java input tuple is
+copied unchanged to the third field of the Java output tuple. The wildcard expression `*` can be used to refer to a whole input or output type, i.e., `"f0->*"` denotes that the output of a function is always equal to the first field of its Java input tuple.
+
+Multiple forwarded fields can be declared in a single String by separating them with semicolons as `"f0; f2->f1; f3->f2"` or in separate Strings `"f0", "f2->f1", "f3->f2"`. When specifying forwarded fields it is not required that all forwarded fields are declared, but all declarations must be correct.
+
+Forwarded field information can be declared by attaching Java annotations on function class definitions or
+by passing them as operator arguments after invoking a function on a DataSet as shown below.
+
+##### Function Class Annotations
+
+* `@ForwardedFields` for single input functions such as Map and Reduce.
+* `@ForwardedFieldsFirst` for the first input of a function with two inputs such as Join and CoGroup.
+* `@ForwardedFieldsSecond` for the second input of a function with two inputs such as Join and CoGroup.
+
+##### Operator Arguments
+
+* `data.map(myMapFnc).withForwardedFields()` for single input functions such as Map and Reduce.
+* `data1.join(data2).where().equalTo().with(myJoinFnc).withForwardedFieldsFirst()` for the first input of a function with two inputs such as Join and CoGroup.
+* `data1.join(data2).where().equalTo().with(myJoinFnc).withForwardedFieldsSecond()` for the second input of a function with two inputs such as Join and CoGroup.
+
+Please note that forwarded field information specified as a class annotation cannot be overwritten by operator arguments.
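+
+For illustration, declaring forwarded fields as an operator argument might look like this sketch (the data set and function are hypothetical):
+
+{% highlight java %}
+DataSet<Tuple3<String, Integer, Integer>> result =
+    input.map(new MyMapper())
+         .withForwardedFields("f0->f2");
+{% endhighlight %}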
+
+##### Example
+
+The following example shows how to declare forwarded field information using a function class annotation:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+@ForwardedFields("f0->f2")
+public class MyMap implements
+              MapFunction<Tuple2<Integer, Integer>, Tuple3<String, Integer, Integer>> {
+  @Override
+  public Tuple3<String, Integer, Integer> map(Tuple2<Integer, Integer> val) {
+    return new Tuple3<String, Integer, Integer>("foo", val.f1 / 2, val.f0);
+  }
+}
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+@ForwardedFields("_1->_3")
+class MyMap extends MapFunction[(Int, Int), (String, Int, Int)]{
+   def map(value: (Int, Int)): (String, Int, Int) = {
+    return ("foo", value._2 / 2, value._1)
+  }
+}
+{% endhighlight %}
+
+</div>
+</div>
+
+#### Non-Forwarded Fields
+
+Non-forwarded fields information declares all fields which are not preserved in the same position in a function's output.
+The values of all other fields are considered to be preserved in the same position in the output.
+Hence, non-forwarded fields information is the inverse of forwarded fields information.
+Non-forwarded field information for group-wise operators such as `GroupReduce`, `GroupCombine`, `CoGroup`, and `MapPartition` must fulfill the same requirements as forwarded field information.
+
+**IMPORTANT**: The specification of non-forwarded fields information is optional. However if used,
+**ALL!** non-forwarded fields must be specified, because all other fields are considered to be forwarded in place. It is safe to declare a forwarded field as non-forwarded.
+
+Non-forwarded fields are specified as a list of [field expressions]({{ site.baseurl }}/dev/api_concepts.html#define-keys-using-field-expressions). The list can be either given as a single String with field expressions separated by semicolons or as multiple Strings.
+For example both `"f1; f3"` and `"f1", "f3"` declare that the second and fourth field of a Java tuple
+are not preserved in place and all other fields are preserved in place.
+Non-forwarded field information can only be specified for functions which have identical input and output types.
+
+Non-forwarded field information is specified as function class annotations using the following annotations:
+
+* `@NonForwardedFields` for single input functions such as Map and Reduce.
+* `@NonForwardedFieldsFirst` for the first input of a function with two inputs such as Join and CoGroup.
+* `@NonForwardedFieldsSecond` for the second input of a function with two inputs such as Join and CoGroup.
+
+##### Example
+
+The following example shows how to declare non-forwarded field information:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+@NonForwardedFields("f1") // second field is not forwarded
+public class MyMap implements
+              MapFunction<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>> {
+  @Override
+  public Tuple2<Integer, Integer> map(Tuple2<Integer, Integer> val) {
+    return new Tuple2<Integer, Integer>(val.f0, val.f1 / 2);
+  }
+}
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+@NonForwardedFields("_2") // second field is not forwarded
+class MyMap extends MapFunction[(Int, Int), (Int, Int)]{
+  def map(value: (Int, Int)): (Int, Int) = {
+    return (value._1, value._2 / 2)
+  }
+}
+{% endhighlight %}
+
+</div>
+</div>
+
+#### Read Fields
+
+Read fields information declares all fields that are accessed and evaluated by a function, i.e.,
+all fields that are used by the function to compute its result.
+For example, fields which are evaluated in conditional statements or used for computations must be marked as read when specifying read fields information.
+Fields which are only forwarded unmodified to the output without evaluating their values, or which are not accessed at all, are not considered to be read.
+
+**IMPORTANT**: The specification of read fields information is optional. However if used,
+**ALL!** read fields must be specified. It is safe to declare a non-read field as read.
+
+Read fields are specified as a list of [field expressions]({{ site.baseurl }}/dev/api_concepts.html#define-keys-using-field-expressions). The list can be either given as a single String with field expressions separated by semicolons or as multiple Strings.
+For example both `"f1; f3"` and `"f1", "f3"` declare that the second and fourth field of a Java tuple are read and evaluated by the function.
+
+Read field information is specified as function class annotations using the following annotations:
+
+* `@ReadFields` for single input functions such as Map and Reduce.
+* `@ReadFieldsFirst` for the first input of a function with two inputs such as Join and CoGroup.
+* `@ReadFieldsSecond` for the second input of a function with two inputs such as Join and CoGroup.
+
+##### Example
+
+The following example shows how to declare read field information:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+@ReadFields("f0; f3") // f0 and f3 are read and evaluated by the function.
+public class MyMap implements
+              MapFunction<Tuple4<Integer, Integer, Integer, Integer>,
+                          Tuple2<Integer, Integer>> {
+  @Override
+  public Tuple2<Integer, Integer> map(Tuple4<Integer, Integer, Integer, Integer> val) {
+    if (val.f0 == 42) {
+      return new Tuple2<Integer, Integer>(val.f0, val.f1);
+    } else {
+      return new Tuple2<Integer, Integer>(val.f3 + 10, val.f1);
+    }
+  }
+}
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+@ReadFields("_1; _4") // _1 and _4 are read and evaluated by the function.
+class MyMap extends MapFunction[(Int, Int, Int, Int), (Int, Int)]{
+   def map(value: (Int, Int, Int, Int)): (Int, Int) = {
+    if (value._1 == 42) {
+      return (value._1, value._2)
+    } else {
+      return (value._4 + 10, value._2)
+    }
+  }
+}
+{% endhighlight %}
+
+</div>
+</div>
+
+{% top %}
+
+
+Broadcast Variables
+-------------------
+
+Broadcast variables allow you to make a data set available to all parallel instances of an
+operation, in addition to the regular input of the operation. This is useful for auxiliary data
+sets, or data-dependent parameterization. The data set will then be accessible at the operator as a
+Collection.
+
+- **Broadcast**: broadcast sets are registered by name via `withBroadcastSet(DataSet, String)`, and
+- **Access**: accessible via `getRuntimeContext().getBroadcastVariable(String)` at the target operator.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// 1. The DataSet to be broadcasted
+DataSet<Integer> toBroadcast = env.fromElements(1, 2, 3);
+
+DataSet<String> data = env.fromElements("a", "b");
+
+data.map(new RichMapFunction<String, String>() {
+    @Override
+    public void open(Configuration parameters) throws Exception {
+      // 3. Access the broadcasted DataSet as a Collection
+      Collection<Integer> broadcastSet = getRuntimeContext().getBroadcastVariable("broadcastSetName");
+    }
+
+
+    @Override
+    public String map(String value) throws Exception {
+        ...
+    }
+}).withBroadcastSet(toBroadcast, "broadcastSetName"); // 2. Broadcast the DataSet
+{% endhighlight %}
+
+Make sure that the names (`broadcastSetName` in the previous example) match when registering and
+accessing broadcasted data sets. For a complete example program, have a look at
+{% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/KMeans.java#L96 "K-Means Algorithm" %}.
+</div>
+<div data-lang="scala" markdown="1">
+
+{% highlight scala %}
+// 1. The DataSet to be broadcasted
+val toBroadcast = env.fromElements(1, 2, 3)
+
+val data = env.fromElements("a", "b")
+
+data.map(new RichMapFunction[String, String]() {
+    var broadcastSet: Traversable[Int] = null
+
+    override def open(config: Configuration): Unit = {
+      // 3. Access the broadcasted DataSet as a Collection
+      // asScala requires: import scala.collection.JavaConverters._
+      broadcastSet = getRuntimeContext().getBroadcastVariable[Int]("broadcastSetName").asScala
+    }
+
+    def map(in: String): String = {
+        ...
+    }
+}).withBroadcastSet(toBroadcast, "broadcastSetName") // 2. Broadcast the DataSet
+{% endhighlight %}
+
+Make sure that the names (`broadcastSetName` in the previous example) match when registering and
+accessing broadcasted data sets. For a complete example program, have a look at
+{% gh_link /flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/clustering/KMeans.scala#L96 "KMeans Algorithm" %}.
+</div>
+</div>
+
+**Note**: As the content of broadcast variables is kept in memory on each node, it should not become
+too large. For simple things like scalar values you can make parameters part of the closure
+of a function, or use the `withParameters(...)` method to pass in a configuration.
+
+{% top %}
+
+Distributed Cache
+-------------------
+
+Flink offers a distributed cache, similar to Apache Hadoop's, to make files locally accessible to parallel instances of user functions. This functionality can be used to share files that contain static external data such as dictionaries or machine-learned regression models.
+
+The cache works as follows. A program registers a file

<TRUNCATED>

[04/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/parallel_dataflow.svg
----------------------------------------------------------------------
diff --git a/docs/fig/parallel_dataflow.svg b/docs/fig/parallel_dataflow.svg
new file mode 100644
index 0000000..3a699a9
--- /dev/null
+++ b/docs/fig/parallel_dataflow.svg
@@ -0,0 +1,487 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   version="1.1"
+   width="657.83496"
+   height="439.34708"
+   id="svg2">
+  <defs
+     id="defs4" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     transform="translate(-218.47648,-86.629238)"
+     id="layer1">
+    <g
+       transform="translate(65.132093,66.963871)"
+       id="g2989">
+      <g
+         transform="translate(149.87814,1.1341165)"
+         id="g3265">
+        <path
+           d="m 9.930599,62.678115 c 0,-17.648147 14.309815,-31.957962 31.957961,-31.957962 17.657524,0 31.967339,14.309815 31.967339,31.957962 0,17.657524 -14.309815,31.957961 -31.967339,31.957961 -17.648146,0 -31.957961,-14.300437 -31.957961,-31.957961"
+           id="path3267"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="24.468645"
+           y="66.67173"
+           id="text3269"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
+        <path
+           d="m 147.93685,62.678115 c 0,-17.648147 14.34733,-31.957962 32.03298,-31.957962 17.69504,0 32.04236,14.309815 32.04236,31.957962 0,17.657524 -14.34732,31.957961 -32.04236,31.957961 -17.68565,0 -32.03298,-14.300437 -32.03298,-31.957961"
+           id="path3271"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="164.556"
+           y="66.67173"
+           id="text3273"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
+        <path
+           d="m 81.66722,58.533332 50.16875,0 0,-4.219801 8.4396,8.439602 -8.4396,8.439603 0,-4.219801 -50.16875,0 z"
+           id="path3275"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 219.67348,58.533332 50.16874,0 0,-4.219801 8.43961,8.439602 -8.43961,8.439603 0,-4.219801 -50.16874,0 z"
+           id="path3277"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 285.93373,62.678115 c 0,-17.648147 14.34733,-31.957962 32.05174,-31.957962 17.68565,0 32.03298,14.309815 32.03298,31.957962 0,17.648146 -14.34733,31.957961 -32.03298,31.957961 -17.70441,0 -32.05174,-14.309815 -32.05174,-31.957961"
+           id="path3279"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="295.73941"
+           y="54.668739"
+           id="text3281"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
+        <text
+           x="326.64713"
+           y="54.668739"
+           id="text3283"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
+        <text
+           x="292.28857"
+           y="66.67173"
+           id="text3285"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
+        <text
+           x="299.79044"
+           y="78.674713"
+           id="text3287"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
+        <path
+           d="m 357.67035,58.533332 50.16875,0 0,-4.219801 8.43961,8.439602 -8.43961,8.439603 0,-4.219801 -50.16875,0 z"
+           id="path3289"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 423.94937,62.678115 c 0,-17.648147 14.34732,-31.957962 32.03298,-31.957962 17.70441,0 32.03298,14.309815 32.03298,31.957962 0,17.648146 -14.32857,31.957961 -32.03298,31.957961 -17.68566,0 -32.03298,-14.309815 -32.03298,-31.957961"
+           id="path3291"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="444.97049"
+           y="66.67173"
+           id="text3293"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
+        <text
+           x="21.30452"
+           y="299.24048"
+           id="text3295"
+           xml:space="preserve"
+           style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Operator</text>
+        <text
+           x="23.104969"
+           y="309.74313"
+           id="text3297"
+           xml:space="preserve"
+           style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Subtask</text>
+        <path
+           d="m 41.991711,290.75368 -0.825205,-10.71829 1.247185,-0.0938 0.825206,10.71829 -1.247186,0.0938 z m -2.597522,-9.33045 2.109901,-5.17628 2.878842,4.79181 -4.988743,0.38447 z"
+           id="path3299"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 225.60933,152.70054 17.00111,0 0,-16.33532 33.99284,0 0,16.33532 16.99174,0 -33.99285,16.33532 z"
+           id="path3301"
+           style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 9.930599,241.5508 c 0,-17.69503 14.309815,-32.04236 31.957961,-32.04236 17.657524,0 31.967339,14.34733 31.967339,32.04236 0,17.69503 -14.309815,32.04236 -31.967339,32.04236 -17.648146,0 -31.957961,-14.34733 -31.957961,-32.04236"
+           id="path3303"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="24.468645"
+           y="239.48763"
+           id="text3305"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
+        <text
+           x="34.221073"
+           y="251.49062"
+           id="text3307"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
+        <path
+           d="m 147.93685,241.5508 c 0,-17.69503 14.34733,-32.04236 32.03298,-32.04236 17.69504,0 32.04236,14.34733 32.04236,32.04236 0,17.69503 -14.34732,32.04236 -32.04236,32.04236 -17.68565,0 -32.03298,-14.34733 -32.03298,-32.04236"
+           id="path3309"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="164.556"
+           y="239.48763"
+           id="text3311"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map</text>
+        <text
+           x="186.6115"
+           y="239.48763"
+           id="text3313"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()</text>
+        <text
+           x="172.35796"
+           y="251.49062"
+           id="text3315"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
+        <path
+           d="m 81.66722,237.331 50.16875,0 0,-4.2198 8.4396,8.4396 -8.4396,8.4396 0,-4.2198 -50.16875,0 z"
+           id="path3317"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 219.67348,237.331 50.16874,0 0,-4.2198 8.43961,8.4396 -8.43961,8.4396 0,-4.2198 -50.16874,0 z"
+           id="path3319"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 285.93373,241.56018 c 0,-17.70441 14.34733,-32.05174 32.05174,-32.05174 17.68565,0 32.03298,14.34733 32.03298,32.05174 0,17.68565 -14.34733,32.03298 -32.03298,32.03298 -17.70441,0 -32.05174,-14.34733 -32.05174,-32.03298"
+           id="path3321"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="295.73941"
+           y="227.48463"
+           id="text3323"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
+        <text
+           x="326.64713"
+           y="227.48463"
+           id="text3325"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
+        <text
+           x="292.28857"
+           y="239.48763"
+           id="text3327"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
+        <text
+           x="299.79044"
+           y="251.49062"
+           id="text3329"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply</text>
+        <text
+           x="327.09723"
+           y="251.49062"
+           id="text3331"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()</text>
+        <text
+           x="310.29306"
+           y="263.49359"
+           id="text3333"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
+        <path
+           d="m 361.70261,245.31111 45.1425,22.03674 1.85671,-3.78844 3.88222,11.29031 -11.29032,3.88222 1.85671,-3.78844 -45.16125,-22.03674 z"
+           id="path3335"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 418.79183,298.04925 c 0,-17.64815 14.34733,-31.95796 32.03298,-31.95796 17.70441,0 32.03298,14.30981 32.03298,31.95796 0,17.64815 -14.32857,31.95796 -32.03298,31.95796 -17.68565,0 -32.03298,-14.30981 -32.03298,-31.95796"
+           id="path3337"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="439.83328"
+           y="296.00317"
+           id="text3339"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
+        <text
+           x="443.13412"
+           y="308.00616"
+           id="text3341"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
+        <path
+           d="m 9.9399763,370.32976 c 0,-17.68566 14.3098147,-32.03298 31.9579617,-32.03298 17.648146,0 31.957961,14.34732 31.957961,32.03298 0,17.70441 -14.309815,32.05173 -31.957961,32.05173 -17.648147,0 -31.9579617,-14.34732 -31.9579617,-32.05173"
+           id="path3343"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="24.468645"
+           y="368.29453"
+           id="text3345"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
+        <text
+           x="34.221073"
+           y="380.29749"
+           id="text3347"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
+        <path
+           d="m 147.93685,370.32976 c 0,-17.68566 14.34733,-32.03298 32.03298,-32.03298 17.70442,0 32.05174,14.34732 32.05174,32.03298 0,17.70441 -14.34732,32.05173 -32.05174,32.05173 -17.68565,0 -32.03298,-14.34732 -32.03298,-32.05173"
+           id="path3349"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="164.556"
+           y="368.29453"
+           id="text3351"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map</text>
+        <text
+           x="186.6115"
+           y="368.29453"
+           id="text3353"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()</text>
+        <text
+           x="172.35796"
+           y="380.29749"
+           id="text3355"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
+        <path
+           d="m 81.676598,366.10996 50.168752,0 0,-4.2198 8.4396,8.4396 -8.4396,8.4396 0,-4.2198 -50.168752,0 z"
+           id="path3357"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 219.67348,366.10996 50.16874,0 0,-4.2198 8.43961,8.4396 -8.43961,8.4396 0,-4.2198 -50.16874,0 z"
+           id="path3359"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 285.93373,370.32976 c 0,-17.68566 14.34733,-32.03298 32.05174,-32.03298 17.68565,0 32.03298,14.34732 32.03298,32.03298 0,17.70441 -14.34733,32.05173 -32.03298,32.05173 -17.70441,0 -32.05174,-14.34732 -32.05174,-32.05173"
+           id="path3361"
+           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="295.73941"
+           y="356.29153"
+           id="text3363"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
+        <text
+           x="326.64713"
+           y="356.29153"
+           id="text3365"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
+        <text
+           x="292.28857"
+           y="368.29453"
+           id="text3367"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
+        <text
+           x="299.79044"
+           y="380.29749"
+           id="text3369"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply</text>
+        <text
+           x="327.09723"
+           y="380.29749"
+           id="text3371"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()</text>
+        <text
+           x="310.29306"
+           y="392.30048"
+           id="text3373"
+           xml:space="preserve"
+           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
+        <path
+           d="m 361.70261,366.54131 45.1425,-22.03674 1.85671,3.78845 3.88222,-11.29031 -11.29032,-3.88222 1.85671,3.78844 -45.16125,22.03674 z"
+           id="path3375"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <path
+           d="m 220.66747,351.51882 62.3968,-79.33226 3.31958,2.6069 -1.42536,-11.85295 -11.8342,1.42535 3.30082,2.6069 -62.39679,79.33226 z"
+           id="path3377"
+           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
+        <text
+           x="97.286781"
+           y="299.24048"
+           id="text3379"
+           xml:space="preserve"
+           style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Stream</text>
+        <text
+           x="94.886185"
+           y="309.74313"
+           id="text3381"
+           xml:space="preserve"
+           style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Partition</text>
+        <path
+           d="m 43.19201,317.87294 -1.68792,10.37133 1.219053,0.20631 1.706676,-10.37134 -1.237809,-0.2063 z m -3.338331,8.83345 1.650411,5.32633 3.282067,-4.51988 -4.932478,-0.80645 z"
+           id="path3383"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 112.5843,285.92436 -2.08177,-32.93321 1.25656,-0.075 2.07239,32.93321 -1.24718,0.075 z m -3.87284,-31.56412 2.18492,-5.14816 2.8132,4.82933 -4.99812,0.31883 z"
+           id="path3385"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 112.69683,316.20377 -3.20705,37.99697 1.25656,0.11253 3.20705,-37.99697 -1.25656,-0.11253 z m -4.96999,36.60912 2.08177,5.17629 2.90697,-4.76368 -4.98874,-0.41261 z"
+           id="path3387"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 3.5258784,397.7866 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,
 0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.2
 56563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656
  1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1
 .25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0.018755,-2.53188 0.056264,-0.8252 0.075019,-0.46887 1.2190537,0.18755 -0.056264,0.43136 0,-0.0563 -0.037509,0.80645 -1.256563,-0.075 z m 0.3750934,-2.58814 0.1687921,-0.67517 0.2063013,-0.58139 1.1815444,0.43135 -0.2063014,0.54389 0.018755,-0.0563 -0.1687921,0.65641 -1.200299,-0.31883 z m 0.9002243,-2.45686 0.225056,-0.48762 0.3938482,-0.65642 1.0690163,0.65642 -0.3750935,0.6189 0.018755,-0.0563 -0.2063014,0.46886 -1.1252803,-0.54388 z m 1.3878457,-2.21305 0.1875467,-0.26257 0.6564136,-0.73143 0.9377336,0.84396 -0.6376589,0.69392 0.037509,-0.0375 -0.1875468,0.24381 -0.9939976,-0.75018 z m 1.8004486,-1.89423 0.093773,-0.075 0.9564883,-0.73144 0.037509,-0
 .0187 0.6564136,1.06901 -0.018755,0 0.056264,-0.0188 -0.9189789,0.67517 0.037509,-0.0375 -0.056264,0.0563 -0.8439602,-0.91898 z m 2.2318056,-1.50037 0.975243,-0.45011 0.206301,-0.075 0.431358,1.16279 -0.187547,0.075 0.05626,-0.0188 -0.956488,0.45012 -0.525131,-1.14404 z m 2.419353,-0.95649 0.900224,-0.22505 0.375093,-0.0563 0.187547,1.23781 -0.356339,0.0375 0.07502,0 -0.862715,0.2063 -0.318829,-1.2003 z m 2.588145,-0.43136 0.825205,-0.0563 0.450112,0 0,1.25656 -0.431357,0 0.03751,0 -0.825205,0.0375 -0.05626,-1.23781 z m 2.53188,-0.0563 1.237809,0 0,1.25656 -1.237809,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237809,0 0,1.25656 -1.237809,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237808,0 0,1.25656 -1.237808,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.2565
 6 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237808,0 0,1.25656 -1.237808,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237808,0 0,1.25656 -1.237808,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237808,0 0,1.25656 -1.237808,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.031507,0 0.24381,0.0188 -0.05626,1.2378 -0.225056,0 0.01875,0 -1.012752,0 0,-1.25656 z m 2.588144,0.11253 0.956489,0.15004 0.337584,0.075 -0.31883,1.21906 -0.300074,-0.075 0.05626,0 -0.918979,-0.13128 0.187546,-1.23781 z m 2.550636,0.60015 0.768941,0.28132 0.450113,0.2063 -0.543886,1.12528 -0.412603,-0.18755 0.03751,0.0188 -0.731433,-0.28132 0.431358,-1.16279 z m 2.363089,1.10652 0.543
 885,0.31883 0.543886,0.41261 -0.750187,1.01275 -0.525131,-0.39385 0.05626,0.0188 -0.506376,-0.30007 0.637659,-1.06902 z m 2.081768,1.5754 0.31883,0.28132 0.600149,0.67516 -0.937733,0.82521 -0.581395,-0.63766 0.05626,0.0375 -0.300075,-0.26256 0.84396,-0.91898 z m 1.72543,1.96924 0.131283,0.16879 0.56264,0.93773 -1.069016,0.65642 -0.562641,-0.91898 0.03751,0.0563 -0.09377,-0.15004 0.993998,-0.75018 z m 1.294072,2.34433 0.412603,1.12528 0.03751,0.11253 -1.219054,0.31883 -0.01875,-0.0938 0.01875,0.0563 -0.412603,-1.08777 1.181544,-0.43136 z m 0.750187,2.51313 0.150038,1.05026 0.01875,0.24381 -1.256563,0.075 0,-0.22506 0,0.0563 -0.150037,-1.01276 1.237808,-0.18754 z m 0.225056,2.58814 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.25656
 3,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49438 0,1.25656 -1.256563,0 0,-1.25656 
 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49438 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 
 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49438 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49438 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0
 ,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m -0.03751,2.53188 -0.03751,0.58139 -0.112528,0.71268 -1.237809,-0.18755 0.112528,-0.6
 9392 0,0.075 0.01875,-0.56264 1.256563,0.075 z m -0.412603,2.58814 -0.112528,0.43136 -0.300074,0.80645 -1.16279,-0.43136 0.28132,-0.76894 -0.01875,0.0563 0.09377,-0.4126 1.219053,0.31883 z m -0.937733,2.43811 -0.131283,0.26256 -0.525131,0.86272 -1.069016,-0.63766 0.506376,-0.84396 -0.03751,0.0375 0.131282,-0.22506 1.125281,0.54389 z m -1.44411,2.19429 -0.03751,0.0563 -0.806451,0.90022 -0.05626,0.0563 -0.843961,-0.91898 0.03751,-0.0375 -0.05626,0.0375 0.768941,-0.84396 -0.03751,0.0375 0.03751,-0.0375 0.993998,0.75018 z m -1.894222,1.87547 -0.806451,0.61891 -0.262565,0.15003 -0.637659,-1.06901 0.225056,-0.15004 -0.05626,0.0375 0.787696,-0.5814 0.750187,0.994 z m -2.194297,1.4066 -0.750187,0.35634 -0.450112,0.16879 -0.431357,-1.16279 0.412603,-0.16879 -0.03751,0.0188 0.712678,-0.33759 0.543885,1.12528 z m -2.456862,0.91898 -0.656413,0.16879 -0.637659,0.0938 -0.168792,-1.23781 0.581395,-0.0938 -0.05626,0.0188 0.637659,-0.15004 0.300074,1.2003 z m -2.588144,0.39385 -0.581395,0.0375 -0.69
 3923,0 0,-1.25656 0.675168,0 -0.01875,0 0.543886,-0.0188 0.07502,1.23781 z m -2.531881,0.0375 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494372,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494372,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513127,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237809,0 0,-1.25656 1.237809,0 0,1.25656 z m -2.494372,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237809,0 0,-1.25656 1.237809,0 0,1.25656 z m -2.494372,0 -1.
 256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237809,0 0,-1.25656 1.237809,0 0,1.25656 z m -2.494372,0 -0.787696,0 -0.487621,-0.0375 0.05626,-1.23781 0.487621,0.0188 -0.03751,0 0.768941,0 0,1.25656 z m -2.588144,-0.15004 -0.712678,-0.11253 -0.581395,-0.15003 0.31883,-1.2003 0.543885,0.13128 -0.07502,-0.0188 0.693923,0.11252 -0.187546,1.23781 z m -2.550636,-0.63766 -0.506376,-0.2063 -0.675168,-0.31883 0.525131,-1.12528 0.656413,0.30008 -0.05626,-0.0188 0.506376,0.18755 -0.450112,1.18154 z m -2.3443336,-1.16279 -0.3188294,-0.18754 -0.7501869,-0.56264 0.7501869,-1.01276 0.7314322,0.54389 -0.056264,-0.0375 0.3000748,0.18755 -0.6564136,1.06901 z m -2.0442592,-1.6129 -0.1312828,-0.11253 -0.7689415,-0.84396 0.9377336,-0.84396 0.7501869,0.82521 -0.056264,-0.0375 0.112528,0.0938 -0.8439602,0.91898 z m -1.7066752,-2.06301 -0.5813949,-0.93774 -0.093773,-0.18754 1.1252803,-0.54389 0.075019,0.15004 -0.018755,-0.0375 0.5626
 402,0.91898 -1.0690163,0.63766 z m -1.2190537,-2.32558 -0.3188294,-0.88147 -0.093773,-0.35634 1.2190537,-0.30008 0.075019,0.31883 -0.018755,-0.0563 0.3188294,0.84396 -1.1815443,0.43136 z m -0.6939229,-2.51313 -0.112528,-0.80645 -0.037509,-0.50638 1.2565631,-0.0563 0.018755,0.46887 0,-0.075 0.1125281,0.7877 -1.2378084,0.18754 z"
+           id="path3389"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 78.075701,399.02441 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563
 ,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.25
 6563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.
 256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25
 656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0.01875,-2.53188 0.03751,-0.67517 0.09377,-0.63766 1.237808,0.18755 -0.09377,0.60015 0.01875,-0.0563 -0.03751,0.63766 -1.256563,-0.0563 z m 0.412603,-2.58814 0.07502,-0.30008 0.356339,-0.95649 1.181544,0.45012 -0.356339,0.91897 0.01875,-0.0563 -0.05626,0.24381 -1.219054,-0.30007 z m 1.031507,-2.47562 0.468867,-0.7877 0.243811,-0.31883 0.993997,0.75019 -0.206301,0.30008 0.01875,-0.0563 -0.450113,0.76894 -1.069016,-0.65641 z m 1.537883,-2.11928 0.31883,-0.33758 0.637658,-0.56264 0.825206,0.91898 -0.600149,0.54388 0.05626,-0.0375 -0.300075,0.31883 -0.937734,-0.84396 z m 2.025505,-1.74418 0.918979,-0.54
 389 0.225056,-0.11253 0.543885,1.12528 -0.206301,0.11253 0.05626,-0.0375 -0.88147,0.52513 -0.656413,-1.06901 z m 2.344334,-1.18155 0.600149,-0.22505 0.637659,-0.1688 0.31883,1.21906 -0.618905,0.15004 0.07502,-0.0188 -0.581395,0.22506 -0.431357,-1.18155 z m 2.531881,-0.63766 0.28132,-0.0375 1.031507,-0.0563 0.05626,1.25656 -0.993998,0.0563 0.05626,-0.0188 -0.243811,0.0375 -0.187546,-1.23781 z m 2.588144,-0.0938 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237809,0 0,1.25656 -1.237809,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513133,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.
 25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.53188,0.0375 0.28132,0.0188 1.01276,0.15004 -0.18755,1.23781 -0.97524,-0.15004 0.0563,0.0188 -0.26256,-0.0188 0.075,-1.25656 z m 2.62566,0.52513 0.90022,0.33758 0.31883,0.15004 -0.54388,1.12528 -0.28132,-0.13128 0.0375,0.0187 -0.86272,-0.33758 0.43136,-1.16279 z m 2.36309,1.10653 0.46886,0.26256 0.61891,0.46887 -0.75019,1.01275 -0.60015,-0.45011 0.0563,0.0188 -0.43136,-0.24381 0.63766,-1.06901 z m 2.08177,1.59414 0.0563,0.0563 0.73143,0.80645 0.11253,0.15004 -0.994,0.75018 -0.11253,-0.13128 0.0375,0.0375 -0.67517,-0.75019 0.0375,0.0375 -0.0375,-0.0375 0.84397,-0.91898 z m 1.65041,2.10053 0.35634,0.56264 0.28132,0.58139 -
 1.14404,0.54389 -0.26257,-0.56264 0.0375,0.0563 -0.33758,-0.54388 1.06902,-0.63766 z m 1.12528,2.34433 0.0938,0.24381 0.26257,1.03151 -1.21906,0.30007 -0.24381,-0.97524 0.0188,0.0563 -0.0938,-0.2063 1.18155,-0.45011 z m 0.54388,2.62565 0.0563,0.97525 0,0.31883 -1.25657,0 0,-0.30008 0,0.0375 -0.0375,-0.95649 1.23781,-0.075 z m 0.0563,2.53189 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.2565
 7,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49438 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49438 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657
  1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-
 1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49438 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49438 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.
 25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,0.50637 -0.0375,0.76894 -1.25657,-0.0563 0.0375,-0.76894 0,0.0375 0,-0.48762 1.25657,0 z m -0.20631,2.56939 -0.0188,0.2063 -0.28132,1.06901 -0.0188,0.075 -1.18155,-0.43136 0.0188,-0.0375 -0.0188,0.0563 0.26257,-1.01275 -0.0188,0.0563 0.0188,-0.16879 1.2378,0.18755 z m -0.80645,2.56939 -0.35633,0.75018 -0.24382,0.39385 -1.06901,-0.63766 0.22505,-0.37509 -0.0375,0.0563 0.35634,-0.73143 1.12528,0.54389 z m -1.35033,2.2318 -0.22506,0.31883 -0.6189,0.67517 -0.91898,-0.82521 0.58139,-0.65641 -0.0375,0.0375 0.22506,-0.30008 0.994,0.75019 z m -1.85672,1.91298 -0.7
 6894,0.60015 -0.30007,0.16879 -0.63766,-1.06902 0.26257,-0.16879 -0.0563,0.0375 0.75019,-0.56264 0.75018,0.994 z m -2.21305,1.4066 -0.48762,0.22505 -0.71268,0.26257 -0.43135,-1.16279 0.67517,-0.26256 -0.0375,0.0188 0.45011,-0.2063 0.54388,1.12528 z m -2.47561,0.86271 -0.13129,0.0375 -1.12528,0.1688 -0.0938,0 -0.0563,-1.25657 0.0563,0 -0.0563,0 1.05026,-0.15004 -0.0563,0.0188 0.11253,-0.0375 0.30008,1.21905 z m -2.62566,0.26257 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.5131
 3,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.513133,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494372,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -0.112528,0 -1.144035,-0.0563 -0.07502,-0.0188 0.168792,-1.23781 0.05626,0 -0.05626,0 1.087771,0.0563 -0.03751,0 0.112528,0 0,1.25656 z m -2.625654,-0.30008 -0.843961,-0.2063 -0.431357,-0.16879 0.450112,-1.16279 0.393848,0.15004 -0.07502,-0.0188 0.806451,0.20631 -0.300074,1.20029 z m -2.475617,-0.88147 -0.393848,-0.18754 -0.750187,-0.45011 0.656413,-1.06902 0.712678,0.43136 -0.05626,-0.0375 0.375093,0.18754 -0.543885,1.12528 z m -2.250561,-1.44411 -0.768941,-0.69392 -0.168792,-0.2063 0.918978,-0.84396 0.168793,0.18755 -0.05626,-0.0375 0.750186,0.67517 -0.84396,0.91897 z m -1
 .800448,-1.89422 -0.356339,-0.46886 -0.375094,-0.61891 1.069017,-0.65641 0.356339,0.60015 -0.01875,-0.0563 0.337584,0.45012 -1.012752,0.75018 z m -1.312827,-2.25056 -0.07502,-0.15004 -0.393848,-1.05026 -0.01875,-0.0938 1.200299,-0.30007 0.01875,0.0563 -0.01875,-0.0563 0.375094,0.97524 -0.01875,-0.0375 0.05626,0.11253 -1.12528,0.54388 z m -0.787697,-2.56939 -0.131282,-0.8252 -0.01875,-0.46887 1.256563,-0.075 0.01875,0.45011 -0.01875,-0.075 0.131283,0.80646 -1.237809,0.18754 z"
+           id="path3391"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <text
+           x="15.596898"
+           y="152.72169"
+           id="text3393"
+           xml:space="preserve"
+           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Operator</text>
+        <path
+           d="m 41.344675,137.05914 0,-31.63913 -1.247186,0 0,31.63913 1.247186,0 z m 1.875467,-30.38256 -2.503749,-5.0075 -2.494371,5.0075 4.99812,0 z"
+           id="path3395"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 40.106867,159.79918 -0.722055,40.14438 1.247185,0.0188 0.731433,-40.135 -1.256563,-0.0281 z m -2.578768,38.85969 2.409976,5.045 2.588144,-4.95123 -4.99812,-0.0938 z"
+           id="path3397"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <text
+           x="94.709129"
+           y="152.72169"
+           id="text3399"
+           xml:space="preserve"
+           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Stream</text>
+        <path
+           d="m 114.72233,137.0779 -5.04501,-55.213756 1.24719,-0.112528 5.03563,55.213754 -1.23781,0.11253 z m -6.79857,-53.797778 2.03488,-5.204421 2.94449,4.754309 -4.97937,0.450112 z"
+           id="path3401"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 114.71295,159.7523 -4.21042,40.15375 1.24718,0.13128 4.21043,-40.16313 -1.24719,-0.1219 z m -5.94523,38.71902 1.96924,5.23255 3.01013,-4.7168 -4.97937,-0.51575 z"
+           id="path3403"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 515.52843,213.10934 c 4.87622,0 8.83345,0.65641 8.83345,1.48162 l 0,97.76811 c 0,0.8252 3.95724,1.48162 8.83345,1.48162 -4.87621,0 -8.83345,0.65641 -8.83345,1.46286 l 0,97.78686 c 0,0.82521 -3.95723,1.48162 -8.83345,1.48162"
+           id="path3405"
+           style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+        <text
+           x="548.92151"
+           y="311.33228"
+           id="text3407"
+           xml:space="preserve"
+           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Streaming Dataflow</text>
+        <text
+           x="552.97247"
+           y="324.83566"
+           id="text3409"
+           xml:space="preserve"
+           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(parallelized view)</text>
+        <path
+           d="m 515.52843,19.14852 c 4.87622,0 8.83345,0.675169 8.83345,1.481619 l 0,38.315796 c 0,0.806451 3.95724,1.462864 8.83345,1.462864 -4.87621,0 -8.83345,0.675169 -8.83345,1.481619 l 0,38.315792 c 0,0.80645 -3.95723,1.46287 -8.83345,1.46287"
+           id="path3411"
+           style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+        <text
+           x="548.92151"
+           y="57.884895"
+           id="text3413"
+           xml:space="preserve"
+           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Streaming Dataflow</text>
+        <text
+           x="554.77295"
+           y="71.38826"
+           id="text3415"
+           xml:space="preserve"
+           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(condensed view)</text>
+        <text
+           x="436.57739"
+           y="455.61459"
+           id="text3417"
+           xml:space="preserve"
+           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">parallelism = 1</text>
+        <text
+           x="333.19894"
+           y="430.43018"
+           id="text3419"
+           xml:space="preserve"
+           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">parallelism = 2</text>
+        <path
+           d="m 472.86155,433.04538 -16.93547,-110.48377 1.23781,-0.18755 16.93547,110.48378 -1.23781,0.18754 z m -18.58588,-108.96464 1.70668,-5.32633 3.2258,4.57614 -4.93248,0.75019 z"
+           id="path3421"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 375.41227,414.94712 -40.30379,-145.70505 1.2003,-0.31882 40.30379,145.68629 -1.2003,0.33758 z m -41.78541,-143.99837 1.06902,-5.47636 3.75094,4.14478 -4.81996,1.33158 z"
+           id="path3423"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 352.00644,414.6658 -9.45235,-15.17253 1.06902,-0.67516 9.45235,15.19128 -1.06902,0.65641 z m -10.37133,-13.12827 -0.52513,-5.57013 4.76369,2.92572 -4.23856,2.64441 z"
+           id="path3425"
+           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
+        <path
+           d="m 145.59252,398.51803 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.
 49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0
 ,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.4
 9437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23
 781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.05027 0.0188,-0.22505 1.2378,0.0563 0,0.2063 0,-0.0375 0,1.05027 -1.25656,0 z m 0.11253,-2.58815 0.13128,-0.84396 0.11253,-0.45011 1.21905,0.31883 -0.11252,0.4126 0,-0.0563 -0.11253,0.80645 -1.23781,-0.18755 z m 0.63766,-2.53188 0.18754,-0.52513 0.31883,-0.69392 1.12528,0.54388 -0.30007,0.65642 0.0187,-0.0563 -0.18754,0.48762 -1.16279,-0.4126 z m 1.16279,-2.34433 0.11253,-0.18755 0.65641,-0.88147 0.994,0.75019 -0.63766,0.86271 0.0375,-0.0563 -0.0938,0.15004 -1.06901,-0.63766 z m 1.70667,-2.06302 0.69393,-0.63766 0.30007,-0.22505 0.75019,1.01275 -0.28132,0.2063 0.0375,-0.0375 -0.67517,0.60015 -0.82521,-0.91898 z m 2.06302,-1.59414 0.50637,-0.31883 0.65642,-0.30008 0.52513,1.12528 -0.61891,0.30008 0.0563,-0.0375 -0.46887,0.30007 -0.65641,-1.06901 z m 2.38184,-1.10653 0.26257,-0.0938
  0.99399,-0.26257 0.31883,1.2003 -0.97524,0.26257 0.0563,-0.0188 -0.24381,0.075 -0.4126,-1.16279 z m 2.6069,-0.5814 1.14403,-0.0563 0.13129,0 0,1.25656 -0.11253,0 0.0375,0 -1.14404,0.0563 -0.0563,-1.25657 z m 2.53188,-0.0563 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.
 51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 0.93774,0 0.33758,0.0188 -0.0563,1.25656 -0.33759,-0.0188 0.0375,0 -0.91898,0 0,-1.25656 z m 2.58815,0.13128 0.71267,0.11253 0.5814,0.15004 -0.31883,1.2003 -0.54389,-0.13128 0.0563,0 -0.67517,-0.0938 0.18755,-1.23781 z m 2.53188,0.65642 0.39385,0.15003 0.80645,0.3751 -0.54389,1.12528 -0.76894,-0.35634 0.0563,0.0187 -0.37509,-0.13128 0.43136,-1.18154 z m 2.32558,1.18154 0.075,0.0563 0.91897,0.67516 0.0938,0.0938 -0.82521,0.93773 -0.0938,-0.075 0.0375,0.0375 -0.86272,-0.65641 0.0563,0.0375 -0.0563,-0.0375 0.65642,-1.06902 z m 2.04426,1.74419 0.54388,0.60015 0.30008,0.39384 -0.994,0.75019 -0.28132,-0.37509 0.0375,0.0375 -0.52513,-0.5814 0.91898,-0.8252 z m 1.57539,2.06301 0.24381,0.4126 0.37509,0.75019 -1.12528,0.54388 -0.35634,-0.73143 0.0375,0.0563 -0.24381,-0.3751 1.06902,-0.65641 z m 1.08777,2.4006 0.0563,0.1
 3128 0.30008,1.12528 0,0.0563 -1.23781,0.18755 0,-0.0188 0.0187,0.0563 -0.28132,-1.06902 0.0188,0.0563 -0.0563,-0.11253 1.18154,-0.4126 z m 0.54389,2.62565 0.0375,1.01275 0,0.26257 -1.2378,0 0,-0.26257 0,0.0375 -0.0563,-0.99399 1.25657,-0.0563 z m 0.0375,2.53188 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1
 .2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.2378 -1.2378,0 0,-1.2378 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.2
 5657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.2378 -1.2378,0 0,-1.2378 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1
 .2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.2378 -1.2378,0 0,-1.2378 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.2
 5657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.2378 -1.2378,0 0,-1.2378 1.2378,0 z m -0.0187,2.51312 -0.0375,0.73143 -0.0938,0.5814 -1.21906,-0.18755 0.075,-0.56264 0,0.075 0.0188,-0.69392 1.25656,0.0563 z m -0.39385,2.6069 -0.11253,0.43136 -0.30007,0.80645 -1.16279,-0.43136 0.28132,-0.76894 -0.0188,0.0375 0.0938,-0.39385 1.21905,0.31883 z m -0.95649,2.43811 -0.0563,0.11253 -0.5814,0.97524 -0.0563,0.075 -1.01275,-0.75019 0.0375,-0.0563 -0.0375,0.0563 0.56264,-0.91898 -0.0188,0.0563 0.0375,-0.0938 1.12528,0.54389 z m -1.50037,2.1943 -0.5814,0.65641 -0.33758,0.28132 -0.84396,-0.91898 0.31883,-0.28132 -0.0563,0.0563 0.56264,-0.63765 0.93773,0.84396 z m -1.93173,1.78169 -0.4126,0.30007 -0.67517,0.41261 -0.65642,-1.06902 0.65642,-0.39385 -0.0375,0.0375 0.39385,-0.30007 0.73143,1.01275 z m -2.25056,1.31283 -0.16879,0.075 -1.08778,0.39384 -0.0188,0.0188 -0.31883,-1.21906 0,0 -0
[remaining machine-generated SVG path coordinate data for the program_dataflow.svg figure omitted]

<TRUNCATED>

[71/89] [abbrv] flink git commit: [FLINK-4346] [rpc] Add new RPC abstraction

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java
new file mode 100644
index 0000000..464a261
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.resourcemanager;
+
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.RpcTimeout;
+import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+/**
+ * {@link ResourceManager} rpc gateway interface.
+ */
+public interface ResourceManagerGateway extends RpcGateway {
+
+	/**
+	 * Register a {@link JobMaster} at the resource manager.
+	 *
+	 * @param jobMasterRegistration Job master registration information
+	 * @param timeout Timeout for the future to complete
+	 * @return Future registration response
+	 */
+	Future<RegistrationResponse> registerJobMaster(
+		JobMasterRegistration jobMasterRegistration,
+		@RpcTimeout FiniteDuration timeout);
+
+	/**
+	 * Register a {@link JobMaster} at the resource manager.
+	 *
+	 * @param jobMasterRegistration Job master registration information
+	 * @return Future registration response
+	 */
+	Future<RegistrationResponse> registerJobMaster(JobMasterRegistration jobMasterRegistration);
+
+	/**
+	 * Requests a slot from the resource manager.
+	 *
+	 * @param slotRequest Slot request
+	 * @return Future slot assignment
+	 */
+	Future<SlotAssignment> requestSlot(SlotRequest slotRequest);
+}
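
For illustration, a job master holding such a gateway could register itself roughly as follows. This is a minimal hypothetical sketch, not part of the commit; it assumes the gateway was resolved through the RpcService and that the JobMasterRegistration has already been constructed by the caller:

import org.apache.flink.runtime.rpc.resourcemanager.JobMasterRegistration;
import org.apache.flink.runtime.rpc.resourcemanager.RegistrationResponse;
import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
import scala.concurrent.Await;
import scala.concurrent.Future;
import scala.concurrent.duration.FiniteDuration;

import java.util.concurrent.TimeUnit;

public class RegistrationSketch {

	// assumed: 'gateway' was resolved through the RpcService and
	// 'registration' was built by the calling JobMaster
	static RegistrationResponse register(
			ResourceManagerGateway gateway,
			JobMasterRegistration registration) throws Exception {

		FiniteDuration timeout = new FiniteDuration(10, TimeUnit.SECONDS);

		// the @RpcTimeout parameter bounds how long the returned future may take
		Future<RegistrationResponse> response = gateway.registerJobMaster(registration, timeout);

		// test-style blocking wait; production code would attach a callback instead
		return Await.result(response, timeout);
	}
}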

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/SlotAssignment.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/SlotAssignment.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/SlotAssignment.java
new file mode 100644
index 0000000..86cd8b7
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/SlotAssignment.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.resourcemanager;
+
+import java.io.Serializable;
+
+/**
+ * Response to a {@link SlotRequest}; currently an empty serializable placeholder.
+ */
+public class SlotAssignment implements Serializable {
+	private static final long serialVersionUID = -6990813455942742322L;
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/SlotRequest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/SlotRequest.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/SlotRequest.java
new file mode 100644
index 0000000..d8fe268
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/SlotRequest.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.resourcemanager;
+
+import java.io.Serializable;
+
+/**
+ * A request for a slot at the resource manager; currently an empty serializable placeholder.
+ */
+public class SlotRequest implements Serializable {
+	private static final long serialVersionUID = -6586877187990445986L;
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
new file mode 100644
index 0000000..cdfc3bd
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.taskexecutor;
+
+import akka.dispatch.ExecutionContexts$;
+import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.RpcMethod;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcService;
+import scala.concurrent.ExecutionContext;
+
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+
+/**
+ * TaskExecutor implementation. The task executor is responsible for the execution of multiple
+ * {@link org.apache.flink.runtime.taskmanager.Task} instances.
+ *
+ * It offers the following methods as part of its rpc interface to interact with it remotely:
+ * <ul>
+ *     <li>{@link #executeTask(TaskDeploymentDescriptor)} executes a given task on the TaskExecutor</li>
+ *     <li>{@link #cancelTask(ExecutionAttemptID)} cancels a given task identified by the {@link ExecutionAttemptID}</li>
+ * </ul>
+ */
+public class TaskExecutor extends RpcEndpoint<TaskExecutorGateway> {
+	private final ExecutionContext executionContext;
+	private final Set<ExecutionAttemptID> tasks = new HashSet<>();
+
+	public TaskExecutor(RpcService rpcService, ExecutorService executorService) {
+		super(rpcService);
+		this.executionContext = ExecutionContexts$.MODULE$.fromExecutor(executorService);
+	}
+
+	/**
+	 * Execute the given task on the task executor. The task is described by the provided
+	 * {@link TaskDeploymentDescriptor}.
+	 *
+	 * @param taskDeploymentDescriptor Descriptor for the task to be executed
+	 * @return Acknowledge the start of the task execution
+	 */
+	@RpcMethod
+	public Acknowledge executeTask(TaskDeploymentDescriptor taskDeploymentDescriptor) {
+		tasks.add(taskDeploymentDescriptor.getExecutionId());
+		return Acknowledge.get();
+	}
+
+	/**
+	 * Cancel a task identified by its {@link ExecutionAttemptID}. If the task cannot be found, then
+	 * the method throws an {@link Exception}.
+	 *
+	 * @param executionAttemptId Execution attempt ID identifying the task to be canceled.
+	 * @return Acknowledge the task canceling
+	 * @throws Exception if the task with the given execution attempt id could not be found
+	 */
+	@RpcMethod
+	public Acknowledge cancelTask(ExecutionAttemptID executionAttemptId) throws Exception {
+		if (tasks.contains(executionAttemptId)) {
+			return Acknowledge.get();
+		} else {
+			throw new Exception("Could not find task.");
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java
new file mode 100644
index 0000000..450423e
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.taskexecutor;
+
+import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import scala.concurrent.Future;
+
+/**
+ * {@link TaskExecutor} rpc gateway interface.
+ */
+public interface TaskExecutorGateway extends RpcGateway {
+	/**
+	 * Execute the given task on the task executor. The task is described by the provided
+	 * {@link TaskDeploymentDescriptor}.
+	 *
+	 * @param taskDeploymentDescriptor Descriptor for the task to be executed
+	 * @return Future acknowledge of the start of the task execution
+	 */
+	Future<Acknowledge> executeTask(TaskDeploymentDescriptor taskDeploymentDescriptor);
+
+	/**
+	 * Cancel a task identified by its {@link ExecutionAttemptID}. If the task cannot be found, then
+	 * the method throws an {@link Exception}.
+	 *
+	 * @param executionAttemptId Execution attempt ID identifying the task to be canceled.
+	 * @return Future acknowledge of the task canceling
+	 */
+	Future<Acknowledge> cancelTask(ExecutionAttemptID executionAttemptId);
+}
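
The gateway mirrors the @RpcMethod-annotated methods of TaskExecutor, with non-void results wrapped in futures because the invocation may cross process boundaries. A minimal hypothetical sketch of a caller, assuming the gateway and the TaskDeploymentDescriptor come from the surrounding runtime:

import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
import org.apache.flink.runtime.messages.Acknowledge;
import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorGateway;
import scala.concurrent.Await;
import scala.concurrent.Future;
import scala.concurrent.duration.FiniteDuration;

import java.util.concurrent.TimeUnit;

public class DeploySketch {

	// assumed: 'gateway' points at a remote TaskExecutor and 'tdd' describes
	// the task to run; both are provided by the surrounding runtime
	static void deployAndCancel(TaskExecutorGateway gateway, TaskDeploymentDescriptor tdd) throws Exception {
		FiniteDuration timeout = new FiniteDuration(10, TimeUnit.SECONDS);

		// deployment is acknowledged asynchronously
		Future<Acknowledge> deployed = gateway.executeTask(tdd);
		Await.result(deployed, timeout);

		// cancellation uses the execution attempt id carried by the descriptor
		Future<Acknowledge> cancelled = gateway.cancelTask(tdd.getExecutionId());
		Await.result(cancelled, timeout);
	}
}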

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/test/java/org/apache/flink/runtime/resourcemanager/ResourceManagerITCase.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/resourcemanager/ResourceManagerITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/resourcemanager/ResourceManagerITCase.java
index ca09634..ce57fe6 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/resourcemanager/ResourceManagerITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/resourcemanager/ResourceManagerITCase.java
@@ -31,7 +31,6 @@ import org.apache.flink.runtime.testingUtils.TestingUtils;
 import org.apache.flink.runtime.testutils.TestingResourceManager;
 import org.apache.flink.util.TestLogger;
 import org.junit.AfterClass;
-import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.mockito.Mockito;

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
new file mode 100644
index 0000000..0ded25e
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/RpcCompletenessTest.java
@@ -0,0 +1,327 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc;
+
+import org.apache.flink.util.TestLogger;
+import org.junit.Test;
+import org.reflections.Reflections;
+import scala.concurrent.Future;
+
+import java.lang.annotation.Annotation;
+import java.lang.reflect.Method;
+import java.lang.reflect.ParameterizedType;
+import java.lang.reflect.Type;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public class RpcCompletenessTest extends TestLogger {
+	private static final Class<?> futureClass = Future.class;
+
+	@Test
+	public void testRpcCompleteness() {
+		Reflections reflections = new Reflections("org.apache.flink");
+
+		Set<Class<? extends RpcEndpoint>> classes = reflections.getSubTypesOf(RpcEndpoint.class);
+
+		for (Class<? extends RpcEndpoint> rpcEndpoint : classes) {
+			Type superClass = rpcEndpoint.getGenericSuperclass();
+
+			Class<?> rpcGatewayType = extractTypeParameter(superClass, 0);
+
+			if (rpcGatewayType != null) {
+				checkCompleteness(rpcEndpoint, (Class<? extends RpcGateway>) rpcGatewayType);
+			} else {
+				fail("Could not retrieve the rpc gateway class for the given rpc endpoint class " + rpcEndpoint.getName());
+			}
+		}
+	}
+
+	private void checkCompleteness(Class<? extends RpcEndpoint> rpcEndpoint, Class<? extends RpcGateway> rpcGateway) {
+		Method[] gatewayMethods = rpcGateway.getDeclaredMethods();
+		Method[] serverMethods = rpcEndpoint.getDeclaredMethods();
+
+		Map<String, Set<Method>> rpcMethods = new HashMap<>();
+		Set<Method> unmatchedRpcMethods = new HashSet<>();
+
+		for (Method serverMethod : serverMethods) {
+			if (serverMethod.isAnnotationPresent(RpcMethod.class)) {
+				Set<Method> methods = rpcMethods.get(serverMethod.getName());
+
+				if (methods == null) {
+					methods = new HashSet<>();
+					rpcMethods.put(serverMethod.getName(), methods);
+				}
+
+				methods.add(serverMethod);
+
+				unmatchedRpcMethods.add(serverMethod);
+			}
+		}
+
+		for (Method gatewayMethod : gatewayMethods) {
+			assertTrue(
+				"The rpc endpoint " + rpcEndpoint.getName() + " does not contain a RpcMethod " +
+					"annotated method with the same name and signature " +
+					generateEndpointMethodSignature(gatewayMethod) + ".",
+				rpcMethods.containsKey(gatewayMethod.getName()));
+
+			checkGatewayMethod(gatewayMethod);
+
+			if (!matchGatewayMethodWithEndpoint(gatewayMethod, rpcMethods.get(gatewayMethod.getName()), unmatchedRpcMethods)) {
+				fail("Could not find a RpcMethod annotated method in rpc endpoint " +
+					rpcEndpoint.getName() + " matching the rpc gateway method " +
+					generateEndpointMethodSignature(gatewayMethod) + " defined in the rpc gateway " +
+					rpcGateway.getName() + ".");
+			}
+		}
+
+		if (!unmatchedRpcMethods.isEmpty()) {
+			StringBuilder builder = new StringBuilder();
+
+			for (Method unmatchedRpcMethod : unmatchedRpcMethods) {
+				builder.append(unmatchedRpcMethod).append("\n");
+			}
+
+			fail("The rpc endpoint " + rpcEndpoint.getName() + " contains rpc methods which " +
+				"are not matched to gateway methods of " + rpcGateway.getName() + ":\n" +
+				builder.toString());
+		}
+	}
+
+	/**
+	 * Checks whether the gateway method fulfills the gateway method requirements.
+	 * <ul>
+	 *     <li>It checks whether the return type is void or a {@link Future} wrapping the actual result. </li>
+	 *     <li>It checks that the method's parameter list contains at most one parameter annotated with {@link RpcTimeout}.</li>
+	 * </ul>
+	 *
+	 * @param gatewayMethod Gateway method to check
+	 */
+	private void checkGatewayMethod(Method gatewayMethod) {
+		if (!gatewayMethod.getReturnType().equals(Void.TYPE)) {
+			assertTrue(
+				"The return type of method " + gatewayMethod.getName() + " in the rpc gateway " +
+					gatewayMethod.getDeclaringClass().getName() + " is non void and not a " +
+					"future. Non-void return types have to be returned as a future.",
+				gatewayMethod.getReturnType().equals(futureClass));
+		}
+
+		Annotation[][] parameterAnnotations = gatewayMethod.getParameterAnnotations();
+		int rpcTimeoutParameters = 0;
+
+		for (Annotation[] parameterAnnotation : parameterAnnotations) {
+			for (Annotation annotation : parameterAnnotation) {
+				if (annotation.annotationType().equals(RpcTimeout.class)) {
+					rpcTimeoutParameters++;
+				}
+			}
+		}
+
+		assertTrue("The gateway method " + gatewayMethod + " must have at most one RpcTimeout " +
+			"annotated parameter.", rpcTimeoutParameters <= 1);
+	}
+
+	/**
+	 * Checks whether we find a matching overloaded version for the gateway method among the methods
+	 * with the same name in the rpc endpoint.
+	 *
+	 * @param gatewayMethod Gateway method
+	 * @param endpointMethods Set of rpc methods on the rpc endpoint with the same name as the gateway
+	 *                   method
+	 * @param unmatchedRpcMethods Set of unmatched rpc methods on the endpoint side (so far)
+	 */
+	private boolean matchGatewayMethodWithEndpoint(Method gatewayMethod, Set<Method> endpointMethods, Set<Method> unmatchedRpcMethods) {
+		for (Method endpointMethod : endpointMethods) {
+			if (checkMethod(gatewayMethod, endpointMethod)) {
+				unmatchedRpcMethods.remove(endpointMethod);
+				return true;
+			}
+		}
+
+		return false;
+	}
+
+	private boolean checkMethod(Method gatewayMethod, Method endpointMethod) {
+		Class<?>[] gatewayParameterTypes = gatewayMethod.getParameterTypes();
+		Annotation[][] gatewayParameterAnnotations = gatewayMethod.getParameterAnnotations();
+
+		Class<?>[] endpointParameterTypes = endpointMethod.getParameterTypes();
+
+		List<Class<?>> filteredGatewayParameterTypes = new ArrayList<>();
+
+		assertEquals(gatewayParameterTypes.length, gatewayParameterAnnotations.length);
+
+		// filter out the RpcTimeout parameters
+		for (int i = 0; i < gatewayParameterTypes.length; i++) {
+			if (!isRpcTimeout(gatewayParameterAnnotations[i])) {
+				filteredGatewayParameterTypes.add(gatewayParameterTypes[i]);
+			}
+		}
+
+		if (filteredGatewayParameterTypes.size() != endpointParameterTypes.length) {
+			return false;
+		} else {
+			// check the parameter types
+			for (int i = 0; i < filteredGatewayParameterTypes.size(); i++) {
+				if (!checkType(filteredGatewayParameterTypes.get(i), endpointParameterTypes[i])) {
+					return false;
+				}
+			}
+
+			// check the return types
+			if (endpointMethod.getReturnType() == void.class) {
+				if (gatewayMethod.getReturnType() != void.class) {
+					return false;
+				}
+			} else {
+				// has a return value; the gateway method must wrap it in a future
+				Class<?> gatewayReturnType = gatewayMethod.getReturnType();
+
+				// sanity check that the return type of a gateway method is void or a future
+				if (!gatewayReturnType.equals(futureClass)) {
+					return false;
+				} else {
+					Class<?> valueClass = extractTypeParameter(gatewayMethod.getGenericReturnType(), 0);
+
+					if (endpointMethod.getReturnType().equals(futureClass)) {
+						Class<?> rpcEndpointValueClass = extractTypeParameter(endpointMethod.getGenericReturnType(), 0);
+
+						// check if we have the same future value types
+						if (valueClass != null && rpcEndpointValueClass != null && !checkType(valueClass, rpcEndpointValueClass)) {
+							return false;
+						}
+					} else {
+						if (valueClass != null && !checkType(valueClass, endpointMethod.getReturnType())) {
+							return false;
+						}
+					}
+				}
+			}
+
+			return gatewayMethod.getName().equals(endpointMethod.getName());
+		}
+	}
+
+	private boolean checkType(Class<?> firstType, Class<?> secondType) {
+		return firstType.equals(secondType);
+	}
+
+	/**
+	 * Generates from a gateway rpc method signature the corresponding rpc endpoint signature.
+	 *
+	 * For example the {@link RpcTimeout} annotation adds an additional parameter to the gateway
+	 * signature which is not relevant on the server side.
+	 *
+	 * @param method Method to generate the signature string for
+	 * @return String of the respective server side rpc method signature
+	 */
+	private String generateEndpointMethodSignature(Method method) {
+		StringBuilder builder = new StringBuilder();
+
+		if (method.getReturnType().equals(Void.TYPE)) {
+			builder.append("void").append(" ");
+		} else if (method.getReturnType().equals(futureClass)) {
+			Class<?> valueClass = extractTypeParameter(method.getGenericReturnType(), 0);
+
+			builder
+				.append(futureClass.getSimpleName())
+				.append("<")
+				.append(valueClass != null ? valueClass.getSimpleName() : "")
+				.append(">");
+
+			if (valueClass != null) {
+				builder.append("/").append(valueClass.getSimpleName());
+			}
+
+			builder.append(" ");
+		} else {
+			return "Invalid rpc method signature.";
+		}
+
+		builder.append(method.getName()).append("(");
+
+		Class<?>[] parameterTypes = method.getParameterTypes();
+		Annotation[][] parameterAnnotations = method.getParameterAnnotations();
+
+		assertEquals(parameterTypes.length, parameterAnnotations.length);
+
+		boolean first = true;
+
+		for (int i = 0; i < parameterTypes.length; i++) {
+			// filter out the RpcTimeout parameters
+			if (!isRpcTimeout(parameterAnnotations[i])) {
+				if (!first) {
+					builder.append(", ");
+				}
+
+				builder.append(parameterTypes[i].getName());
+				first = false;
+			}
+		}
+
+		builder.append(")");
+
+		return builder.toString();
+	}
+
+	private Class<?> extractTypeParameter(Type genericType, int position) {
+		if (genericType instanceof ParameterizedType) {
+			ParameterizedType parameterizedType = (ParameterizedType) genericType;
+
+			Type[] typeArguments = parameterizedType.getActualTypeArguments();
+
+			if (position < 0 || position >= typeArguments.length) {
+				throw new IndexOutOfBoundsException("The generic type " +
+					parameterizedType.getRawType() + " only has " + typeArguments.length +
+					" type arguments.");
+			} else {
+				Type typeArgument = typeArguments[position];
+
+				if (typeArgument instanceof Class<?>) {
+					return (Class<?>) typeArgument;
+				} else {
+					return null;
+				}
+			}
+		} else {
+			return null;
+		}
+	}
+
+	private boolean isRpcTimeout(Annotation[] annotations) {
+		for (Annotation annotation : annotations) {
+			if (annotation.annotationType().equals(RpcTimeout.class)) {
+				return true;
+			}
+		}
+
+		return false;
+	}
+}
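
Concretely, a gateway/endpoint pairing that satisfies these checks could look like the following hypothetical example: the gateway returns a Future and carries the @RpcTimeout parameter, while the endpoint method keeps the same name and remaining parameters and returns the plain value.

import org.apache.flink.runtime.rpc.RpcEndpoint;
import org.apache.flink.runtime.rpc.RpcGateway;
import org.apache.flink.runtime.rpc.RpcMethod;
import org.apache.flink.runtime.rpc.RpcService;
import org.apache.flink.runtime.rpc.RpcTimeout;
import scala.concurrent.Future;
import scala.concurrent.duration.FiniteDuration;

// hypothetical gateway: non-void results must be returned as futures
interface EchoGateway extends RpcGateway {
	Future<String> echo(String input, @RpcTimeout FiniteDuration timeout);
}

// hypothetical endpoint: same method name and parameters minus the timeout;
// the rpc layer adds the future wrapping
class EchoEndpoint extends RpcEndpoint<EchoGateway> {

	EchoEndpoint(RpcService rpcService) {
		super(rpcService);
	}

	@RpcMethod
	public String echo(String input) {
		return input;
	}
}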

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
new file mode 100644
index 0000000..c5bac94
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import akka.actor.ActorSystem;
+import akka.util.Timeout;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManager;
+import org.apache.flink.util.TestLogger;
+import org.junit.Test;
+import scala.concurrent.duration.Deadline;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class AkkaRpcServiceTest extends TestLogger {
+
+	/**
+	 * Tests that the {@link JobMaster} can connect to the {@link ResourceManager} using the
+	 * {@link AkkaRpcService}.
+	 */
+	@Test
+	public void testJobMasterResourceManagerRegistration() throws Exception {
+		Timeout akkaTimeout = new Timeout(10, TimeUnit.SECONDS);
+		ActorSystem actorSystem = AkkaUtils.createDefaultActorSystem();
+		ActorSystem actorSystem2 = AkkaUtils.createDefaultActorSystem();
+		AkkaRpcService akkaRpcService = new AkkaRpcService(actorSystem, akkaTimeout);
+		AkkaRpcService akkaRpcService2 = new AkkaRpcService(actorSystem2, akkaTimeout);
+		ExecutorService executorService = new ForkJoinPool();
+
+		ResourceManager resourceManager = new ResourceManager(akkaRpcService, executorService);
+		JobMaster jobMaster = new JobMaster(akkaRpcService2, executorService);
+
+		resourceManager.start();
+
+		ResourceManagerGateway rm = resourceManager.getSelf();
+
+		assertTrue(rm instanceof AkkaGateway);
+
+		AkkaGateway akkaClient = (AkkaGateway) rm;
+
+		jobMaster.start();
+		jobMaster.registerAtResourceManager(AkkaUtils.getAkkaURL(actorSystem, akkaClient.getActorRef()));
+
+		// wait for successful registration
+		FiniteDuration timeout = new FiniteDuration(20, TimeUnit.SECONDS);
+		Deadline deadline = timeout.fromNow();
+
+		while (deadline.hasTimeLeft() && !jobMaster.isConnected()) {
+			Thread.sleep(100);
+		}
+
+		assertFalse(deadline.isOverdue());
+
+		jobMaster.shutDown();
+		resourceManager.shutDown();
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
new file mode 100644
index 0000000..c143527
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.taskexecutor;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.blob.BlobKey;
+import org.apache.flink.runtime.deployment.InputGateDeploymentDescriptor;
+import org.apache.flink.runtime.deployment.ResultPartitionDeploymentDescriptor;
+import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.util.DirectExecutorService;
+import org.apache.flink.util.SerializedValue;
+import org.apache.flink.util.TestLogger;
+import org.junit.Test;
+
+import java.net.URL;
+import java.util.Collections;
+
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+
+public class TaskExecutorTest extends TestLogger {
+
+	/**
+	 * Tests that we can deploy and cancel a task on the TaskExecutor without exceptions.
+	 */
+	@Test
+	public void testTaskExecution() throws Exception {
+		RpcService testingRpcService = mock(RpcService.class);
+		DirectExecutorService directExecutorService = new DirectExecutorService();
+		TaskExecutor taskExecutor = new TaskExecutor(testingRpcService, directExecutorService);
+
+		TaskDeploymentDescriptor tdd = new TaskDeploymentDescriptor(
+			new JobID(),
+			"Test job",
+			new JobVertexID(),
+			new ExecutionAttemptID(),
+			new SerializedValue<ExecutionConfig>(null),
+			"Test task",
+			0,
+			1,
+			0,
+			new Configuration(),
+			new Configuration(),
+			"Invokable",
+			Collections.<ResultPartitionDeploymentDescriptor>emptyList(),
+			Collections.<InputGateDeploymentDescriptor>emptyList(),
+			Collections.<BlobKey>emptyList(),
+			Collections.<URL>emptyList(),
+			0
+		);
+
+		taskExecutor.executeTask(tdd);
+
+		taskExecutor.cancelTask(tdd.getExecutionId());
+	}
+
+	/**
+	 * Tests that cancelling a non-existing task throws an exception.
+	 */
+	@Test(expected=Exception.class)
+	public void testWrongTaskCancellation() throws Exception {
+		RpcService testingRpcService = mock(RpcService.class);
+		DirectExecutorService directExecutorService = new DirectExecutorService();
+		TaskExecutor taskExecutor = new TaskExecutor(testingRpcService, directExecutorService);
+
+		taskExecutor.cancelTask(new ExecutionAttemptID());
+
+		fail("The cancellation should have thrown an exception.");
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-runtime/src/test/java/org/apache/flink/runtime/util/DirectExecutorService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/util/DirectExecutorService.java b/flink-runtime/src/test/java/org/apache/flink/runtime/util/DirectExecutorService.java
new file mode 100644
index 0000000..1d7c971
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/util/DirectExecutorService.java
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.util;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.Callable;
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+public class DirectExecutorService implements ExecutorService {
+	private boolean _shutdown = false;
+
+	@Override
+	public void shutdown() {
+		_shutdown = true;
+	}
+
+	@Override
+	public List<Runnable> shutdownNow() {
+		_shutdown = true;
+		return Collections.emptyList();
+	}
+
+	@Override
+	public boolean isShutdown() {
+		return _shutdown;
+	}
+
+	@Override
+	public boolean isTerminated() {
+		return _shutdown;
+	}
+
+	@Override
+	public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException {
+		return _shutdown;
+	}
+
+	@Override
+	public <T> Future<T> submit(Callable<T> task) {
+		try {
+			T result = task.call();
+
+			return new CompletedFuture<>(result, null);
+		} catch (Exception e) {
+			return new CompletedFuture<>(null, e);
+		}
+	}
+
+	@Override
+	public <T> Future<T> submit(Runnable task, T result) {
+		task.run();
+
+		return new CompletedFuture<>(result, null);
+	}
+
+	@Override
+	public Future<?> submit(Runnable task) {
+		task.run();
+		return new CompletedFuture<>(null, null);
+	}
+
+	@Override
+	public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks) throws InterruptedException {
+		ArrayList<Future<T>> result = new ArrayList<>();
+
+		for (Callable<T> task : tasks) {
+			try {
+				result.add(new CompletedFuture<T>(task.call(), null));
+			} catch (Exception e) {
+				result.add(new CompletedFuture<T>(null, e));
+			}
+		}
+		return result;
+	}
+
+	@Override
+	public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit) throws InterruptedException {
+		long end = System.currentTimeMillis() + unit.toMillis(timeout);
+		Iterator<? extends Callable<T>> iterator = tasks.iterator();
+		ArrayList<Future<T>> result = new ArrayList<>();
+
+		while (end > System.currentTimeMillis() && iterator.hasNext()) {
+			Callable<T> callable = iterator.next();
+
+			try {
+				result.add(new CompletedFuture<T>(callable.call(), null));
+			} catch (Exception e) {
+				result.add(new CompletedFuture<T>(null, e));
+			}
+		}
+
+		while (iterator.hasNext()) {
+			iterator.next();
+			result.add(new Future<T>() {
+				@Override
+				public boolean cancel(boolean mayInterruptIfRunning) {
+					return false;
+				}
+
+				@Override
+				public boolean isCancelled() {
+					return true;
+				}
+
+				@Override
+				public boolean isDone() {
+					return false;
+				}
+
+				@Override
+				public T get() throws InterruptedException, ExecutionException {
+					throw new CancellationException("Task has been cancelled.");
+				}
+
+				@Override
+				public T get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
+					throw new CancellationException("Task has been cancelled.");
+				}
+			});
+		}
+
+		return result;
+	}
+
+	@Override
+	public <T> T invokeAny(Collection<? extends Callable<T>> tasks) throws InterruptedException, ExecutionException {
+		Exception exception = null;
+
+		for (Callable<T> task : tasks) {
+			try {
+				return task.call();
+			} catch (Exception e) {
+				// try next task
+				exception = e;
+			}
+		}
+
+		throw new ExecutionException("No tasks finished successfully.", exception);
+	}
+
+	@Override
+	public <T> T invokeAny(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
+		long end = System.currentTimeMillis() + unit.toMillis(timeout);
+		Exception exception = null;
+
+		Iterator<? extends Callable<T>> iterator = tasks.iterator();
+
+		while (end > System.currentTimeMillis() && iterator.hasNext()) {
+			Callable<T> callable = iterator.next();
+
+			try {
+				return callable.call();
+			} catch (Exception e) {
+				// ignore exception and try next
+				exception = e;
+			}
+		}
+
+		if (iterator.hasNext()) {
+			throw new TimeoutException("Could not finish execution of tasks within time.");
+		} else {
+			throw new ExecutionException("No tasks finished successfully.", exception);
+		}
+	}
+
+	@Override
+	public void execute(Runnable command) {
+		command.run();
+	}
+
+	public static class CompletedFuture<V> implements Future<V> {
+		private final V value;
+		private final Exception exception;
+
+		public CompletedFuture(V value, Exception exception) {
+			this.value = value;
+			this.exception = exception;
+		}
+
+		@Override
+		public boolean cancel(boolean mayInterruptIfRunning) {
+			return false;
+		}
+
+		@Override
+		public boolean isCancelled() {
+			return false;
+		}
+
+		@Override
+		public boolean isDone() {
+			return true;
+		}
+
+		@Override
+		public V get() throws InterruptedException, ExecutionException {
+			if (exception != null) {
+				throw new ExecutionException(exception);
+			} else {
+				return value;
+			}
+		}
+
+		@Override
+		public V get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
+			return get();
+		}
+	}
+}
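
Because every task runs synchronously on the calling thread, the returned futures are already complete when handed back, which keeps tests deterministic. A minimal usage sketch:

import org.apache.flink.runtime.util.DirectExecutorService;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class DirectExecutorSketch {

	public static void main(String[] args) throws Exception {
		ExecutorService direct = new DirectExecutorService();

		// call() runs inside submit(), on this thread
		Future<Integer> result = direct.submit(new Callable<Integer>() {
			@Override
			public Integer call() {
				return 40 + 2;
			}
		});

		// the future is already done; get() does not block
		System.out.println(result.isDone()); // true
		System.out.println(result.get());    // 42
	}
}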

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/flink-tests/pom.xml
----------------------------------------------------------------------
diff --git a/flink-tests/pom.xml b/flink-tests/pom.xml
index b09db1f..3202a9f 100644
--- a/flink-tests/pom.xml
+++ b/flink-tests/pom.xml
@@ -202,7 +202,6 @@ under the License.
 		<dependency>
 			<groupId>org.reflections</groupId>
 			<artifactId>reflections</artifactId>
-			<version>0.9.10</version>
 		</dependency>
 
 	</dependencies>

http://git-wip-us.apache.org/repos/asf/flink/blob/b273afad/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 0e8c3b7..f077ba9 100644
--- a/pom.xml
+++ b/pom.xml
@@ -405,6 +405,13 @@ under the License.
 				<artifactId>jackson-annotations</artifactId>
 				<version>${jackson.version}</version>
 			</dependency>
+
+			<dependency>
+				<groupId>org.reflections</groupId>
+				<artifactId>reflections</artifactId>
+				<version>0.9.10</version>
+				<scope>test</scope>
+			</dependency>
 		</dependencies>
 	</dependencyManagement>
 


[14/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/gelly/iterative_graph_processing.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/gelly/iterative_graph_processing.md b/docs/dev/libs/gelly/iterative_graph_processing.md
new file mode 100644
index 0000000..ea0b87d
--- /dev/null
+++ b/docs/dev/libs/gelly/iterative_graph_processing.md
@@ -0,0 +1,968 @@
+---
+title: Iterative Graph Processing
+nav-parent_id: graphs
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Gelly exploits Flink's efficient iteration operators to support large-scale iterative graph processing. Currently, we provide implementations of the vertex-centric, scatter-gather, and gather-sum-apply models. In the following sections, we describe these abstractions and show how you can use them in Gelly.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Vertex-Centric Iterations
+The vertex-centric model, also known as "think like a vertex" or "Pregel", expresses computation from the perspective of a vertex in the graph.
+The computation proceeds in synchronized iteration steps, called supersteps. In each superstep, each vertex executes one user-defined function.
+Vertices communicate with other vertices through messages. A vertex can send a message to any other vertex in the graph, as long as it knows its unique ID.
+
+The computational model is shown in the figure below. The dotted boxes correspond to parallelization units.
+In each superstep, all active vertices execute the
+same user-defined computation in parallel. Supersteps are executed synchronously, so that messages sent during one superstep are guaranteed to be delivered at the beginning of the next superstep.
+
+<p class="text-center">
+    <img alt="Vertex-Centric Computational Model" width="70%" src="{{ site.baseurl }}/fig/vertex-centric supersteps.png"/>
+</p>
+
+To use vertex-centric iterations in Gelly, the user only needs to define the vertex compute function, `ComputeFunction`.
+This function and the maximum number of iterations to run are given as parameters to Gelly's `runVertexCentricIteration`. This method will execute the vertex-centric iteration on the input Graph and return a new Graph, with updated vertex values. An optional message combiner, `MessageCombiner`, can be defined to reduce communication costs.
+
+Let us consider computing Single-Source-Shortest-Paths with vertex-centric iterations. Initially, each vertex has a value of infinite distance, except for the source vertex, which has a value of zero. During the first superstep, the source propagates distances to its neighbors. During the following supersteps, each vertex checks its received messages and chooses the minimum distance among them. If this distance is smaller than its current value, it updates its state and produces messages for its neighbors. If a vertex does not change its value during a superstep, then it does not produce any messages for its neighbors for the next superstep. The algorithm converges when there are no value updates or the maximum number of supersteps has been reached. In this algorithm, a message combiner can be used to reduce the number of messages sent to a target vertex.
+
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// read the input graph
+Graph<Long, Double, Double> graph = ...
+
+// define the maximum number of iterations
+int maxIterations = 10;
+
+// Execute the vertex-centric iteration
+Graph<Long, Double, Double> result = graph.runVertexCentricIteration(
+            new SSSPComputeFunction(), new SSSPCombiner(), maxIterations);
+
+// Extract the vertices as the result
+DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();
+
+
+// - - -  UDFs - - - //
+
+public static final class SSSPComputeFunction extends ComputeFunction<Long, Double, Double, Double> {
+
+public void compute(Vertex<Long, Double> vertex, MessageIterator<Double> messages) {
+
+    double minDistance = (vertex.getId().equals(srcId)) ? 0d : Double.POSITIVE_INFINITY;
+
+    for (Double msg : messages) {
+        minDistance = Math.min(minDistance, msg);
+    }
+
+    if (minDistance < vertex.getValue()) {
+        setNewVertexValue(minDistance);
+        for (Edge<Long, Double> e: getEdges()) {
+            sendMessageTo(e.getTarget(), minDistance + e.getValue());
+        }
+    }
+}
+}
+
+// message combiner
+public static final class SSSPCombiner extends MessageCombiner<Long, Double> {
+
+    public void combineMessages(MessageIterator<Double> messages) {
+
+        double minMessage = Double.POSITIVE_INFINITY;
+        for (Double msg: messages) {
+           minMessage = Math.min(minMessage, msg);
+        }
+        sendCombinedMessage(minMessage);
+    }
+}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// read the input graph
+val graph: Graph[Long, Double, Double] = ...
+
+// define the maximum number of iterations
+val maxIterations = 10
+
+// Execute the vertex-centric iteration
+val result = graph.runVertexCentricIteration(new SSSPComputeFunction, new SSSPCombiner, maxIterations)
+
+// Extract the vertices as the result
+val singleSourceShortestPaths = result.getVertices
+
+
+// - - -  UDFs - - - //
+
+final class SSSPComputeFunction extends ComputeFunction[Long, Double, Double, Double] {
+
+    override def compute(vertex: Vertex[Long, Double], messages: MessageIterator[Double]) = {
+
+    var minDistance = if (vertex.getId.equals(srcId)) 0 else Double.MaxValue
+
+    while (messages.hasNext) {
+        val msg = messages.next
+        if (msg < minDistance) {
+            minDistance = msg
+        }
+    }
+
+    if (vertex.getValue > minDistance) {
+        setNewVertexValue(minDistance)
+        for (edge: Edge[Long, Double] <- getEdges) {
+            sendMessageTo(edge.getTarget, minDistance + edge.getValue)
+        }
+    }
+    }
+}
+
+// message combiner
+final class SSSPCombiner extends MessageCombiner[Long, Double] {
+
+    override def combineMessages(messages: MessageIterator[Double]) {
+
+        var minDistance = Double.MaxValue
+
+        while (messages.hasNext) {
+          val msg = messages.next
+          if (msg < minDistance) {
+            minDistance = msg
+          }
+        }
+        sendCombinedMessage(minDistance)
+    }
+}
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+## Configuring a Vertex-Centric Iteration
+A vertex-centric iteration can be configured using a `VertexCentricConfiguration` object.
+Currently, the following parameters can be specified:
+
+* <strong>Name</strong>: The name for the vertex-centric iteration. The name is displayed in logs and messages
+and can be specified using the `setName()` method.
+
+* <strong>Parallelism</strong>: The parallelism for the iteration. It can be set using the `setParallelism()` method.
+
+* <strong>Solution set in unmanaged memory</strong>: Defines whether the solution set is kept in managed memory (Flink's internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the `setSolutionSetUnmanagedMemory()` method.
+
+* <strong>Aggregators</strong>: Iteration aggregators can be registered using the `registerAggregator()` method. An iteration aggregator combines
+all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined `ComputeFunction`.
+
+* <strong>Broadcast Variables</strong>: DataSets can be added as [Broadcast Variables]({{site.baseurl}}/dev/batch/index.html#broadcast-variables) to the `ComputeFunction`, using the `addBroadcastSet()` method.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+Graph<Long, Double, Double> graph = ...
+
+// configure the iteration
+VertexCentricConfiguration parameters = new VertexCentricConfiguration();
+
+// set the iteration name
+parameters.setName("Gelly Iteration");
+
+// set the parallelism
+parameters.setParallelism(16);
+
+// register an aggregator
+parameters.registerAggregator("sumAggregator", new LongSumAggregator());
+
+// run the vertex-centric iteration, also passing the configuration parameters
+Graph<Long, Long, Double> result =
+            graph.runVertexCentricIteration(
+            new Compute(), null, maxIterations, parameters);
+
+// user-defined function
+public static final class Compute extends ComputeFunction {
+
+    LongSumAggregator aggregator = new LongSumAggregator();
+
+    public void preSuperstep() {
+
+        // retrieve the Aggregator
+        aggregator = getIterationAggregator("sumAggregator");
+    }
+
+
+    public void compute(Vertex<Long, Long> vertex, MessageIterator inMessages) {
+
+        //do some computation
+        Long partialValue = ...
+
+        // aggregate the partial value
+        aggregator.aggregate(partialValue);
+
+        // update the vertex value
+        setNewVertexValue(...);
+    }
+}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+
+val graph: Graph[Long, Long, Double] = ...
+
+val parameters = new VertexCentricConfiguration
+
+// set the iteration name
+parameters.setName("Gelly Iteration")
+
+// set the parallelism
+parameters.setParallelism(16)
+
+// register an aggregator
+parameters.registerAggregator("sumAggregator", new LongSumAggregator)
+
+// run the vertex-centric iteration, also passing the configuration parameters
+val result = graph.runVertexCentricIteration(new Compute, new Combiner, maxIterations, parameters)
+
+// user-defined function
+final class Compute extends ComputeFunction {
+
+    var aggregator = new LongSumAggregator
+
+    override def preSuperstep {
+
+        // retrieve the Aggregator
+        aggregator = getIterationAggregator("sumAggregator")
+    }
+
+
+    override def compute(vertex: Vertex[Long, Long], inMessages: MessageIterator[Long]) {
+
+        //do some computation
+        val partialValue = ...
+
+        // aggregate the partial value
+        aggregator.aggregate(partialValue)
+
+        // update the vertex value
+        setNewVertexValue(...)
+    }
+}
+
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+## Scatter-Gather Iterations
+The scatter-gather model, also known as "signal/collect" model, expresses computation from the perspective of a vertex in the graph. The computation proceeds in synchronized iteration steps, called supersteps. In each superstep, a vertex produces messages for other vertices and updates its value based on the messages it receives. To use scatter-gather iterations in Gelly, the user only needs to define how a vertex behaves in each superstep:
+
+* <strong>Scatter</strong>:  produces the messages that a vertex will send to other vertices.
+* <strong>Gather</strong>: updates the vertex value using received messages.
+
+Gelly provides methods for scatter-gather iterations. The user only needs to implement two functions, corresponding to the scatter and gather phases. The first function is a `ScatterFunction`, which allows a vertex to send out messages to other vertices. Messages are received during the same superstep as they are sent. The second function is `GatherFunction`, which defines how a vertex will update its value based on the received messages.
+These functions and the maximum number of iterations to run are given as parameters to Gelly's `runScatterGatherIteration`. This method will execute the scatter-gather iteration on the input Graph and return a new Graph, with updated vertex values.
+
+A scatter-gather iteration can be extended with information such as the total number of vertices, the in-degree and the out-degree.
+Additionally, the neighborhood type (in/out/all) over which to run the scatter-gather iteration can be specified. By default, the updates from the in-neighbors are used to modify the current vertex's state and messages are sent to out-neighbors.
+
+Let us consider computing Single-Source-Shortest-Paths with scatter-gather iterations on the following graph and let vertex 1 be the source. In each superstep, each vertex sends a candidate distance message to all its neighbors. The message value is the sum of the current value of the vertex and the edge weight connecting this vertex with its neighbor. Upon receiving candidate distance messages, each vertex calculates the minimum distance and, if a shorter path has been discovered, it updates its value. If a vertex does not change its value during a superstep, then it does not produce messages for its neighbors for the next superstep. The algorithm converges when there are no value updates.
+
+<p class="text-center">
+    <img alt="Scatter-gather SSSP superstep 1" width="70%" src="{{ site.baseurl }}/fig/gelly-vc-sssp1.png"/>
+</p>
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// read the input graph
+Graph<Long, Double, Double> graph = ...
+
+// define the maximum number of iterations
+int maxIterations = 10;
+
+// Execute the scatter-gather iteration
+Graph<Long, Double, Double> result = graph.runScatterGatherIteration(
+			new MinDistanceMessenger(), new VertexDistanceUpdater(), maxIterations);
+
+// Extract the vertices as the result
+DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();
+
+
+// - - -  UDFs - - - //
+
+// scatter: messaging
+public static final class MinDistanceMessenger extends ScatterFunction<Long, Double, Double, Double> {
+
+	public void sendMessages(Vertex<Long, Double> vertex) {
+		for (Edge<Long, Double> edge : getEdges()) {
+			sendMessageTo(edge.getTarget(), vertex.getValue() + edge.getValue());
+		}
+	}
+}
+
+// gather: vertex update
+public static final class VertexDistanceUpdater extends GatherFunction<Long, Double, Double> {
+
+	public void updateVertex(Vertex<Long, Double> vertex, MessageIterator<Double> inMessages) {
+		Double minDistance = Double.MAX_VALUE;
+
+		for (double msg : inMessages) {
+			if (msg < minDistance) {
+				minDistance = msg;
+			}
+		}
+
+		if (vertex.getValue() > minDistance) {
+			setNewVertexValue(minDistance);
+		}
+	}
+}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// read the input graph
+val graph: Graph[Long, Double, Double] = ...
+
+// define the maximum number of iterations
+val maxIterations = 10
+
+// Execute the scatter-gather iteration
+val result = graph.runScatterGatherIteration(new MinDistanceMessenger, new VertexDistanceUpdater, maxIterations)
+
+// Extract the vertices as the result
+val singleSourceShortestPaths = result.getVertices
+
+
+// - - -  UDFs - - - //
+
+// messaging
+final class MinDistanceMessenger extends ScatterFunction[Long, Double, Double, Double] {
+
+	override def sendMessages(vertex: Vertex[Long, Double]) = {
+		for (edge: Edge[Long, Double] <- getEdges) {
+			sendMessageTo(edge.getTarget, vertex.getValue + edge.getValue)
+		}
+	}
+}
+
+// vertex update
+final class VertexDistanceUpdater extends GatherFunction[Long, Double, Double] {
+
+	override def updateVertex(vertex: Vertex[Long, Double], inMessages: MessageIterator[Double]) = {
+		var minDistance = Double.MaxValue
+
+		while (inMessages.hasNext) {
+		  val msg = inMessages.next
+		  if (msg < minDistance) {
+			minDistance = msg
+		  }
+		}
+
+		if (vertex.getValue > minDistance) {
+		  setNewVertexValue(minDistance)
+		}
+	}
+}
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+## Configuring a Scatter-Gather Iteration
+A scatter-gather iteration can be configured using a `ScatterGatherConfiguration` object.
+Currently, the following parameters can be specified:
+
+* <strong>Name</strong>: The name for the scatter-gather iteration. The name is displayed in logs and messages
+and can be specified using the `setName()` method.
+
+* <strong>Parallelism</strong>: The parallelism for the iteration. It can be set using the `setParallelism()` method.
+
+* <strong>Solution set in unmanaged memory</strong>: Defines whether the solution set is kept in managed memory (Flink's internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the `setSolutionSetUnmanagedMemory()` method.
+
+* <strong>Aggregators</strong>: Iteration aggregators can be registered using the `registerAggregator()` method. An iteration aggregator combines
+all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined `ScatterFunction` and `GatherFunction`.
+
+* <strong>Broadcast Variables</strong>: DataSets can be added as [Broadcast Variables]({{site.baseurl}}/dev/batch/index.html#broadcast-variables) to the `ScatterFunction` and `GatherFunction`, using the `addBroadcastSetForUpdateFunction()` and `addBroadcastSetForMessagingFunction()` methods, respectively.
+
+* <strong>Number of Vertices</strong>: Accessing the total number of vertices within the iteration. This property can be set using the `setOptNumVertices()` method.
+The number of vertices can then be accessed in the vertex update function and in the messaging function using the `getNumberOfVertices()` method. If the option is not set in the configuration, this method will return -1.
+
+* <strong>Degrees</strong>: Accessing the in/out degree for a vertex within an iteration. This property can be set using the `setOptDegrees()` method.
+The in/out degrees can then be accessed in the vertex update function and in the messaging function, per vertex using the `getInDegree()` and `getOutDegree()` methods.
+If the degrees option is not set in the configuration, these methods will return -1.
+
+* <strong>Messaging Direction</strong>: By default, a vertex sends messages to its out-neighbors and updates its value based on messages received from its in-neighbors. This configuration option allows users to change the messaging direction to either `EdgeDirection.IN`, `EdgeDirection.OUT`, or `EdgeDirection.ALL`. The messaging direction also dictates the update direction, which will be `EdgeDirection.OUT`, `EdgeDirection.IN` and `EdgeDirection.ALL`, respectively. This property can be set using the `setDirection()` method.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+Graph<Long, Double, Double> graph = ...
+
+// configure the iteration
+ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
+
+// set the iteration name
+parameters.setName("Gelly Iteration");
+
+// set the parallelism
+parameters.setParallelism(16);
+
+// register an aggregator
+parameters.registerAggregator("sumAggregator", new LongSumAggregator());
+
+// run the scatter-gather iteration, also passing the configuration parameters
+Graph<Long, Double, Double> result =
+			graph.runScatterGatherIteration(
+			new Messenger(), new VertexUpdater(), maxIterations, parameters);
+
+// user-defined functions
+public static final class Messenger extends ScatterFunction {...}
+
+public static final class VertexUpdater extends GatherFunction {
+
+	LongSumAggregator aggregator = new LongSumAggregator();
+
+	public void preSuperstep() {
+
+		// retrieve the Aggregator
+		aggregator = getIterationAggregator("sumAggregator");
+	}
+
+
+	public void updateVertex(Vertex<Long, Long> vertex, MessageIterator inMessages) {
+
+		//do some computation
+		Long partialValue = ...
+
+		// aggregate the partial value
+		aggregator.aggregate(partialValue);
+
+		// update the vertex value
+		setNewVertexValue(...);
+	}
+}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+
+val graph: Graph[Long, Double, Double] = ...
+
+val parameters = new ScatterGatherConfiguration
+
+// set the iteration name
+parameters.setName("Gelly Iteration")
+
+// set the parallelism
+parameters.setParallelism(16)
+
+// register an aggregator
+parameters.registerAggregator("sumAggregator", new LongSumAggregator)
+
+// run the scatter-gather iteration, also passing the configuration parameters
+val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters)
+
+// user-defined functions
+final class Messenger extends ScatterFunction {...}
+
+final class VertexUpdater extends GatherFunction {
+
+	var aggregator = new LongSumAggregator
+
+	override def preSuperstep {
+
+		// retrieve the Aggregator
+		aggregator = getIterationAggregator("sumAggregator")
+	}
+
+
+	override def updateVertex(vertex: Vertex[Long, Long], inMessages: MessageIterator[Long]) {
+
+		//do some computation
+		val partialValue = ...
+
+		// aggregate the partial value
+		aggregator.aggregate(partialValue)
+
+		// update the vertex value
+		setNewVertexValue(...)
+	}
+}
+
+{% endhighlight %}
+</div>
+</div>
+
+The following example illustrates the usage of the degree as well as the number of vertices options.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+Graph<Long, Double, Double> graph = ...
+
+// configure the iteration
+ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
+
+// set the number of vertices option to true
+parameters.setOptNumVertices(true);
+
+// set the degree option to true
+parameters.setOptDegrees(true);
+
+// run the scatter-gather iteration, also passing the configuration parameters
+Graph<Long, Double, Double> result =
+			graph.runScatterGatherIteration(
+			new Messenger(), new VertexUpdater(), maxIterations, parameters);
+
+// user-defined functions
+public static final class Messenger extends ScatterFunction {
+	...
+	// retrieve the vertex out-degree
+	outDegree = getOutDegree();
+	...
+}
+
+public static final class VertexUpdater extends GatherFunction {
+	...
+	// get the number of vertices
+	long numVertices = getNumberOfVertices();
+	...
+}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+
+val graph: Graph[Long, Double, Double] = ...
+
+// configure the iteration
+val parameters = new ScatterGatherConfiguration
+
+// set the number of vertices option to true
+parameters.setOptNumVertices(true)
+
+// set the degree option to true
+parameters.setOptDegrees(true)
+
+// run the scatter-gather iteration, also passing the configuration parameters
+val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters)
+
+// user-defined functions
+final class Messenger extends ScatterFunction {
+	...
+	// retrieve the vertex out-degree
+	val outDegree = getOutDegree
+	...
+}
+
+final class VertexUpdater extends GatherFunction {
+	...
+	// get the number of vertices
+	val numVertices = getNumberOfVertices
+	...
+}
+
+{% endhighlight %}
+</div>
+</div>
+
+The following example illustrates the usage of the edge direction option. Vertices update their values to contain a list of all their in-neighbors.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Graph<Long, HashSet<Long>, Double> graph = ...
+
+// configure the iteration
+ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
+
+// set the messaging direction
+parameters.setDirection(EdgeDirection.IN);
+
+// run the scatter-gather iteration, also passing the configuration parameters
+DataSet<Vertex<Long, HashSet<Long>>> result =
+			graph.runScatterGatherIteration(
+			new Messenger(), new VertexUpdater(), maxIterations, parameters)
+			.getVertices();
+
+// user-defined functions
+public static final class Messenger extends ScatterFunction {...}
+
+public static final class VertexUpdater extends GatherFunction {...}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val graph: Graph[Long, HashSet[Long], Double] = ...
+
+// configure the iteration
+val parameters = new ScatterGatherConfiguration
+
+// set the messaging direction
+parameters.setDirection(EdgeDirection.IN)
+
+// run the scatter-gather iteration, also passing the configuration parameters
+val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters)
+			.getVertices
+
+// user-defined functions
+final class Messenger extends ScatterFunction {...}
+
+final class VertexUpdater extends GatherFunction {...}
+
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+## Gather-Sum-Apply Iterations
+Like in the scatter-gather model, Gather-Sum-Apply also proceeds in synchronized iterative steps, called supersteps. Each superstep consists of the following three phases:
+
+* <strong>Gather</strong>: a user-defined function is invoked in parallel on the edges and neighbors of each vertex, producing a partial value.
+* <strong>Sum</strong>: the partial values produced in the Gather phase are aggregated to a single value, using a user-defined reducer.
+* <strong>Apply</strong>:  each vertex value is updated by applying a function on the current value and the aggregated value produced by the Sum phase.
+
+Let us consider computing Single-Source-Shortest-Paths with GSA on the following graph and let vertex 1 be the source. During the `Gather` phase, we calculate the new candidate distances, by adding each vertex value with the edge weight. In `Sum`, the candidate distances are grouped by vertex ID and the minimum distance is chosen. In `Apply`, the newly calculated distance is compared to the current vertex value and the minimum of the two is assigned as the new value of the vertex.
+
+<p class="text-center">
+    <img alt="GSA SSSP superstep 1" width="70%" src="{{ site.baseurl }}/fig/gelly-gsa-sssp1.png"/>
+</p>
+
+Notice that, if a vertex does not change its value during a superstep, it will not calculate candidate distance during the next superstep. The algorithm converges when no vertex changes value.
+
+To implement this example in Gelly GSA, the user only needs to call the `runGatherSumApplyIteration` method on the input graph and provide the `GatherFunction`, `SumFunction` and `ApplyFunction` UDFs. Iteration synchronization, grouping, value updates and convergence are handled by the system:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// read the input graph
+Graph<Long, Double, Double> graph = ...
+
+// define the maximum number of iterations
+int maxIterations = 10;
+
+// Execute the GSA iteration
+Graph<Long, Double, Double> result = graph.runGatherSumApplyIteration(
+				new CalculateDistances(), new ChooseMinDistance(), new UpdateDistance(), maxIterations);
+
+// Extract the vertices as the result
+DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();
+
+
+// - - -  UDFs - - - //
+
+// Gather
+private static final class CalculateDistances extends GatherFunction<Double, Double, Double> {
+
+	public Double gather(Neighbor<Double, Double> neighbor) {
+		return neighbor.getNeighborValue() + neighbor.getEdgeValue();
+	}
+}
+
+// Sum
+private static final class ChooseMinDistance extends SumFunction<Double, Double, Double> {
+
+	public Double sum(Double newValue, Double currentValue) {
+		return Math.min(newValue, currentValue);
+	}
+}
+
+// Apply
+private static final class UpdateDistance extends ApplyFunction<Long, Double, Double> {
+
+	public void apply(Double newDistance, Double oldDistance) {
+		if (newDistance < oldDistance) {
+			setResult(newDistance);
+		}
+	}
+}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// read the input graph
+val graph: Graph[Long, Double, Double] = ...
+
+// define the maximum number of iterations
+val maxIterations = 10
+
+// Execute the GSA iteration
+val result = graph.runGatherSumApplyIteration(new CalculateDistances, new ChooseMinDistance, new UpdateDistance, maxIterations)
+
+// Extract the vertices as the result
+val singleSourceShortestPaths = result.getVertices
+
+
+// - - -  UDFs - - - //
+
+// Gather
+final class CalculateDistances extends GatherFunction[Double, Double, Double] {
+
+	override def gather(neighbor: Neighbor[Double, Double]): Double = {
+		neighbor.getNeighborValue + neighbor.getEdgeValue
+	}
+}
+
+// Sum
+final class ChooseMinDistance extends SumFunction[Double, Double, Double] {
+
+	override def sum(newValue: Double, currentValue: Double): Double = {
+		Math.min(newValue, currentValue)
+	}
+}
+
+// Apply
+final class UpdateDistance extends ApplyFunction[Long, Double, Double] {
+
+	override def apply(newDistance: Double, oldDistance: Double) = {
+		if (newDistance < oldDistance) {
+			setResult(newDistance)
+		}
+	}
+}
+
+{% endhighlight %}
+</div>
+</div>
+
+Note that `gather` takes a `Neighbor` type as an argument. This is a convenience type which simply wraps a vertex with its neighboring edge.
+
+For more examples of how to implement algorithms with the Gather-Sum-Apply model, check the {% gh_link /flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/GSAPageRank.java "GSAPageRank" %} and {% gh_link /flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/GSAConnectedComponents.java "GSAConnectedComponents" %} library methods of Gelly.
+
+{% top %}
+
+## Configuring a Gather-Sum-Apply Iteration
+A GSA iteration can be configured using a `GSAConfiguration` object.
+Currently, the following parameters can be specified:
+
+* <strong>Name</strong>: The name for the GSA iteration. The name is displayed in logs and messages and can be specified using the `setName()` method.
+
+* <strong>Parallelism</strong>: The parallelism for the iteration. It can be set using the `setParallelism()` method.
+
+* <strong>Solution set in unmanaged memory</strong>: Defines whether the solution set is kept in managed memory (Flink's internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the `setSolutionSetUnmanagedMemory()` method.
+
+* <strong>Aggregators</strong>: Iteration aggregators can be registered using the `registerAggregator()` method. An iteration aggregator combines all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined `GatherFunction`, `SumFunction` and `ApplyFunction`.
+
+* <strong>Broadcast Variables</strong>: DataSets can be added as [Broadcast Variables]({{site.baseurl}}/dev/batch/index.html#broadcast-variables) to the `GatherFunction`, `SumFunction` and `ApplyFunction`, using the `addBroadcastSetForGatherFunction()`, `addBroadcastSetForSumFunction()` and `addBroadcastSetForApplyFunction()` methods, respectively.
+
+* <strong>Number of Vertices</strong>: Accessing the total number of vertices within the iteration. This property can be set using the `setOptNumVertices()` method.
+The number of vertices can then be accessed in the gather, sum and/or apply functions by using the `getNumberOfVertices()` method. If the option is not set in the configuration, this method will return -1.
+
+* <strong>Neighbor Direction</strong>: By default, values are gathered from the out-neighbors of the vertex. This can be modified using the `setDirection()` method.
+
+The following example illustrates the usage of the number of vertices option.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+Graph<Long, Double, Double> graph = ...
+
+// configure the iteration
+GSAConfiguration parameters = new GSAConfiguration();
+
+// set the number of vertices option to true
+parameters.setOptNumVertices(true);
+
+// run the gather-sum-apply iteration, also passing the configuration parameters
+Graph<Long, Long, Long> result = graph.runGatherSumApplyIteration(
+				new Gather(), new Sum(), new Apply(),
+			    maxIterations, parameters);
+
+// user-defined functions
+public static final class Gather extends GatherFunction {
+	...
+	// get the number of vertices
+	long numVertices = getNumberOfVertices();
+	...
+}
+
+public static final class Sum extends SumFunction {
+	...
+    // get the number of vertices
+    long numVertices = getNumberOfVertices();
+    ...
+}
+
+public static final class Apply extends ApplyFunction {
+	...
+    // get the number of vertices
+    long numVertices = getNumberOfVertices();
+    ...
+}
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+
+val graph: Graph[Long, Double, Double] = ...
+
+// configure the iteration
+val parameters = new GSAConfiguration
+
+// set the number of vertices option to true
+parameters.setOptNumVertices(true)
+
+// run the gather-sum-apply iteration, also passing the configuration parameters
+val result = graph.runGatherSumApplyIteration(new Gather, new Sum, new Apply, maxIterations, parameters)
+
+// user-defined functions
+final class Gather extends GatherFunction {
+	...
+	// get the number of vertices
+	val numVertices = getNumberOfVertices
+	...
+}
+
+final class Sum extends SumFunction {
+	...
+    // get the number of vertices
+    val numVertices = getNumberOfVertices
+    ...
+}
+
+final class Apply extends ApplyFunction {
+	...
+    // get the number of vertices
+    val numVertices = getNumberOfVertices
+    ...
+}
+
+{% endhighlight %}
+</div>
+</div>
+
+The following example illustrates the usage of the edge direction option.
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+Graph<Long, HashSet<Long>, Double> graph = ...
+
+// configure the iteration
+GSAConfiguration parameters = new GSAConfiguration();
+
+// set the messaging direction
+parameters.setDirection(EdgeDirection.IN);
+
+// run the gather-sum-apply iteration, also passing the configuration parameters
+DataSet<Vertex<Long, HashSet<Long>>> result =
+			graph.runGatherSumApplyIteration(
+			new Gather(), new Sum(), new Apply(), maxIterations, parameters)
+			.getVertices();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+
+val graph: Graph[Long, HashSet[Long], Double] = ...
+
+// configure the iteration
+val parameters = new GSAConfiguration
+
+// set the messaging direction
+parameters.setDirection(EdgeDirection.IN)
+
+// run the gather-sum-apply iteration, also passing the configuration parameters
+val result = graph.runGatherSumApplyIteration(new Gather, new Sum, new Apply, maxIterations, parameters)
+			.getVertices()
+{% endhighlight %}
+</div>
+</div>
+{% top %}
+
+## Iteration Abstractions Comparison
+Although the three iteration abstractions in Gelly seem quite similar, understanding their differences can lead to more performant and maintainable programs.
+Among the three, the vertex-centric model is the most general model and supports arbitrary computation and messaging for each vertex. In the scatter-gather model, the logic of producing messages is decoupled from the logic of updating vertex values. Thus, programs written using scatter-gather are sometimes easier to follow and maintain.
+Separating the messaging phase from the vertex value update logic not only makes some programs easier to follow but might also have a positive impact on performance. Scatter-gather implementations typically have lower memory requirements, because concurrent access to the inbox (messages received) and outbox (messages to send) data structures is not required. However, this characteristic also limits expressiveness and makes some computation patterns non-intuitive. Naturally, if an algorithm requires a vertex to concurrently access its inbox and outbox, then the expression of this algorithm in scatter-gather might be problematic. Strongly Connected Components and Approximate Maximum
+Weight Matching are examples of such graph algorithms. A direct consequence of this restriction is that vertices cannot generate messages and update their states in the same phase. Thus, deciding whether to propagate a message based on its content would require storing it in the vertex value, so that the gather phase has access to it in the following iteration step. Similarly, if the vertex update logic includes computation over the values of the neighboring edges, these have to be included inside a special message passed from the scatter to the gather phase. Such workarounds often lead to higher memory requirements and inelegant, hard-to-understand algorithm implementations.
+
+Gather-sum-apply iterations are also quite similar to scatter-gather iterations. In fact, any algorithm which can be expressed as a GSA iteration can also be written in the scatter-gather model. The messaging phase of the scatter-gather model is equivalent to the Gather and Sum steps of GSA: Gather can be seen as the phase where the messages are produced and Sum as the phase where they are routed to the target vertex. Similarly, the value update phase corresponds to the Apply step.
+
+The main difference between the two implementations is that the Gather phase of GSA parallelizes the computation over the edges, while the messaging phase distributes the computation over the vertices. Using the SSSP examples above, we see that in the first superstep of the scatter-gather case, vertices 1, 2 and 3 produce messages in parallel. Vertex 1 produces 3 messages, while vertices 2 and 3 produce one message each. In the GSA case on the other hand, the computation is parallelized over the edges: the three candidate distance values of vertex 1 are produced in parallel. Thus, if the Gather step contains "heavy" computation, it might be a better idea to use GSA and spread out the computation, instead of burdening a single vertex. Another case when parallelizing over the edges might prove to be more efficient is when the input graph is skewed (some vertices have a lot more neighbors than others).
+
+Another difference between the two implementations is that the scatter-gather implementation uses a `coGroup` operator internally, while GSA uses a `reduce`. Therefore, if the function that combines neighbor values (messages) requires the whole group of values for the computation, scatter-gather should be used. If the update function is associative and commutative, then the GSA's reducer is expected to give a more efficient implementation, as it can make use of a combiner.
+
+Another thing to note is that GSA works strictly on neighborhoods, while in the vertex-centric and scatter-gather models, a vertex can send a message to any vertex, given that it knows its vertex ID, regardless of whether it is a neighbor. Finally, in Gelly's scatter-gather implementation, one can choose the messaging direction, i.e. the direction in which updates propagate. GSA does not support this yet, so each vertex will be updated based on the values of its in-neighbors only.
+
+The main differences among the Gelly iteration models are shown in the table below.
+
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 25%">Iteration Model</th>
+      <th class="text-center">Update Function</th>
+      <th class="text-center">Update Logic</th>
+      <th class="text-center">Communication Scope</th>
+      <th class="text-center">Communication Logic</th>
+    </tr>
+  </thead>
+  <tbody>
+ <tr>
+  <td>Vertex-Centric</td>
+  <td>arbitrary</td>
+  <td>arbitrary</td>
+  <td>any vertex</td>
+  <td>arbitrary</td>
+</tr>
+<tr>
+  <td>Scatter-Gather</td>
+  <td>arbitrary</td>
+  <td>based on received messages</td>
+  <td>any vertex</td>
+  <td>based on vertex state</td>
+</tr>
+<tr>
+  <td>Gather-Sum-Apply</td>
+  <td>associative and commutative</td>
+  <td>based on neighbors' values</td>
+  <td>neighborhood</td>
+  <td>based on vertex state</td>
+</tr>
+</tbody>
+</table>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/gelly/library_methods.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/gelly/library_methods.md b/docs/dev/libs/gelly/library_methods.md
new file mode 100644
index 0000000..49270a2
--- /dev/null
+++ b/docs/dev/libs/gelly/library_methods.md
@@ -0,0 +1,347 @@
+---
+title: Library Methods
+nav-parent_id: graphs
+nav-pos: 3
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Gelly has a growing collection of graph algorithms for easily analyzing large-scale Graphs.
+
+* This will be replaced by the TOC
+{:toc}
+
+Gelly's library methods can be used by simply calling the `run()` method on the input graph:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+Graph<Long, Long, NullValue> graph = ...
+
+// run Label Propagation for 30 iterations to detect communities on the input graph
+DataSet<Vertex<Long, Long>> verticesWithCommunity = graph.run(new LabelPropagation<Long>(30));
+
+// print the result
+verticesWithCommunity.print();
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val graph: Graph[java.lang.Long, java.lang.Long, NullValue] = ...
+
+// run Label Propagation for 30 iterations to detect communities on the input graph
+val verticesWithCommunity = graph.run(new LabelPropagation[java.lang.Long, java.lang.Long, NullValue](30))
+
+// print the result
+verticesWithCommunity.print
+
+{% endhighlight %}
+</div>
+</div>
+
+## Community Detection
+
+#### Overview
+In graph theory, communities refer to groups of nodes that are well connected internally, but sparsely connected to other groups.
+This library method is an implementation of the community detection algorithm described in the paper [Towards real-time community detection in large networks](http://arxiv.org/pdf/0808.2633.pdf).
+
+#### Details
+The algorithm is implemented using [scatter-gather iterations](#scatter-gather-iterations).
+Initially, each vertex is assigned a `Tuple2` containing its initial value along with a score equal to 1.0.
+In each iteration, vertices send their labels and scores to their neighbors. Upon receiving messages from its neighbors,
+a vertex chooses the label with the highest score and subsequently re-scores it using the edge values,
+a user-defined hop attenuation parameter, `delta`, and the superstep number.
+The algorithm converges when vertices no longer update their value or when the maximum number of iterations
+is reached.
+
+#### Usage
+The algorithm takes as input a `Graph` with any vertex type, `Long` vertex values, and `Double` edge values. It returns a `Graph` of the same type as the input,
+where the vertex values correspond to the community labels, i.e. two vertices belong to the same community if they have the same vertex value.
+The constructor takes two parameters:
+
+* `maxIterations`: the maximum number of iterations to run.
+* `delta`: the hop attenuation parameter, with default value 0.5.
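+
+A minimal usage sketch; the `CommunityDetection` class name and its single vertex-ID type parameter are assumptions based on Gelly's library naming, while the constructor arguments are the parameters listed above:
+
+{% highlight java %}
+// input: any vertex ID type, Long vertex values, Double edge values
+Graph<Long, Long, Double> graph = ...
+
+// detect communities with at most 30 iterations and hop attenuation delta = 0.5
+Graph<Long, Long, Double> communities =
+        graph.run(new CommunityDetection<Long>(30, 0.5));
+{% endhighlight %}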
+
+## Label Propagation
+
+#### Overview
+This is an implementation of the well-known Label Propagation algorithm described in [this paper](http://journals.aps.org/pre/abstract/10.1103/PhysRevE.76.036106). The algorithm discovers communities in a graph, by iteratively propagating labels between neighbors. Unlike the [Community Detection library method](#community-detection), this implementation does not use scores associated with the labels.
+
+#### Details
+The algorithm is implemented using [scatter-gather iterations](#scatter-gather-iterations).
+Labels are expected to be of type `Comparable` and are initialized using the vertex values of the input `Graph`.
+The algorithm iteratively refines discovered communities by propagating labels. In each iteration, a vertex adopts
+the label that is most frequent among its neighbors' labels. In case of a tie (i.e. two or more labels appear with the
+same frequency), the algorithm picks the greater label. The algorithm converges when no vertex changes its value or
+the maximum number of iterations has been reached. Note that different initializations might lead to different results.
+
+#### Usage
+The algorithm takes as input a `Graph` with a `Comparable` vertex type, a `Comparable` vertex value type and an arbitrary edge value type.
+It returns a `DataSet` of vertices, where the vertex value corresponds to the community in which this vertex belongs after convergence.
+The constructor takes one parameter:
+
+* `maxIterations`: the maximum number of iterations to run.
+
+## Connected Components
+
+#### Overview
+This is an implementation of the Weakly Connected Components algorithm. Upon convergence, two vertices belong to the
+same component, if there is a path from one to the other, without taking edge direction into account.
+
+#### Details
+The algorithm is implemented using [scatter-gather iterations](#scatter-gather-iterations).
+This implementation uses a comparable vertex value as initial component identifier (ID). Vertices propagate their
+current value in each iteration. Upon receiving component IDs from its neighbors, a vertex adopts a new component ID if
+its value is lower than its current component ID. The algorithm converges when vertices no longer update their component
+ID value or when the maximum number of iterations has been reached.
+
+#### Usage
+The result is a `DataSet` of vertices, where the vertex value corresponds to the assigned component.
+The constructor takes one parameter:
+
+* `maxIterations`: the maximum number of iterations to run.
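+
+A minimal usage sketch; the `ConnectedComponents` class name and its generic parameters (vertex ID, comparable vertex value, edge value) are assumptions following the library's conventions:
+
+{% highlight java %}
+// the vertex values serve as the initial component IDs
+Graph<Long, Long, NullValue> graph = ...
+
+DataSet<Vertex<Long, Long>> components =
+        graph.run(new ConnectedComponents<Long, Long, NullValue>(10));
+{% endhighlight %}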
+
+## GSA Connected Components
+
+#### Overview
+This is an implementation of the Weakly Connected Components algorithm. Upon convergence, two vertices belong to the
+same component, if there is a path from one to the other, without taking edge direction into account.
+
+#### Details
+The algorithm is implemented using [gather-sum-apply iterations](#gather-sum-apply-iterations).
+This implementation uses a comparable vertex value as initial component identifier (ID). In the gather phase, each
+vertex collects the vertex values of its adjacent vertices. In the sum phase, the minimum among those values is
+selected. In the apply phase, the algorithm sets the minimum value as the new vertex value if it is smaller than
+the current value. The algorithm converges when vertices no longer update their component ID value or when the
+maximum number of iterations has been reached.
+
+#### Usage
+The result is a `DataSet` of vertices, where the vertex value corresponds to the assigned component.
+The constructor takes one parameter:
+
+* `maxIterations`: the maximum number of iterations to run.
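+
+A usage sketch mirroring the scatter-gather variant; the generic parameters of `GSAConnectedComponents` are assumptions:
+
+{% highlight java %}
+Graph<Long, Long, NullValue> graph = ...
+
+DataSet<Vertex<Long, Long>> components =
+        graph.run(new GSAConnectedComponents<Long, Long, NullValue>(10));
+{% endhighlight %}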
+
+## PageRank
+
+#### Overview
+An implementation of a simple [PageRank algorithm](https://en.wikipedia.org/wiki/PageRank), using [scatter-gather iterations](#scatter-gather-iterations).
+PageRank is an algorithm that was first used to rank web search engine results. Today, the algorithm and many of its variations are used in various graph application domains. The idea of PageRank is that important or relevant pages tend to link to other important pages.
+
+#### Details
+The algorithm operates in iterations, where pages distribute their scores to their neighbors (pages they have links to) and subsequently update their scores based on the partial values they receive. The implementation assumes that each page has at least one incoming and one outgoing link.
+In order to consider the importance of a link from one page to another, scores are divided by the total number of out-links of the source page. Thus, a page with 10 links will distribute 1/10 of its score to each neighbor, while a page with 100 links will distribute 1/100 of its score to each neighboring page. This process computes what is often called the transition probabilities, i.e. the probability that one page will lead to another page while surfing the web. To correctly compute the transition probabilities, this implementation expects the edge values to be initialized to 1.0.
+
+#### Usage
+The algorithm takes as input a `Graph` with any vertex type, `Double` vertex values, and `Double` edge values. Edge values should be initialized to 1.0, in order to correctly compute the transition probabilities. Otherwise, the transition probability for an edge `(u, v)` will be set to the edge value divided by `u`'s out-degree. The algorithm returns a `DataSet` of vertices, where the vertex value corresponds to the assigned rank after convergence (or maximum iterations).
+The constructors take the following parameters:
+
+* `beta`: the damping factor.
+* `maxIterations`: the maximum number of iterations to run.
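+
+A minimal usage sketch; the `PageRank` class name and its single vertex-ID type parameter are assumptions, while the constructor arguments follow the parameters above:
+
+{% highlight java %}
+// vertex values hold the initial ranks; edge values are expected to be 1.0
+Graph<Long, Double, Double> graph = ...
+
+DataSet<Vertex<Long, Double>> ranks =
+        graph.run(new PageRank<Long>(0.85, 30));
+{% endhighlight %}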
+
+## GSA PageRank
+
+The algorithm is implemented using [gather-sum-apply iterations](#gather-sum-apply-iterations).
+
+See the [PageRank](#pagerank) library method for implementation details and usage information.
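+
+A usage sketch, assuming `GSAPageRank` takes the same constructor parameters as `PageRank`:
+
+{% highlight java %}
+Graph<Long, Double, Double> graph = ...
+
+DataSet<Vertex<Long, Double>> ranks =
+        graph.run(new GSAPageRank<Long>(0.85, 30));
+{% endhighlight %}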
+
+## Single Source Shortest Paths
+
+#### Overview
+An implementation of the Single-Source-Shortest-Paths algorithm for weighted graphs. Given a source vertex, the algorithm computes the shortest paths from this source to all other nodes in the graph.
+
+#### Details
+The algorithm is implemented using [scatter-gather iterations](#scatter-gather-iterations).
+In each iteration, a vertex sends to its neighbors a message containing the sum of its current distance and the edge weight connecting this vertex with the neighbor. Upon receiving candidate distance messages, a vertex calculates the minimum distance and, if a shorter path has been discovered, it updates its value. If a vertex does not change its value during a superstep, then it does not produce messages for its neighbors for the next superstep. The computation terminates after the specified maximum number of supersteps or when there are no value updates.
+
+#### Usage
+The algorithm takes as input a `Graph` with any vertex type, `Double` vertex values, and `Double` edge values. The output is a `DataSet` of vertices where the vertex values
+correspond to the minimum distances from the given source vertex.
+The constructor takes two parameters:
+
+* `srcVertexId`: the vertex ID of the source vertex.
+* `maxIterations`: the maximum number of iterations to run.
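+
+A minimal usage sketch; the `SingleSourceShortestPaths` class name and its vertex-ID type parameter are assumptions, while the constructor arguments follow the parameters above:
+
+{% highlight java %}
+Graph<Long, Double, Double> graph = ...
+
+// shortest paths from vertex 1, running at most 30 supersteps
+DataSet<Vertex<Long, Double>> shortestPaths =
+        graph.run(new SingleSourceShortestPaths<Long>(1L, 30));
+{% endhighlight %}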
+
+## GSA Single Source Shortest Paths
+
+The algorithm is implemented using [gather-sum-apply iterations](#gather-sum-apply-iterations).
+
+See the [Single Source Shortest Paths](#single-source-shortest-paths) library method for implementation details and usage information.
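+
+A usage sketch, assuming `GSASingleSourceShortestPaths` takes the same constructor parameters as `SingleSourceShortestPaths`:
+
+{% highlight java %}
+Graph<Long, Double, Double> graph = ...
+
+DataSet<Vertex<Long, Double>> shortestPaths =
+        graph.run(new GSASingleSourceShortestPaths<Long>(1L, 30));
+{% endhighlight %}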
+
+## Triangle Count
+
+#### Overview
+An analytic for counting the number of unique triangles in a graph.
+
+#### Details
+Counts the triangles generated by [Triangle Listing](#triangle-listing).
+
+#### Usage
+The analytic takes an undirected graph as input and returns a `Long` corresponding to the number of triangles
+in the graph. The graph ID type must be `Comparable` and `Copyable`.
+
+## Triangle Listing
+
+This algorithm supports object reuse. The graph ID type must be `Comparable` and `Copyable`.
+
+See the [Triangle Enumerator](#triangle-enumerator) library method for implementation details.
+
+## Triangle Enumerator
+
+#### Overview
+This library method enumerates unique triangles present in the input graph. A triangle consists of three edges that connect three vertices with each other.
+This implementation ignores edge directions.
+
+#### Details
+The basic triangle enumeration algorithm groups all edges that share a common vertex and builds triads, i.e., triples of vertices
+that are connected by two edges. Then, all triads are filtered for which no third edge exists that closes the triangle.
+For a group of <i>n</i> edges that share a common vertex, the number of built triads is quadratic <i>((n*(n-1))/2)</i>.
+Therefore, an optimization of the algorithm is to group edges on the vertex with the smaller output degree to reduce the number of triads.
+This implementation extends the basic algorithm by computing the output degrees of edge vertices and grouping edges on the vertex with the smaller degree.
+
+#### Usage
+The algorithm takes a directed graph as input and outputs a `DataSet` of `Tuple3`. The Vertex ID type has to be `Comparable`.
+Each `Tuple3` corresponds to a triangle, with the fields containing the IDs of the vertices forming the triangle.
+
+## Hyperlink-Induced Topic Search
+
+#### Overview
+[Hyperlink-Induced Topic Search](http://www.cs.cornell.edu/home/kleinber/auth.pdf) (HITS, or "Hubs and Authorities")
+computes two interdependent scores for every vertex in a directed graph. Good hubs are those which point to many
+good authorities and good authorities are those pointed to by many good hubs.
+
+#### Details
+Every vertex is assigned the same initial hub and authority scores. The algorithm then iteratively updates the scores
+until termination. During each iteration new hub scores are computed from the authority scores, then new authority
+scores are computed from the new hub scores. The scores are then normalized and optionally tested for convergence.
+
+#### Usage
+The algorithm takes a directed graph as input and outputs a `DataSet` of `Tuple3` containing the vertex ID, hub score,
+and authority score.
+
+## Summarization
+
+#### Overview
+The summarization algorithm computes a condensed version of the input graph by grouping vertices and edges based on
+their values. In doing so, the algorithm helps to uncover insights about patterns and distributions in the graph.
+One possible use case is the visualization of communities where the whole graph is too large and needs to be summarized
+based on the community identifier stored at a vertex.
+
+#### Details
+In the resulting graph, each vertex represents a group of vertices that share the same value. An edge that connects a
+vertex with itself represents all edges with the same edge value that connect vertices from the same vertex group. An
+edge between different vertices in the output graph represents all edges with the same edge value between members of
+different vertex groups in the input graph.
+
+The algorithm is implemented using Flink data operators. First, vertices are grouped by their value and a representative
+is chosen from each group. For any edge, the source and target vertex identifiers are replaced with the corresponding
+representative and grouped by source, target and edge value. Output vertices and edges are created from their
+corresponding groupings.
+
+#### Usage
+The algorithm takes a directed, vertex (and possibly edge) attributed graph as input and outputs a new graph where each
+vertex represents a group of vertices and each edge represents a group of edges from the input graph. Furthermore, each
+vertex and edge in the output graph stores the common group value and the number of represented elements.
+
+## Adamic-Adar
+
+#### Overview
+Adamic-Adar measures the similarity between pairs of vertices as the sum of the inverse logarithm of degree over shared
+neighbors. Scores are non-negative and unbounded. A vertex with higher degree has greater overall influence but is less
+influential to each pair of neighbors.
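+
+Written out (the standard definition; the notation is ours), for vertices $u$ and $v$ with neighbor
+sets $\Gamma(u)$ and $\Gamma(v)$:
+
+$$AA(u, v) = \sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{1}{\log \vert \Gamma(w) \vert}$$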
+
+#### Details
+The algorithm first annotates each vertex with the inverse of the logarithm of the vertex degree, then joins this score
+onto edges by source vertex. Grouping on the source vertex, each pair of neighbors is emitted with the vertex score.
+Grouping on two-paths, the Adamic-Adar score is summed.
+
+See the [Jaccard Index](#jaccard-index) library method for a similar algorithm.
+
+#### Usage
+The algorithm takes a simple, undirected graph as input and outputs a `DataSet` of tuples containing two vertex IDs and
+the Adamic-Adar similarity score (a usage sketch follows the parameter list below). The graph ID type must be `Comparable` and `Copyable`.
+
+* `setLittleParallelism`: override the parallelism of operators processing small amounts of data
+* `setMinimumRatio`: filter out Adamic-Adar scores less than the given ratio times the average score
+* `setMinimumScore`: filter out Adamic-Adar scores less than the given minimum
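+
+A hypothetical usage sketch (the `AdamicAdar` class name, the generic parameters, and the float argument
+to `setMinimumScore` are assumptions based on the description above):
+
+{% highlight scala %}
+import org.apache.flink.graph.library.similarity.AdamicAdar
+import org.apache.flink.types.{LongValue, NullValue}
+
+// Sketch: score pairs of vertices, dropping scores below 0.5. Assumes a
+// simple, undirected Graph[LongValue, NullValue, NullValue] named `graph`;
+// LongValue satisfies the Comparable and Copyable requirement.
+val scores = graph.run(
+  new AdamicAdar[LongValue, NullValue, NullValue]()
+    .setMinimumScore(0.5f))
+{% endhighlight %}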
+
+## Jaccard Index
+
+#### Overview
+The Jaccard Index measures the similarity between vertex neighborhoods and is computed as the number of shared neighbors
+divided by the number of distinct neighbors. Scores range from 0.0 (no shared neighbors) to 1.0 (all neighbors are
+shared).
+
+#### Details
+Counting shared neighbors for pairs of vertices is equivalent to counting connecting paths of length two. The number of
+distinct neighbors is computed by storing the sum of degrees of the vertex pair and subtracting the count of shared
+neighbors, which are double-counted in the sum of degrees.
+
+The algorithm first annotates each edge with the target vertex's degree. Grouping on the source vertex, each pair of
+neighbors is emitted with the degree sum. Grouping on two-paths, the shared neighbors are counted.
+
+#### Usage
+The algorithm takes a simple, undirected graph as input and outputs a `DataSet` of tuples containing two vertex IDs,
+the number of shared neighbors, and the number of distinct neighbors. The result class provides a method to compute the
+Jaccard Index score. The graph ID type must be `Comparable` and `Copyable`. A usage sketch follows the parameter list below.
+
+* `setLittleParallelism`: override the parallelism of operators processing small amounts of data
+* `setMaximumScore`: filter out Jaccard Index scores greater than or equal to the given maximum fraction
+* `setMinimumScore`: filter out Jaccard Index scores less than the given minimum fraction
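+
+A hypothetical usage sketch (the `JaccardIndex` class name, the generic parameters, and the
+numerator/denominator form of `setMinimumScore` are assumptions based on the description above):
+
+{% highlight scala %}
+import org.apache.flink.graph.library.similarity.JaccardIndex
+import org.apache.flink.types.{LongValue, NullValue}
+
+// Sketch: keep only vertex pairs that share at least half of their
+// distinct neighbors. Assumes a simple, undirected
+// Graph[LongValue, NullValue, NullValue] named `graph`.
+val results = graph.run(
+  new JaccardIndex[LongValue, NullValue, NullValue]()
+    .setMinimumScore(1, 2))
+{% endhighlight %}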
+
+## Local Clustering Coefficient
+
+#### Overview
+The local clustering coefficient measures the connectedness of each vertex's neighborhood. Scores range from 0.0 (no
+edges between neighbors) to 1.0 (neighborhood is a clique).
+
+#### Details
+An edge between two of a vertex's neighbors closes a triangle with that vertex. Counting edges between neighbors is
+therefore equivalent to counting the number of triangles which include the vertex. The clustering coefficient score is
+the number of edges between neighbors divided by the number of potential edges between neighbors.
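+
+For an undirected graph this gives (the standard definition; the notation is ours), for a vertex $v$
+with degree $d(v)$ contained in $t(v)$ triangles:
+
+$$cc(v) = \frac{t(v)}{d(v) \, (d(v) - 1) / 2}$$
+
+For a directed graph the denominator is $d(v) \, (d(v) - 1)$, since each pair of neighbors can be
+connected by two directed edges.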
+
+See the [Triangle Enumeration](#triangle-enumeration) library method for a detailed explanation of triangle enumeration.
+
+#### Usage
+Directed and undirected variants are provided. The algorithms take a simple graph as input and output a `DataSet` of
+tuples containing the vertex ID, vertex degree, and number of triangles containing the vertex. The graph ID type must be
+`Comparable` and `Copyable`.
+
+## Global Clustering Coefficient
+
+#### Overview
+The global clustering coefficient measures the connectedness of a graph. Scores range from 0.0 (no edges between
+neighbors) to 1.0 (complete graph).
+
+#### Details
+See the [Local Clustering Coefficient](#local-clustering-coefficient) library method for a detailed explanation of
+clustering coefficient.
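+
+Concretely, the global score is the ratio of closed triplets to all triplets; since every triangle
+closes three triplets, this equals $3 \cdot \text{triangles} / \text{triplets}$ (the standard
+definition; the notation is ours).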
+
+#### Usage
+Directed and undirected variants are provided. The algorithm takes a simple graph as input and outputs a result
+containing the total number of triplets and triangles in the graph. The graph ID type must be `Comparable` and
+`Copyable`.
+
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/als.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/als.md b/docs/dev/libs/ml/als.md
new file mode 100644
index 0000000..a0ef78a
--- /dev/null
+++ b/docs/dev/libs/ml/als.md
@@ -0,0 +1,175 @@
+---
+mathjax: include
+title: Alternating Least Squares
+nav-title: ALS
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+The alternating least squares (ALS) algorithm factorizes a given matrix $R$ into two factors $U$ and $V$ such that $R \approx U^TV$.
+The unknown row dimension is given as a parameter to the algorithm and is called the number of latent factors.
+Since matrix factorization can be used in the context of recommendation, the matrices $U$ and $V$ can be called user and item matrix, respectively.
+The $i$th column of the user matrix is denoted by $u_i$ and the $i$th column of the item matrix is $v_i$.
+The matrix $R$ can be called the ratings matrix with $$(R)_{i,j} = r_{i,j}$$.
+
+In order to find the user and item matrix, the following problem is solved:
+
+$$\arg\min_{U,V} \sum_{\{i,j\mid r_{i,j} \not= 0\}} \left(r_{i,j} - u_{i}^Tv_{j}\right)^2 +
+\lambda \left(\sum_{i} n_{u_i} \left\lVert u_i \right\rVert^2 + \sum_{j} n_{v_j} \left\lVert v_j \right\rVert^2 \right)$$
+
+with $\lambda$ being the regularization factor, $$n_{u_i}$$ being the number of items the user $i$ has rated and $$n_{v_j}$$ being the number of times the item $j$ has been rated.
+This regularization scheme to avoid overfitting is called weighted-$\lambda$-regularization.
+Details can be found in the work of [Zhou et al.](http://dx.doi.org/10.1007/978-3-540-68880-8_32).
+
+By fixing one of the matrices $U$ or $V$, we obtain a quadratic form which can be solved directly.
+The solution of the modified problem is guaranteed to monotonically decrease the overall cost function.
+By applying this step alternately to the matrices $U$ and $V$, we can iteratively improve the matrix factorization.
+
+The matrix $R$ is given in its sparse representation as a tuple of $(i, j, r)$ where $i$ denotes the row index, $j$ the column index and $r$ is the matrix value at position $(i,j)$.
+
+## Operations
+
+`ALS` is a `Predictor`.
+As such, it supports the `fit` and `predict` operation.
+
+### Fit
+
+ALS is trained on the sparse representation of the rating matrix:
+
+* `fit: DataSet[(Int, Int, Double)] => Unit`
+
+### Predict
+
+ALS predicts for each tuple of row and column index the rating:
+
+* `predict: DataSet[(Int, Int)] => DataSet[(Int, Int, Double)]`
+
+## Parameters
+
+The alternating least squares implementation can be controlled by the following parameters:
+
+   <table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 20%">Parameters</th>
+        <th class="text-center">Description</th>
+      </tr>
+    </thead>
+
+    <tbody>
+      <tr>
+        <td><strong>NumFactors</strong></td>
+        <td>
+          <p>
+            The number of latent factors to use for the underlying model.
+            It is equivalent to the dimension of the calculated user and item vectors.
+            (Default value: <strong>10</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Lambda</strong></td>
+        <td>
+          <p>
+            Regularization factor. Tune this value in order to avoid overfitting or poor performance due to strong generalization.
+            (Default value: <strong>1</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Iterations</strong></td>
+        <td>
+          <p>
+            The maximum number of iterations.
+            (Default value: <strong>10</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Blocks</strong></td>
+        <td>
+          <p>
+            The number of blocks into which the user and item matrix are grouped.
+            The fewer blocks one uses, the less data is sent redundantly.
+            However, bigger blocks entail bigger update messages which have to be stored on the heap.
+            If the algorithm fails because of an OutOfMemoryException, then try to increase the number of blocks.
+            (Default value: <strong>None</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Seed</strong></td>
+        <td>
+          <p>
+            Random seed used to generate the initial item matrix for the algorithm.
+            (Default value: <strong>0</strong>)
+          </p>
+        </td>
+      </tr>
+      <tr>
+        <td><strong>TemporaryPath</strong></td>
+        <td>
+          <p>
+            Path to a temporary directory into which intermediate results are stored.
+            If this value is set, then the algorithm is split into two preprocessing steps, the ALS iteration and a post-processing step which calculates a last ALS half-step.
+            The preprocessing steps calculate the <code>OutBlockInformation</code> and <code>InBlockInformation</code> for the given rating matrix.
+            The results of the individual steps are stored in the specified directory.
+            By splitting the algorithm into multiple smaller steps, Flink does not have to split the available memory amongst too many operators.
+            This allows the system to process bigger individual messages and improves the overall performance.
+            (Default value: <strong>None</strong>)
+          </p>
+        </td>
+      </tr>
+    </tbody>
+  </table>
+
+## Examples
+
+{% highlight scala %}
+// Read input data set from a csv file
+val inputDS: DataSet[(Int, Int, Double)] = env.readCsvFile[(Int, Int, Double)](
+  pathToTrainingFile)
+
+// Setup the ALS learner
+val als = ALS()
+  .setIterations(10)
+  .setNumFactors(10)
+  .setBlocks(100)
+  .setTemporaryPath("hdfs://tempPath")
+
+// Set the other parameters via a parameter map
+val parameters = ParameterMap()
+  .add(ALS.Lambda, 0.9)
+  .add(ALS.Seed, 42L)
+
+// Calculate the factorization
+als.fit(inputDS, parameters)
+
+// Read the testing data set from a csv file
+val testingDS: DataSet[(Int, Int)] = env.readCsvFile[(Int, Int)](pathToData)
+
+// Calculate the ratings according to the matrix factorization
+val predictedRatings = als.predict(testingDS)
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/contribution_guide.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/contribution_guide.md b/docs/dev/libs/ml/contribution_guide.md
new file mode 100644
index 0000000..992232f
--- /dev/null
+++ b/docs/dev/libs/ml/contribution_guide.md
@@ -0,0 +1,106 @@
+---
+mathjax: include
+title: How to Contribute
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The Flink community highly appreciates all sorts of contributions to FlinkML.
+FlinkML offers people interested in machine learning the opportunity to work on a highly active open source project that makes scalable ML a reality.
+The following document describes how to contribute to FlinkML.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Getting Started
+
+In order to get started first read Flink's [contribution guide](http://flink.apache.org/how-to-contribute.html).
+Everything from this guide also applies to FlinkML.
+
+## Pick a Topic
+
+If you are looking for some new ideas you should first look into our [roadmap](https://cwiki.apache.org/confluence/display/FLINK/FlinkML%3A+Vision+and+Roadmap), then you should check out the list of [unresolved issues on JIRA](https://issues.apache.org/jira/issues/?jql=component%20%3D%20%22Machine%20Learning%20Library%22%20AND%20project%20%3D%20FLINK%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20priority%20DESC).
+Once you decide to contribute to one of these issues, you should take ownership of it and track your progress with this issue.
+That way, the other contributors know the state of the different issues and redundant work is avoided.
+
+If you already know what you want to contribute to FlinkML all the better.
+It is still advisable to create a JIRA issue for your idea to tell the Flink community what you want to do, though.
+
+## Testing
+
+New contributions should come with tests to verify the correct behavior of the algorithm.
+The tests help to maintain the algorithm's correctness throughout code changes, e.g. refactorings.
+
+We distinguish between unit tests, which are executed during Maven's test phase, and integration tests, which are executed during Maven's verify phase.
+Maven automatically makes this distinction by using the following naming rules:
+All test cases whose class name ends with a suffix matching the regular expression `(IT|Integration)(Test|Suite|Case)` are considered integration tests.
+The rest are considered unit tests and should only test behavior which is local to the component under test.
+
+An integration test is a test which requires the full Flink system to be started.
+In order to do that properly, all integration test cases have to mix in the trait `FlinkTestBase`.
+This trait will set the right `ExecutionEnvironment` so that the test will be executed on a special `FlinkMiniCluster` designated for testing purposes.
+Thus, an integration test could look like the following:
+
+{% highlight scala %}
+class ExampleITSuite extends FlatSpec with FlinkTestBase {
+  behavior of "An example algorithm"
+
+  it should "do something" in {
+    ...
+  }
+}
+{% endhighlight %}
+
+The test style does not have to be `FlatSpec` but can be any other ScalaTest `Suite` subclass.
+See [ScalaTest testing styles](http://scalatest.org/user_guide/selecting_a_style) for more information.
+
+## Documentation
+
+When contributing new algorithms, you are required to add code comments describing how the algorithm works and the parameters with which the user can control its behavior.
+Additionally, we would like to encourage contributors to add this information to the online documentation.
+The online documentation for FlinkML's components can be found in the directory `docs/libs/ml`.
+
+Every new algorithm is described by a single markdown file.
+This file should contain at least the following points:
+
+1. What the algorithm does
+2. How the algorithm works (or a reference to a description)
+3. A description of the parameters with their default values
+4. A code snippet showing how the algorithm is used
+
+In order to use LaTeX syntax in the markdown file, you have to include `mathjax: include` in the YAML front matter.
+
+{% highlight java %}
+---
+mathjax: include
+htmlTitle: FlinkML - Example title
+title: <a href="../ml">FlinkML</a> - Example title
+---
+{% endhighlight %}
+
+In order to use displayed mathematics, you have to put your LaTeX code in `$$ ... $$`.
+For in-line mathematics, use `$ ... $`.
+Additionally, some predefined LaTeX commands are included in the scope of your markdown file.
+See `docs/_include/latex_commands.html` for the complete list of predefined LaTeX commands.
+
+## Contributing
+
+Once you have implemented the algorithm with adequate test coverage and added documentation, you are ready to open a pull request.
+Details of how to open a pull request can be found [here](http://flink.apache.org/how-to-contribute.html#contributing-code--documentation).

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/cross_validation.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/cross_validation.md b/docs/dev/libs/ml/cross_validation.md
new file mode 100644
index 0000000..943c492
--- /dev/null
+++ b/docs/dev/libs/ml/cross_validation.md
@@ -0,0 +1,171 @@
+---
+mathjax: include
+title: Cross Validation
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+A prevalent problem when utilizing machine learning algorithms is *overfitting*: an algorithm "memorizes" the training data but does a poor job of extrapolating to out-of-sample cases. A common method for dealing with the overfitting problem is to hold back some subset of data from the original training algorithm and then measure the fitted algorithm's performance on this held-out set. This is commonly known as *cross validation*: a model is trained on one subset of data and then *validated* on another subset.
+
+## Cross Validation Strategies
+
+There are several strategies for holding out data. FlinkML has convenience methods for
+- Train-Test Splits
+- Train-Test-Holdout Splits
+- K-Fold Splits
+- Multi-Random Splits
+
+### Train-Test Splits
+
+The simplest method of splitting is the `trainTestSplit`. This split takes a DataSet and a parameter *fraction*.  The *fraction* indicates the portion of the DataSet that should be allocated to the training set. This split also takes two additional optional parameters, *precise* and *seed*.  
+
+By default, the split is done by randomly deciding whether or not an observation is assigned to the training DataSet with probability *fraction*. When *precise* is `true`, however, additional steps are taken to ensure the size of the training set is as close as possible to the length of the DataSet $\cdot$ *fraction*.
+
+The method returns a new `TrainTestDataSet` object which has a `.training` attribute containing the training DataSet and a `.testing` attribute containing the testing DataSet.
+
+
+### Train-Test-Holdout Splits
+
+In some cases, algorithms have been known to 'learn' the testing set. To combat this issue, a train-test-holdout strategy introduces a second held-out set, aptly called the *holdout* set.
+
+Traditionally, training and testing would be performed as normal, and then a final test of the algorithm on the holdout set would be done. Ideally, prediction errors/model scores on the holdout set would not be significantly different from those observed on the testing set.
+
+In a train-test-holdout strategy we sacrifice the sample size of the initial fitting algorithm for increased confidence that our model is not overfit.
+
+When using the `trainTestHoldout` splitter, the *fraction* `Double` is replaced by a *fraction* array of length three. The first element corresponds to the portion to be used for training, the second for testing, and the third for the holdout set. The weights of this array are *relative*, e.g. the array `Array(3.0, 2.0, 1.0)` would result in approximately 50% of the observations being in the training set, 33% of the observations in the testing set, and 17% of the observations in the holdout set.
+
+### K-Fold Splits
+
+In a *k-fold* strategy, the DataSet is split into *k* equal subsets. Then for each of the *k* subsets, a `TrainTestDataSet` is created where the subset is the `.training` DataSet, and the remaining subsets are the `.testing` set.
+
+For each training set, an algorithm is trained and then evaluated based on the predictions for the associated testing set. When an algorithm has consistent scores (e.g. prediction errors) across the held-out datasets, we can have some confidence that our approach (e.g. choice of algorithm / algorithm parameters / number of iterations) is robust against overfitting.
+
+<a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation">K-Fold Cross Validation</a>
+
+### Multi-Random Splits
+
+The *multi-random* strategy can be thought of as a more general form of the *train-test-holdout* strategy. In fact, `.trainTestHoldoutSplit` is a simple wrapper for `multiRandomSplit`, which also packages the datasets into a `trainTestHoldoutDataSet` object.
+
+The first major difference is that `multiRandomSplit` takes an array of fractions of any length, e.g. one can create multiple holdout sets. Alternatively, one could think of `kFoldSplit` as a wrapper for `multiRandomSplit` (which it is), the difference being that `kFoldSplit` creates subsets of approximately equal size, whereas `multiRandomSplit` will create subsets of any size.
+
+The second major difference is that `multiRandomSplit` returns an array of DataSets, equal in length to, and with sizes proportional to, the *fraction array* that it was passed as an argument.
+
+## Parameters
+
+The various `Splitter` methods share many parameters.
+
+ <table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Parameter</th>
+      <th class="text-center">Type</th>
+      <th class="text-center">Description</th>
+      <th class="text-right">Used by Method</th>
+    </tr>
+  </thead>
+
+  <tbody>
+    <tr>
+      <td><code>input</code></td>
+      <td><code>DataSet[Any]</code></td>
+      <td>DataSet to be split.</td>
+      <td>
+      <code>randomSplit</code><br>
+      <code>multiRandomSplit</code><br>
+      <code>kFoldSplit</code><br>
+      <code>trainTestSplit</code><br>
+      <code>trainTestHoldoutSplit</code>
+      </td>
+    </tr>
+    <tr>
+      <td><code>seed</code></td>
+      <td><code>Long</code></td>
+      <td>
+        <p>
+          Used for seeding the random number generator which sorts DataSets into other DataSets.
+        </p>
+      </td>
+      <td>
+      <code>randomSplit</code><br>
+      <code>multiRandomSplit</code><br>
+      <code>kFoldSplit</code><br>
+      <code>trainTestSplit</code><br>
+      <code>trainTestHoldoutSplit</code>
+      </td>
+    </tr>
+    <tr>
+      <td><code>precise</code></td>
+      <td><code>Boolean</code></td>
+      <td>When true, make additional effort to make DataSets as close to the prescribed proportions as possible.</td>
+      <td>
+      <code>randomSplit</code><br>
+      <code>trainTestSplit</code>
+      </td>
+    </tr>
+    <tr>
+      <td><code>fraction</code></td>
+      <td><code>Double</code></td>
+      <td>The portion of the `input` to assign to the first or <code>.training</code> DataSet. Must be in the range (0,1).</td>
+      <td><code>randomSplit</code><br>
+        <code>trainTestSplit</code>
+      </td>
+    </tr>
+    <tr>
+      <td><code>fracArray</code></td>
+      <td><code>Array[Double]</code></td>
+      <td>An array that prescribes the proportions of the output datasets (proportions need not sum to 1 or be within the range (0,1))</td>
+      <td>
+      <code>multiRandomSplit</code><br>
+      <code>trainTestHoldoutSplit</code>
+      </td>
+    </tr>
+    <tr>
+      <td><code>kFolds</code></td>
+      <td><code>Int</code></td>
+      <td>The number of subsets to break the <code>input</code> DataSet into.</td>
+      <td><code>kFoldSplit</code></td>
+      </tr>
+
+  </tbody>
+</table>
+
+## Examples
+
+{% highlight scala %}
+// An input dataset; it does not have to be of type LabeledVector
+val data: DataSet[LabeledVector] = ...
+
+// A Simple Train-Test-Split
+val dataTrainTest: TrainTestDataSet = Splitter.trainTestSplit(data, 0.6, true)
+
+// Create a train test holdout DataSet
+val dataTrainTestHO: trainTestHoldoutDataSet = Splitter.trainTestHoldoutSplit(data, Array(6.0, 3.0, 1.0))
+
+// Create an Array of K TrainTestDataSets
+val dataKFolded: Array[TrainTestDataSet] =  Splitter.kFoldSplit(data, 10)
+
+// create an array of 5 datasets
+val dataMultiRandom: Array[DataSet[T]] = Splitter.multiRandomSplit(data, Array(0.5, 0.1, 0.1, 0.1, 0.1))
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/ml/distance_metrics.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/distance_metrics.md b/docs/dev/libs/ml/distance_metrics.md
new file mode 100644
index 0000000..1dbd002
--- /dev/null
+++ b/docs/dev/libs/ml/distance_metrics.md
@@ -0,0 +1,107 @@
+---
+mathjax: include
+title: Distance Metrics
+nav-parent_id: ml
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Description
+
+Different metrics of distance are convenient for different types of analysis. FlinkML provides
+built-in implementations for many standard distance metrics. You can create custom
+distance metrics by implementing the `DistanceMetric` trait.
+
+## Built-in Implementations
+
+Currently, FlinkML supports the following metrics:
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 20%">Metric</th>
+        <th class="text-center">Description</th>
+      </tr>
+    </thead>
+
+    <tbody>
+      <tr>
+        <td><strong>Euclidean Distance</strong></td>
+        <td>
+          $$d(\x, \y) = \sqrt{\sum_{i=1}^n \left(x_i - y_i \right)^2}$$
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Squared Euclidean Distance</strong></td>
+        <td>
+          $$d(\x, \y) = \sum_{i=1}^n \left(x_i - y_i \right)^2$$
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Cosine Similarity</strong></td>
+        <td>
+          $$d(\x, \y) = 1 - \frac{\x^T \y}{\Vert \x \Vert \Vert \y \Vert}$$
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Chebyshev Distance</strong></td>
+        <td>
+          $$d(\x, \y) = \max_{i}\left(\left \vert x_i - y_i \right\vert \right)$$
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Manhattan Distance</strong></td>
+        <td>
+          $$d(\x, \y) = \sum_{i=1}^n \left\vert x_i - y_i \right\vert$$
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Minkowski Distance</strong></td>
+        <td>
+          $$d(\x, \y) = \left( \sum_{i=1}^{n} \left\vert x_i - y_i \right\vert^p \right)^{\rfrac{1}{p}}$$
+        </td>
+      </tr>
+      <tr>
+        <td><strong>Tanimoto Distance</strong></td>
+        <td>
+          $$d(\x, \y) = 1 - \frac{\x^T\y}{\Vert \x \Vert^2 + \Vert \y \Vert^2 - \x^T\y}$$
+          with $\x$ and $\y$ being bit-vectors
+        </td>
+      </tr>
+    </tbody>
+  </table>
+
+## Custom Implementation
+
+You can create your own distance metric by implementing the `DistanceMetric` trait.
+
+{% highlight scala %}
+class MyDistance extends DistanceMetric {
+  override def distance(a: Vector, b: Vector) = ... // your implementation for distance metric
+}
+
+object MyDistance {
+  def apply() = new MyDistance()
+}
+
+val myMetric = MyDistance()
+{% endhighlight %}
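+
+For instance, a weighted Manhattan distance might look as follows. This is a sketch: it assumes, as
+the built-in metrics do, that `Vector` exposes `size` and per-index `apply`, and the `weights`
+constructor argument is hypothetical.
+
+{% highlight scala %}
+import org.apache.flink.ml.math.Vector
+import org.apache.flink.ml.metrics.distances.DistanceMetric
+
+// Sketch of a per-dimension weighted Manhattan distance.
+class WeightedManhattanDistance(weights: Array[Double]) extends DistanceMetric {
+  override def distance(a: Vector, b: Vector): Double = {
+    require(a.size == b.size && a.size == weights.length)
+    var sum = 0.0
+    var i = 0
+    while (i < a.size) {
+      sum += weights(i) * math.abs(a(i) - b(i))
+      i += 1
+    }
+    sum
+  }
+}
+{% endhighlight %}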


[23/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/index.md
----------------------------------------------------------------------
diff --git a/docs/concepts/index.md b/docs/concepts/index.md
new file mode 100644
index 0000000..a9638451
--- /dev/null
+++ b/docs/concepts/index.md
@@ -0,0 +1,249 @@
+---
+title: Concepts
+nav-id: concepts
+nav-pos: 1
+nav-title: '<i class="fa fa-map-o" aria-hidden="true"></i> Concepts'
+nav-parent_id: root
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+## Programs and Dataflows
+
+The basic building blocks of Flink programs are **streams** and **transformations** (note that a DataSet is internally
+also a stream). A *stream* is an intermediate result, and a *transformation* is an operation that takes one or more streams
+as input, and computes one or more result streams from them.
+
+When executed, Flink programs are mapped to **streaming dataflows**, consisting of **streams** and transformation **operators**.
+Each dataflow starts with one or more **sources** and ends in one or more **sinks**. The dataflows may resemble
+arbitrary **directed acyclic graphs** *(DAGs)*. (Special forms of cycles are permitted via *iteration* constructs, but we
+omit this here for simplicity.)
+
+In most cases, there is a one-to-one correspondence between the transformations in the program and the operators
+in the dataflow. Sometimes, however, one transformation may consist of multiple transformation operators.
+
+<img src="{{ site.baseurl }}/fig/program_dataflow.svg" alt="A DataStream program, and its dataflow." class="offset" width="80%" />
+
+{% top %}
+
+### Parallel Dataflows
+
+Programs in Flink are inherently parallel and distributed. *Streams* are split into **stream partitions** and
+*operators* are split into **operator subtasks**. The operator subtasks execute independently from each other,
+in different threads and on different machines or containers.
+
+The number of operator subtasks is the **parallelism** of that particular operator. The parallelism of a stream
+is always that of its producing operator. Different operators of the program may have a different parallelism.
+
+<img src="{{ site.baseurl }}/fig/parallel_dataflow.svg" alt="A parallel dataflow" class="offset" width="80%" />
+
+Streams can transport data between two operators in a *one-to-one* (or *forwarding*) pattern, or in a *redistributing* pattern:
+
+  - **One-to-one** streams (for example between the *source* and the *map()* operators) preserve the partitioning and order of
+    elements. That means that subtask[1] of the *map()* operator will see the same elements in the same order, as they
+    were produced by subtask[1] of the *source* operator.
+
+  - **Redistributing** streams (between *map()* and *keyBy/window*, as well as between *keyBy/window* and *sink*) change
+    the partitioning of streams. Each *operator subtask* sends data to different target subtasks,
+    depending on the selected transformation. Examples are *keyBy()* (re-partitions by hash code), *broadcast()*, or
+    *rebalance()* (random redistribution).
+    In a *redistributing* exchange, order among elements is only preserved for each pair of sending- and receiving
+    task (for example subtask[1] of *map()* and subtask[2] of *keyBy/window*).
+
+{% top %}
+
+### Tasks & Operator Chains
+
+For distributed execution, Flink *chains* operator subtasks together into *tasks*. Each task is executed by one thread.
+Chaining operators together into tasks is a useful optimization: it reduces the overhead of thread-to-thread
+handover and buffering, and increases overall throughput while decreasing latency.
+The chaining behavior can be configured in the APIs.
+
+The sample dataflow in the figure below is executed with five subtasks, and hence with five parallel threads.
+
+<img src="{{ site.baseurl }}/fig/tasks_chains.svg" alt="Operator chaining into Tasks" class="offset" width="80%" />
+
+{% top %}
+
+## Distributed Execution
+
+**Master, Worker, Client**
+
+The Flink runtime consists of two types of processes:
+
+  - The **master** processes (also called *JobManagers*) coordinate the distributed execution. They schedule tasks, coordinate
+    checkpoints, coordinate recovery on failures, etc.
+
+    There is always at least one master process. A high-availability setup will have multiple master processes, out of
+    which one is always the *leader*, and the others are *standby*.
+
+  - The **worker** processes (also called *TaskManagers*) execute the *tasks* (or more specifically, the subtasks) of a dataflow,
+    and buffer and exchange the data *streams*.
+
+    There must always be at least one worker process.
+
+The master and worker processes can be started in an arbitrary fashion: Directly on the machines, via containers, or via
+resource frameworks like YARN. Workers connect to masters, announcing themselves as available, and get work assigned.
+
+The **client** is not part of the runtime and program execution, but is used to prepare and send a dataflow to the master.
+After that, the client can disconnect, or stay connected to receive progress reports. The client runs either as part of the
+Java/Scala program that triggers the execution, or in the command line process `./bin/flink run ...`.
+
+<img src="{{ site.baseurl }}/fig/processes.svg" alt="The processes involved in executing a Flink dataflow" class="offset" width="80%" />
+
+{% top %}
+
+### Workers, Slots, Resources
+
+Each worker (TaskManager) is a *JVM process*, and may execute one or more subtasks in separate threads.
+To control how many tasks a worker accepts, a worker has so-called **task slots** (at least one).
+
+Each *task slot* represents a fixed subset of resources of the TaskManager. A TaskManager with three slots, for example,
+will dedicate 1/3 of its managed memory to each slot. Slotting the resources means that a subtask will not
+compete with subtasks from other jobs for managed memory, but instead has a certain amount of reserved
+managed memory. Note that no CPU isolation happens here, slots currently only separate managed memory of tasks.
+
+Adjusting the number of task slots thus allows users to define how subtasks are isolated against each other.
+Having one slot per TaskManager means each task group runs in a separate JVM (which can be started in a
+separate container, for example). Having multiple slots
+means more subtasks share the same JVM. Tasks in the same JVM share TCP connections (via multiplexing) and
+heartbeat messages. They may also share data sets and data structures, thus reducing the per-task overhead.
+
+<img src="{{ site.baseurl }}/fig/tasks_slots.svg" alt="A TaskManager with Task Slots and Tasks" class="offset" width="80%" />
+
+By default, Flink allows subtasks to share slots if they are subtasks of different tasks but from the same
+job. The result is that one slot may hold an entire pipeline of the job. Allowing this *slot sharing*
+has two main benefits:
+
+  - A Flink cluster needs exactly as many task slots as the highest parallelism used in the job.
+    No need to calculate how many tasks (with varying parallelism) a program contains in total.
+
+  - It is easier to get better resource utilization. Without slot sharing, the non-intensive
+    *source/map()* subtasks would block as many resources as the resource intensive *window* subtasks.
+    With slot sharing, increasing the base parallelism from two to six yields full utilization of the
+    slotted resources, while still making sure that each TaskManager gets only a fair share of the
+    heavy subtasks.
+
+The slot sharing behavior can be controlled in the APIs, to prevent sharing where it is undesirable.
+The mechanism for that is *resource groups*, which define what (sub)tasks may share slots.
+
+As a rule-of-thumb, a good default number of task slots would be the number of CPU cores.
+With hyper threading, each slot then takes 2 or more hardware thread contexts.
+
+<img src="{{ site.baseurl }}/fig/slot_sharing.svg" alt="TaskManagers with shared Task Slots" class="offset" width="80%" />
+
+{% top %}
+
+## Time and Windows
+
+Aggregating events (e.g., counts, sums) works slightly differently on streams than in batch processing.
+For example, it is impossible to first count all elements in the stream and then return the count,
+because streams are in general infinite (unbounded). Instead, aggregates on streams (counts, sums, etc.)
+are scoped by **windows**, such as *"count over the last 5 minutes"*, or *"sum of the last 100 elements"*.
+
+Windows can be *time driven* (example: every 30 seconds) or *data driven* (example: every 100 elements).
+One typically distinguishes different types of windows, such as *tumbling windows* (no overlap),
+*sliding windows* (with overlap), and *session windows* (gap of activity).
+
+<img src="{{ site.baseurl }}/fig/windows.svg" alt="Time- and Count Windows" class="offset" width="80%" />
+
+More window examples can be found in this [blog post](https://flink.apache.org/news/2015/12/04/Introducing-windows.html).
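+
+For example, a sketch of a time-driven window in the Scala DataStream API (it assumes an existing
+stream of `(word, count)` pairs named `stream`):
+
+{% highlight scala %}
+import org.apache.flink.streaming.api.scala._
+import org.apache.flink.streaming.api.windowing.time.Time
+
+// Sketch: sum the count field over tumbling 5-minute windows, per key.
+// Assumes stream: DataStream[(String, Int)].
+val windowedCounts = stream
+  .keyBy(0)                     // partition the stream by the word field
+  .timeWindow(Time.minutes(5))  // tumbling, time-driven window
+  .sum(1)                       // aggregate the count field per window
+{% endhighlight %}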
+
+{% top %}
+
+### Time
+
+When referring to time in a streaming program (for example to define windows), one can refer to different notions
+of time:
+
+  - **Event Time** is the time when an event was created. It is usually described by a timestamp in the events,
+    for example attached by the producing sensor, or the producing service. Flink accesses event timestamps
+    via [timestamp assigners]({{ site.baseurl }}/dev/event_timestamps_watermarks.html).
+
+  - **Ingestion time** is the time when an event enters the Flink dataflow at the source operator.
+
+  - **Processing Time** is the local time at each operator that performs a time-based operation.
+
+<img src="{{ site.baseurl }}/fig/event_ingestion_processing_time.svg" alt="Event Time, Ingestion Time, and Processing Time" class="offset" width="80%" />
+
+More details on how to handle time are in the [event time docs]({{ site.baseurl }}/dev/event_time.html).
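+
+For example, the notion of time used by a program is selected on the execution environment (a sketch
+using the Scala API):
+
+{% highlight scala %}
+import org.apache.flink.streaming.api.TimeCharacteristic
+import org.apache.flink.streaming.api.scala._
+
+// Sketch: base all time-dependent operations of this program on event time.
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
+{% endhighlight %}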
+
+{% top %}
+
+## State and Fault Tolerance
+
+While many operations in a dataflow simply look at one individual *event at a time* (for example an event parser),
+some operations remember information across individual events (for example window operators).
+These operations are called **stateful**.
+
+The state of stateful operations is maintained in what can be thought of as an embedded key/value store.
+The state is partitioned and distributed strictly together with the streams that are read by the
+stateful operators. Hence, access to the key/value state is only possible on *keyed streams*, after a *keyBy()* function,
+and is restricted to the values associated with the current event's key. Aligning the keys of streams and state
+makes sure that all state updates are local operations, guaranteeing consistency without transaction overhead.
+This alignment also allows Flink to redistribute the state and adjust the stream partitioning transparently.
+
+<img src="{{ site.baseurl }}/fig/state_partitioning.svg" alt="State and Partitioning" class="offset" width="50%" />
+
+{% top %}
+
+### Checkpoints for Fault Tolerance
+
+Flink implements fault tolerance using a combination of **stream replay** and **checkpoints**. A checkpoint
+defines a consistent point in streams and state from which a streaming dataflow can resume, and maintain consistency
+*(exactly-once processing semantics)*. The events and state updates since the last checkpoint are replayed from the input streams.
+
+The checkpoint interval is a means of trading off the overhead of fault tolerance during execution, with the recovery time (the amount
+of events that need to be replayed).
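+
+For example, the checkpoint interval is configured on the execution environment (a sketch; the
+interval value is arbitrary):
+
+{% highlight scala %}
+// Sketch: draw a checkpoint of the streaming dataflow every 10 seconds.
+// Assumes env is a StreamExecutionEnvironment.
+env.enableCheckpointing(10000) // checkpoint interval in milliseconds
+{% endhighlight %}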
+
+More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{ site.baseurl }}/internals/stream_checkpointing.html).
+
+<img src="{{ site.baseurl }}/fig/checkpoints.svg" alt="checkpoints and snapshots" class="offset" width="60%" />
+
+{% top %}
+
+### State Backends
+
+The exact data structures in which the key/value indexes are stored depend on the chosen **state backend**. One state backend
+stores data in an in-memory hash map, another state backend uses [RocksDB](http://rocksdb.org) as the key/value index.
+In addition to defining the data structure that holds the state, the state backends also implement the logic to
+take a point-in-time snapshot of the key/value state and store that snapshot as part of a checkpoint.
+
+{% top %}
+
+## Batch on Streaming
+
+Flink executes batch programs as a special case of streaming programs, where the streams are bounded (finite number of elements).
+A *DataSet* is treated internally as a stream of data. The concepts above thus apply to batch programs in the
+same way as they apply to streaming programs, with minor exceptions:
+
+  - Programs in the DataSet API do not use checkpoints. Recovery happens by fully replaying the streams.
+    That is possible, because inputs are bounded. This pushes the cost more towards the recovery,
+    but makes the regular processing cheaper, because it avoids checkpoints.
+
+  - Stateful operations in the DataSet API use simplified in-memory/out-of-core data structures, rather than
+    key/value indexes.
+
+  - The DataSet API introduces special synchronized (superstep-based) iterations, which are only possible on
+    bounded streams. For details, check out the [iteration docs]({{ site.baseurl }}/dev/batch/iterations.html).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/api_concepts.md
----------------------------------------------------------------------
diff --git a/docs/dev/api_concepts.md b/docs/dev/api_concepts.md
new file mode 100644
index 0000000..468085a
--- /dev/null
+++ b/docs/dev/api_concepts.md
@@ -0,0 +1,1349 @@
+---
+title: "Basic API Concepts"
+nav-parent_id: apis
+nav-pos: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Flink programs are regular programs that implement transformations on distributed collections
+(e.g., filtering, mapping, updating state, joining, grouping, defining windows, aggregating).
+Collections are initially created from sources (e.g., by reading files, from Kafka, or from local
+collections). Results are returned via sinks, which may for example write the data to
+(distributed) files, or to standard output (for example, the command line terminal).
+Flink programs run in a variety of contexts, standalone, or embedded in other programs.
+The execution can happen in a local JVM, or on clusters of many machines.
+
+Depending on the type of data source, i.e. bounded or unbounded, you would write either
+a batch program or a streaming program, where the DataSet API is used for the former
+and the DataStream API is used for the latter. This guide will introduce the basic concepts
+that are common to both APIs, but please see our
+[Streaming Guide]({{ site.baseurl }}/dev/datastream_api.html) and
+[Batch Guide]({{ site.baseurl }}/dev/batch/index.html) for concrete information about
+writing programs with each API.
+
+**NOTE:** When showing actual examples of how the APIs can be used, we will use
+`StreamExecutionEnvironment` and the `DataStream` API. The concepts are exactly the same
+in the `DataSet` API; just replace them with `ExecutionEnvironment` and `DataSet`.
+
+* This will be replaced by the TOC
+{:toc}
+
+Linking with Flink
+------------------
+
+To write programs with Flink, you need to include the Flink library corresponding to
+your programming language in your project.
+
+The simplest way to do this is to use one of the quickstart scripts: either for
+[Java]({{ site.baseurl }}/quickstart/java_api_quickstart.html) or for [Scala]({{ site.baseurl }}/quickstart/scala_api_quickstart.html). They
+create a blank project from a template (a Maven Archetype), which sets up everything for you. To
+manually create the project, you can use the archetype and create a project by calling:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight bash %}
+mvn archetype:generate \
+    -DarchetypeGroupId=org.apache.flink \
+    -DarchetypeArtifactId=flink-quickstart-java \
+    -DarchetypeVersion={{site.version }}
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight bash %}
+mvn archetype:generate \
+    -DarchetypeGroupId=org.apache.flink \
+    -DarchetypeArtifactId=flink-quickstart-scala \
+    -DarchetypeVersion={{site.version }}
+{% endhighlight %}
+</div>
+</div>
+
+The archetypes work for stable releases and preview versions (`-SNAPSHOT`).
+
+If you want to add Flink to an existing Maven project, add the following entry to your
+*dependencies* section in the *pom.xml* file of your project:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight xml %}
+<!-- Use this dependency if you are using the DataStream API -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+<!-- Use this dependency if you are using the DataSet API -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-java</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight xml %}
+<!-- Use this dependency if you are using the DataStream API -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-scala{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+<!-- Use this dependency if you are using the DataSet API -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-scala{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+**Important:** When working with the Scala API you must have one of these two imports:
+{% highlight scala %}
+import org.apache.flink.api.scala._
+{% endhighlight %}
+
+or
+
+{% highlight scala %}
+import org.apache.flink.api.scala.createTypeInformation
+{% endhighlight %}
+
+The reason is that Flink analyzes the types that are used in a program and generates serializers
+and comparators for them. By having either of those imports, you enable an implicit conversion
+that creates the type information for Flink operations.
+</div>
+</div>
+
+#### Scala Dependency Versions
+
+Because Scala 2.10 binaries are not compatible with Scala 2.11 binaries, we provide multiple artifacts
+to support both Scala versions.
+
+Starting from the 0.10 line, we cross-build all Flink modules for both 2.10 and 2.11. If you want
+to run your program on Flink with Scala 2.11, you need to add a `_2.11` suffix to the `artifactId`
+values of the Flink modules in your dependencies section.
+
+If you want to build Flink with Scala 2.11, please check the
+[build guide]({{ site.baseurl }}/setup/building.html#scala-versions).
+
+#### Hadoop Dependency Versions
+
+If you are using Flink together with Hadoop, the version of the dependency may vary depending on the
+version of Hadoop (or more specifically, HDFS) that you want to use Flink with. Please refer to the
+[downloads page](http://flink.apache.org/downloads.html) for a list of available versions, and instructions
+on how to link with custom versions of Hadoop.
+
+In order to link against the latest SNAPSHOT versions of the code, please follow
+[this guide](http://flink.apache.org/how-to-contribute.html#snapshots-nightly-builds).
+
+The *flink-clients* dependency is only necessary to invoke the Flink program locally (for example to
+run it standalone for testing and debugging).  If you intend to only export the program as a JAR
+file and [run it on a cluster]({{ site.baseurl }}/dev/cluster_execution.html), you can skip that dependency.
+
+{% top %}
+
+DataSet and DataStream
+----------------------
+
+Flink has the special classes `DataSet` and `DataStream` to represent data in a program. You
+can think of them as immutable collections of data that can contain duplicates. In the case
+of `DataSet` the data is finite while for a `DataStream` the number of elements can be unbounded.
+
+These collections differ from regular Java collections in some key ways. First, they
+are immutable, meaning that once they are created you cannot add or remove elements. You can also
+not simply inspect the elements inside.
+
+A collection is initially created by adding a source in a Flink program and new collections are
+derived from these by transforming them using API methods such as `map`, `filter` and so on.
+
+Anatomy of a Flink Program
+--------------------------
+
+Flink programs look like regular programs that transform collections of data.
+Each program consists of the same basic parts:
+
+1. Obtain an `execution environment`,
+2. Load/create the initial data,
+3. Specify transformations on this data,
+4. Specify where to put the results of your computations,
+5. Trigger the program execution
+
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+
+We will now give an overview of each of those steps; please refer to the respective sections for
+more details. Note that all core classes of the Java DataSet API are found in the package
+{% gh_link /flink-java/src/main/java/org/apache/flink/api/java "org.apache.flink.api.java" %}
+while the classes of the Java DataStream API can be found in
+{% gh_link /flink-streaming-java/src/main/java/org/apache/flink/streaming/api "org.apache.flink.streaming.api" %}.
+
+The `StreamExecutionEnvironment` is the basis for all Flink programs. You can
+obtain one using these static methods on `StreamExecutionEnvironment`:
+
+{% highlight java %}
+getExecutionEnvironment()
+
+createLocalEnvironment()
+
+createRemoteEnvironment(String host, int port, String... jarFiles)
+{% endhighlight %}
+
+Typically, you only need to use `getExecutionEnvironment()`, since this
+will do the right thing depending on the context: if you are executing
+your program inside an IDE or as a regular Java program it will create
+a local environment that will execute your program on your local machine. If
+you created a JAR file from your program, and invoke it through the
+[command line]({{ site.baseurl }}/setup/cli.html), the Flink cluster manager
+will execute your main method and `getExecutionEnvironment()` will return
+an execution environment for executing your program on a cluster.
+
+For specifying data sources, the execution environment has several methods
+to read from files: you can just read them line by line,
+as CSV files, or using completely custom data input formats. To just read
+a text file as a sequence of lines, you can use:
+
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+DataStream<String> text = env.readTextFile("file:///path/to/file");
+{% endhighlight %}
+
+This will give you a DataStream on which you can then apply transformations to create new
+derived DataStreams.
+
+You apply transformations by calling methods on a DataStream with a transformation
+function. For example, a map transformation looks like this:
+
+{% highlight java %}
+DataStream<String> input = ...;
+
+DataStream<Integer> parsed = input.map(new MapFunction<String, Integer>() {
+    @Override
+    public Integer map(String value) {
+        return Integer.parseInt(value);
+    }
+});
+{% endhighlight %}
+
+This will create a new DataStream by converting every String in the original
+collection to an Integer.
+
+Once you have a DataStream containing your final results, you can write it to an outside system
+by creating a sink. These are just some example methods for creating a sink:
+
+{% highlight java %}
+writeAsText(String path)
+
+print()
+{% endhighlight %}
+
+</div>
+<div data-lang="scala" markdown="1">
+
+We will now give an overview of each of those steps; please refer to the respective sections for
+more details. Note that all core classes of the Scala DataSet API are found in the package
+{% gh_link /flink-scala/src/main/scala/org/apache/flink/api/scala "org.apache.flink.api.scala" %}
+while the classes of the Scala DataStream API can be found in
+{% gh_link /flink-streaming-scala/src/main/java/org/apache/flink/streaming/api/scala "org.apache.flink.streaming.api.scala" %}.
+
+The `StreamExecutionEnvironment` is the basis for all Flink programs. You can
+obtain one using these static methods on `StreamExecutionEnvironment`:
+
+{% highlight scala %}
+getExecutionEnvironment()
+
+createLocalEnvironment()
+
+createRemoteEnvironment(host: String, port: Int, jarFiles: String*)
+{% endhighlight %}
+
+Typically, you only need to use `getExecutionEnvironment()`, since this
+will do the right thing depending on the context: if you are executing
+your program inside an IDE or as a regular Java program it will create
+a local environment that will execute your program on your local machine. If
+you created a JAR file from your program, and invoke it through the
+[command line]({{ site.baseurl }}/apis/cli.html), the Flink cluster manager
+will execute your main method and `getExecutionEnvironment()` will return
+an execution environment for executing your program on a cluster.
+
+For specifying data sources, the execution environment has several methods
+to read from files: you can just read them line by line,
+as CSV files, or using completely custom data input formats. To just read
+a text file as a sequence of lines, you can use:
+
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+
+val text: DataStream[String] = env.readTextFile("file:///path/to/file")
+{% endhighlight %}
+
+This will give you a DataStream on which you can then apply transformations to create new
+derived DataStreams.
+
+You apply transformations by calling methods on DataSet with a transformation
+functions. For example, a map transformation looks like this:
+
+{% highlight scala %}
+val input: DataSet[String] = ...
+
+val mapped = input.map { x => x.toInt }
+{% endhighlight %}
+
+This will create a new DataStream by converting every String in the original
+collection to an Integer.
+
+Once you have a DataStream containing your final results, you can write it to an outside system
+by creating a sink. These are just some example methods for creating a sink:
+
+{% highlight scala %}
+writeAsText(path: String)
+
+print()
+{% endhighlight %}
+
+</div>
+</div>
+
+Once you specified the complete program you need to **trigger the program execution** by calling
+`execute()` on the `StreamExecutionEnvironment`.
+Depending on the type of the `ExecutionEnvironment` the execution will be triggered on your local
+machine or submit your program for execution on a cluster.
+
+The `execute()` method is returning a `JobExecutionResult`, this contains execution
+times and accumulator results.
+
+Please see the [Streaming Guide]({{ site.baseurl }}/dev/datastream_api.html)
+for information about streaming data sources and sink and for more in-depths information
+about the supported transformations on DataStream.
+
+Check out the [Batch Guide]({{ site.baseurl }}/dev/batch/index.html)
+for information about batch data sources and sink and for more in-depths information
+about the supported transformations on DataSet.
+
+
+{% top %}
+
+Lazy Evaluation
+---------------
+
+All Flink programs are executed lazily: When the program's main method is executed, the data loading
+and transformations do not happen directly. Rather, each operation is created and added to the
+program's plan. The operations are actually executed when the execution is explicitly triggered by
+an `execute()` call on the execution environment. Whether the program is executed locally
+or on a cluster depends on the type of execution environment
+
+The lazy evaluation lets you construct sophisticated programs that Flink executes as one
+holistically planned unit.
+
+{% top %}
+
+Specifying Keys
+---------------
+
+Some transformations (join, coGroup, keyBy, groupBy) require that a key be defined on
+a collection of elements. Other transformations (Reduce, GroupReduce,
+Aggregate, Windows) allow data being grouped on a key before they are
+applied.
+
+A DataSet is grouped as
+{% highlight java %}
+DataSet<...> input = // [...]
+DataSet<...> reduced = input
+  .groupBy(/*define key here*/)
+  .reduceGroup(/*do something*/);
+{% endhighlight %}
+
+while a key can be specified on a DataStream using
+{% highlight java %}
+DataStream<...> input = // [...]
+DataStream<...> windowed = input
+  .key(/*define key here*/)
+  .window(/*window specification*/);
+{% endhighlight %}
+
+The data model of Flink is not based on key-value pairs. Therefore,
+you do not need to physically pack the data set types into keys and
+values. Keys are "virtual": they are defined as functions over the
+actual data to guide the grouping operator.
+
+**NOTE:** In the following discussion we will use the `DataStream` API and `keyBy`.
+For the DataSet API you just have to replace by `DataSet` and `groupBy`.
+
+### Define keys for Tuples
+{:.no_toc}
+
+The simplest case is grouping Tuples on one or more
+fields of the Tuple:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Tuple3<Integer,String,Long>> input = // [...]
+KeyedStream<Tuple3<Integer,String,Long> keyed = input.keyBy(0)
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataStream[(Int, String, Long)] = // [...]
+val keyed = input.keyBy(0)
+{% endhighlight %}
+</div>
+</div>
+
+The tuples is grouped on the first field (the one of
+Integer type).
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+DataStream<Tuple3<Integer,String,Long>> input = // [...]
+KeyedStream<Tuple3<Integer,String,Long> keyed = input.keyBy(0,1)
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val input: DataSet[(Int, String, Long)] = // [...]
+val grouped = input.groupBy(0,1)
+{% endhighlight %}
+</div>
+</div>
+
+Here, we group the tuples on a composite key consisting of the first and the
+second field.
+
+A note on nested Tuples: If you have a DataStream with a nested tuple, such as:
+
+{% highlight java %}
+DataStream<Tuple3<Tuple2<Integer, Float>,String,Long>> ds;
+{% endhighlight %}
+
+Specifying `keyBy(0)` will cause the system to use the full `Tuple2` as a key (with the Integer and Float being the key). If you want to "navigate" into the nested `Tuple2`, you have to use field expression keys which are explained below.
+
+### Define keys using Field Expressions
+{:.no_toc}
+
+You can use String-based field expressions to reference nested fields and define keys for grouping, sorting, joining, or coGrouping.
+
+Field expressions make it very easy to select fields in (nested) composite types such as [Tuple](#tuples-and-case-classes) and [POJO](#pojos) types.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+In the example below, we have a `WC` POJO with two fields "word" and "count". To group by the field `word`, we just pass its name to the `groupBy()` function.
+{% highlight java %}
+// some ordinary POJO (Plain old Java Object)
+public class WC {
+  public String word;
+  public int count;
+}
+DataStream<WC> words = // [...]
+DataStream<WC> wordCounts = words.keyBy("word").window(/*window specification*/);
+{% endhighlight %}
+
+**Field Expression Syntax**:
+
+- Select POJO fields by their field name. For example `"user"` refers to the "user" field of a POJO type.
+
+- Select Tuple fields by their field name or 0-offset field index. For example `"f0"` and `"5"` refer to the first and sixth field of a Java Tuple type, respectively.
+
+- You can select nested fields in POJOs and Tuples. For example `"user.zip"` refers to the "zip" field of a POJO which is stored in the "user" field of a POJO type. Arbitrary nesting and mixing of POJOs and Tuples is supported such as `"f1.user.zip"` or `"user.f3.1.zip"`.
+
+- You can select the full type using the `"*"` wildcard expressions. This does also work for types which are not Tuple or POJO types.
+
+**Field Expression Example**:
+
+{% highlight java %}
+public static class WC {
+  public ComplexNestedClass complex; //nested POJO
+  private int count;
+  // getter / setter for private field (count)
+  public int getCount() {
+    return count;
+  }
+  public void setCount(int c) {
+    this.count = c;
+  }
+}
+public static class ComplexNestedClass {
+  public Integer someNumber;
+  public float someFloat;
+  public Tuple3<Long, Long, String> word;
+  public IntWritable hadoopCitizen;
+}
+{% endhighlight %}
+
+These are valid field expressions for the example code above:
+
+- `"count"`: The count field in the `WC` class.
+
+- `"complex"`: Recursively selects all fields of the field complex of POJO type `ComplexNestedClass`.
+
+- `"complex.word.f2"`: Selects the last field of the nested `Tuple3`.
+
+- `"complex.hadoopCitizen"`: Selects the Hadoop `IntWritable` type.
+
+</div>
+<div data-lang="scala" markdown="1">
+
+In the example below, we have a `WC` POJO with two fields "word" and "count". To group by the field `word`, we just pass its name to the `groupBy()` function.
+{% highlight java %}
+// some ordinary POJO (Plain old Java Object)
+class WC(var word: String, var count: Int) {
+  def this() { this("", 0L) }
+}
+val words: DataStream[WC] = // [...]
+val wordCounts = words.keyBy("word").window(/*window specification*/)
+
+// or, as a case class, which is less typing
+case class WC(word: String, count: Int)
+val words: DataStream[WC] = // [...]
+val wordCounts = words.keyBy("word").window(/*window specification*/)
+{% endhighlight %}
+
+**Field Expression Syntax**:
+
+- Select POJO fields by their field name. For example `"user"` refers to the "user" field of a POJO type.
+
+- Select Tuple fields by their 1-offset field name or 0-offset field index. For example `"_1"` and `"5"` refer to the first and sixth field of a Scala Tuple type, respectively.
+
+- You can select nested fields in POJOs and Tuples. For example `"user.zip"` refers to the "zip" field of a POJO which is stored in the "user" field of a POJO type. Arbitrary nesting and mixing of POJOs and Tuples is supported such as `"_2.user.zip"` or `"user._4.1.zip"`.
+
+- You can select the full type using the `"_"` wildcard expressions. This does also work for types which are not Tuple or POJO types.
+
+**Field Expression Example**:
+
+{% highlight scala %}
+class WC(var complex: ComplexNestedClass, var count: Int) {
+  def this() { this(null, 0) }
+}
+
+class ComplexNestedClass(
+    var someNumber: Int,
+    someFloat: Float,
+    word: (Long, Long, String),
+    hadoopCitizen: IntWritable) {
+  def this() { this(0, 0, (0, 0, ""), new IntWritable(0)) }
+}
+{% endhighlight %}
+
+These are valid field expressions for the example code above:
+
+- `"count"`: The count field in the `WC` class.
+
+- `"complex"`: Recursively selects all fields of the field complex of POJO type `ComplexNestedClass`.
+
+- `"complex.word._3"`: Selects the last field of the nested `Tuple3`.
+
+- `"complex.hadoopCitizen"`: Selects the Hadoop `IntWritable` type.
+
+</div>
+</div>
+
+### Define keys using Key Selector Functions
+{:.no_toc}
+
+An additional way to define keys are "key selector" functions. A key selector function
+takes a single element as input and returns the key for the element. The key can be of any type and be derived from arbitrary computations.
+
+The following example shows a key selector function that simply returns the field of an object:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// some ordinary POJO
+public class WC {public String word; public int count;}
+DataStream<WC> words = // [...]
+KeyedStream<WC> kyed = words
+  .keyBy(new KeySelector<WC, String>() {
+     public String getKey(WC wc) { return wc.word; }
+   });
+{% endhighlight %}
+
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// some ordinary case class
+case class WC(word: String, count: Int)
+val words: DataStream[WC] = // [...]
+val keyed = words.keyBy( _.word )
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+Specifying Transformation Functions
+--------------------------
+
+Most transformations require user-defined functions. This section lists different ways
+of how they can be specified
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+#### Implementing an interface
+
+The most basic way is to implement one of the provided interfaces:
+
+{% highlight java %}
+class MyMapFunction implements MapFunction<String, Integer> {
+  public Integer map(String value) { return Integer.parseInt(value); }
+});
+data.map(new MyMapFunction());
+{% endhighlight %}
+
+#### Anonymous classes
+
+You can pass a function as an anonymous class:
+{% highlight java %}
+data.map(new MapFunction<String, Integer> () {
+  public Integer map(String value) { return Integer.parseInt(value); }
+});
+{% endhighlight %}
+
+#### Java 8 Lambdas
+
+Flink also supports Java 8 Lambdas in the Java API. Please see the full [Java 8 Guide]({{ site.baseurl }}/dev/java8.html).
+
+{% highlight java %}
+data.filter(s -> s.startsWith("http://"));
+{% endhighlight %}
+
+{% highlight java %}
+data.reduce((i1,i2) -> i1 + i2);
+{% endhighlight %}
+
+#### Rich functions
+
+All transformations that require a user-defined function can
+instead take as argument a *rich* function. For example, instead of
+
+{% highlight java %}
+class MyMapFunction implements MapFunction<String, Integer> {
+  public Integer map(String value) { return Integer.parseInt(value); }
+});
+{% endhighlight %}
+
+you can write
+
+{% highlight java %}
+class MyMapFunction extends RichMapFunction<String, Integer> {
+  public Integer map(String value) { return Integer.parseInt(value); }
+});
+{% endhighlight %}
+
+and pass the function as usual to a `map` transformation:
+
+{% highlight java %}
+data.map(new MyMapFunction());
+{% endhighlight %}
+
+Rich functions can also be defined as an anonymous class:
+{% highlight java %}
+data.map (new RichMapFunction<String, Integer>() {
+  public Integer map(String value) { return Integer.parseInt(value); }
+});
+{% endhighlight %}
+
+</div>
+<div data-lang="scala" markdown="1">
+
+
+#### Lambda Functions
+
+As already seen in previous examples all operations accept lambda functions for describing
+the operation:
+{% highlight scala %}
+val data: DataSet[String] = // [...]
+data.filter { _.startsWith("http://") }
+{% endhighlight %}
+
+{% highlight scala %}
+val data: DataSet[Int] = // [...]
+data.reduce { (i1,i2) => i1 + i2 }
+// or
+data.reduce { _ + _ }
+{% endhighlight %}
+
+#### Rich functions
+
+All transformations that take as argument a lambda function can
+instead take as argument a *rich* function. For example, instead of
+
+{% highlight scala %}
+data.map { x => x.toInt }
+{% endhighlight %}
+
+you can write
+
+{% highlight scala %}
+class MyMapFunction extends RichMapFunction[String, Int] {
+  def map(in: String):Int = { in.toInt }
+})
+{% endhighlight %}
+
+and pass the function to a `map` transformation:
+
+{% highlight scala %}
+data.map(new MyMapFunction())
+{% endhighlight %}
+
+Rich functions can also be defined as an anonymous class:
+{% highlight scala %}
+data.map (new RichMapFunction[String, Int] {
+  def map(in: String):Int = { in.toInt }
+})
+{% endhighlight %}
+</div>
+
+</div>
+
+Rich functions provide, in addition to the user-defined function (map,
+reduce, etc), four methods: `open`, `close`, `getRuntimeContext`, and
+`setRuntimeContext`. These are useful for parameterizing the function
+(see [Passing Parameters to Functions]({{ site.baseurl }}/dev/batch/index.html#passing-parameters-to-functions)),
+creating and finalizing local state, accessing broadcast variables (see
+[Broadcast Variables]({{ site.baseurl }}/dev/batch/index.html#broadcast-variables), and for accessing runtime
+information such as accumulators and counters (see
+[Accumulators and Counters](#accumulators--counters), and information
+on iterations (see [Iterations]({{ site.baseurl }}/dev/batch/iterations.html)).
+
+{% top %}
+
+Supported Data Types
+--------------------
+
+Flink places some restrictions on the type of elements that can be in a DataSet or DataStream.
+The reason for this is that the system analyzes the types to determine
+efficient execution strategies.
+
+There are six different categories of data types:
+
+1. **Java Tuples** and **Scala Case Classes**
+2. **Java POJOs**
+3. **Primitive Types**
+4. **Regular Classes**
+5. **Values**
+6. **Hadoop Writables**
+7. **Special Types**
+
+#### Tuples and Case Classes
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+Tuples are composite types that contain a fixed number of fields with various types.
+The Java API provides classes from `Tuple1` up to `Tuple25`. Every field of a tuple
+can be an arbitrary Flink type including further tuples, resulting in nested tuples. Fields of a
+tuple can be accessed directly using the field's name as `tuple.f4`, or using the generic getter method
+`tuple.getField(int position)`. The field indices start at 0. Note that this stands in contrast
+to the Scala tuples, but it is more consistent with Java's general indexing.
+
+{% highlight java %}
+DataStream<Tuple2<String, Integer>> wordCounts = env.fromElements(
+    new Tuple2<String, Integer>("hello", 1),
+    new Tuple2<String, Integer>("world", 2));
+
+wordCounts.map(new MapFunction<Tuple2<String, Integer>, Integer>() {
+    @Override
+    public String map(Tuple2<String, Integer> value) throws Exception {
+        return value.f1;
+    }
+});
+
+wordCounts.keyBy(0); // also valid .keyBy("f0")
+
+
+{% endhighlight %}
+
+</div>
+<div data-lang="scala" markdown="1">
+
+Scala case classes (and Scala tuples which are a special case of case classes), are composite types that contain a fixed number of fields with various types. Tuple fields are addressed by their 1-offset names such as `_1` for the first field. Case class fields are accessed by their name.
+
+{% highlight scala %}
+case class WordCount(word: String, count: Int)
+val input = env.fromElements(
+    WordCount("hello", 1),
+    WordCount("world", 2)) // Case Class Data Set
+
+input.keyBy("word")// key by field expression "word"
+
+val input2 = env.fromElements(("hello", 1), ("world", 2)) // Tuple2 Data Set
+
+input2.keyBy(0, 1) // key by field positions 0 and 1
+{% endhighlight %}
+
+</div>
+</div>
+
+#### POJOs
+
+Java and Scala classes are treated by Flink as a special POJO data type if they fulfill the following requirements:
+
+- The class must be public.
+
+- It must have a public constructor without arguments (default constructor).
+
+- All fields are either public or must be accessible through getter and setter functions. For a field called `foo` the getter and setter methods must be named `getFoo()` and `setFoo()`.
+
+- The type of a field must be supported by Flink. At the moment, Flink uses [Avro](http://avro.apache.org) to serialize arbitrary objects (such as `Date`).
+
+Flink analyzes the structure of POJO types, i.e., it learns about the fields of a POJO. As a result POJO types are easier to use than general types. Moreover, Flink can process POJOs more efficiently than general types.
+
+The following example shows a simple POJO with two public fields.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+public class WordWithCount {
+
+    public String word;
+    public int count;
+
+    public WordWithCount() {}
+
+    public WordWithCount(String word, int count) {
+        this.word = word;
+        this.count = count;
+    }
+}
+
+DataStream<WordWithCount> wordCounts = env.fromElements(
+    new WordWithCount("hello", 1),
+    new WordWithCount("world", 2));
+
+wordCounts.keyBy("word"); // key by field expression "word"
+
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+class WordWithCount(var word: String, var count: Int) {
+    def this() {
+      this(null, -1)
+    }
+}
+
+val input = env.fromElements(
+    new WordWithCount("hello", 1),
+    new WordWithCount("world", 2)) // Case Class Data Set
+
+input.keyBy("word")// key by field expression "word"
+
+{% endhighlight %}
+</div>
+</div>
+
+#### Primitive Types
+
+Flink supports all Java and Scala primitive types such as `Integer`, `String`, and `Double`.
+
+#### General Class Types
+
+Flink supports most Java and Scala classes (API and custom).
+Restrictions apply to classes containing fields that cannot be serialized, like file pointers, I/O streams, or other native
+resources. Classes that follow the Java Beans conventions work well in general.
+
+All classes that are not identified as POJO types (see POJO requirements above) are handled by Flink as general class types.
+Flink treats these data types as black boxes and is not able to access their their content (i.e., for efficient sorting). General types are de/serialized using the serialization framework [Kryo](https://github.com/EsotericSoftware/kryo).
+
+#### Values
+
+*Value* types describe their serialization and deserialization manually. Instead of going through a
+general purpose serialization framework, they provide custom code for those operations by means of
+implementing the `org.apache.flinktypes.Value` interface with the methods `read` and `write`. Using
+a Value type is reasonable when general purpose serialization would be highly inefficient. An
+example would be a data type that implements a sparse vector of elements as an array. Knowing that
+the array is mostly zero, one can use a special encoding for the non-zero elements, while the
+general purpose serialization would simply write all array elements.
+
+The `org.apache.flinktypes.CopyableValue` interface supports manual internal cloning logic in a
+similar way.
+
+Flink comes with pre-defined Value types that correspond to basic data types. (`ByteValue`,
+`ShortValue`, `IntValue`, `LongValue`, `FloatValue`, `DoubleValue`, `StringValue`, `CharValue`,
+`BooleanValue`). These Value types act as mutable variants of the basic data types: Their value can
+be altered, allowing programmers to reuse objects and take pressure off the garbage collector.
+
+
+#### Hadoop Writables
+
+You can use types that implement the `org.apache.hadoop.Writable` interface. The serialization logic
+defined in the `write()`and `readFields()` methods will be used for serialization.
+
+#### Special Types
+
+You can use special types, including Scala's `Either`, `Option`, and `Try`.
+The Java API has its own custom implementation of `Either`.
+Similarly to Scala's `Either`, it represents a value of one two possible types, *Left* or *Right*.
+`Either` can be useful for error handling or operators that need to output two different types of records.
+
+#### Type Erasure & Type Inference
+
+*Note: This Section is only relevant for Java.*
+
+The Java compiler throws away much of the generic type information after compilation. This is
+known as *type erasure* in Java. It means that at runtime, an instance of an object does not know
+its generic type any more. For example, instances of `DataStream<String>` and `DataStream<Long>` look the
+same to the JVM.
+
+Flink requires type information at the time when it prepares the program for execution (when the
+main method of the program is called). The Flink Java API tries to reconstruct the type information
+that was thrown away in various ways and store it explicitly in the data sets and operators. You can
+retrieve the type via `DataStream.getType()`. The method returns an instance of `TypeInformation`,
+which is Flink's internal way of representing types.
+
+The type inference has its limits and needs the "cooperation" of the programmer in some cases.
+Examples for that are methods that create data sets from collections, such as
+`ExecutionEnvironment.fromCollection(),` where you can pass an argument that describes the type. But
+also generic functions like `MapFunction<I, O>` may need extra type information.
+
+The
+{% gh_link /flink-core/src/main/java/org/apache/flink/api/java/typeutils/ResultTypeQueryable.java "ResultTypeQueryable" %}
+interface can be implemented by input formats and functions to tell the API
+explicitly about their return type. The *input types* that the functions are invoked with can
+usually be inferred by the result types of the previous operations.
+
+Execution Configuration
+-----------------------
+
+The `StreamExecutionEnvironment` also contains the `ExecutionConfig` which allows to set job specific configuration values for the runtime.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+ExecutionConfig executionConfig = env.getConfig();
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+var executionConfig = env.getConfig
+{% endhighlight %}
+</div>
+</div>
+
+The following configuration options are available: (the default is bold)
+
+- **`enableClosureCleaner()`** / `disableClosureCleaner()`. The closure cleaner is enabled by default. The closure cleaner removes unneeded references to the surrounding class of anonymous functions inside Flink programs.
+With the closure cleaner disabled, it might happen that an anonymous user function is referencing the surrounding class, which is usually not Serializable. This will lead to exceptions by the serializer.
+
+- `getParallelism()` / `setParallelism(int parallelism)` Set the default parallelism for the job.
+
+- `getNumberOfExecutionRetries()` / `setNumberOfExecutionRetries(int numberOfExecutionRetries)` Sets the number of times that failed tasks are re-executed. A value of zero effectively disables fault tolerance. A value of `-1` indicates that the system default value (as defined in the configuration) should be used.
+
+- `getExecutionRetryDelay()` / `setExecutionRetryDelay(long executionRetryDelay)` Sets the delay in milliseconds that the system waits after a job has failed, before re-executing it. The delay starts after all tasks have been successfully been stopped on the TaskManagers, and once the delay is past, the tasks are re-started. This parameter is useful to delay re-execution in order to let certain time-out related failures surface fully (like broken connections that have not fully timed out), before attempting a re-execution and immediately failing again due to the same problem. This parameter only has an effect if the number of execution re-tries is one or more.
+
+- `getExecutionMode()` / `setExecutionMode()`. The default execution mode is PIPELINED. Sets the execution mode to execute the program. The execution mode defines whether data exchanges are performed in a batch or on a pipelined manner.
+
+- `enableForceKryo()` / **`disableForceKryo`**. Kryo is not forced by default. Forces the GenericTypeInformation to use the Kryo serializer for POJOS even though we could analyze them as a POJO. In some cases this might be preferable. For example, when Flink's internal serializers fail to handle a POJO properly.
+
+- `enableForceAvro()` / **`disableForceAvro()`**. Avro is not forced by default. Forces the Flink AvroTypeInformation to use the Avro serializer instead of Kryo for serializing Avro POJOs.
+
+- `enableObjectReuse()` / **`disableObjectReuse()`** By default, objects are not reused in Flink. Enabling the object reuse mode will instruct the runtime to reuse user objects for better performance. Keep in mind that this can lead to bugs when the user-code function of an operation is not aware of this behavior.
+
+- **`enableSysoutLogging()`** / `disableSysoutLogging()` JobManager status updates are printed to `System.out` by default. This setting allows to disable this behavior.
+
+- `getGlobalJobParameters()` / `setGlobalJobParameters()` This method allows users to set custom objects as a global configuration for the job. Since the `ExecutionConfig` is accessible in all user defined functions, this is an easy method for making configuration globally available in a job.
+
+- `addDefaultKryoSerializer(Class<?> type, Serializer<?> serializer)` Register a Kryo serializer instance for the given `type`.
+
+- `addDefaultKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass)` Register a Kryo serializer class for the given `type`.
+
+- `registerTypeWithKryoSerializer(Class<?> type, Serializer<?> serializer)` Register the given type with Kryo and specify a serializer for it. By registering a type with Kryo, the serialization of the type will be much more efficient.
+
+- `registerKryoType(Class<?> type)` If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags (integer IDs) are written. If a type is not registered with Kryo, its entire class-name will be serialized with every instance, leading to much higher I/O costs.
+
+- `registerPojoType(Class<?> type)` Registers the given type with the serialization stack. If the type is eventually serialized as a POJO, then the type is registered with the POJO serializer. If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags are written. If a type is not registered with Kryo, its entire class-name will be serialized with every instance, leading to much higher I/O costs.
+
+Note that types registered with `registerKryoType()` are not available to Flink's Kryo serializer instance.
+
+- `disableAutoTypeRegistration()` Automatic type registration is enabled by default. The automatic type registration is registering all types (including sub-types) used by usercode with Kryo and the POJO serializer.
+
+- `setTaskCancellationInterval(long interval)` Sets the the interval (in milliseconds) to wait between consecutive attempts to cancel a running task. When a task is canceled a new thread is created which periodically calls `interrupt()` on the task thread, if the task thread does not terminate within a certain time. This parameter refers to the time between consecutive calls to `interrupt()` and is set by default to **30000** milliseconds, or **30 seconds**.
+
+The `RuntimeContext` which is accessible in `Rich*` functions through the `getRuntimeContext()` method also allows to access the `ExecutionConfig` in all user defined functions.
+
+{% top %}
+
+Program Packaging and Distributed Execution
+-----------------------------------------
+
+As described earlier, Flink programs can be executed on
+clusters by using a `remote environment`. Alternatively, programs can be packaged into JAR Files
+(Java Archives) for execution. Packaging the program is a prerequisite to executing them through the
+[command line interface]({{ site.baseurl }}/setup/cli.html).
+
+#### Packaging Programs
+
+To support execution from a packaged JAR file via the command line or web interface, a program must
+use the environment obtained by `StreamExecutionEnvironment.getExecutionEnvironment()`. This environment
+will act as the cluster's environment when the JAR is submitted to the command line or web
+interface. If the Flink program is invoked differently than through these interfaces, the
+environment will act like a local environment.
+
+To package the program, simply export all involved classes as a JAR file. The JAR file's manifest
+must point to the class that contains the program's *entry point* (the class with the public
+`main` method). The simplest way to do this is by putting the *main-class* entry into the
+manifest (such as `main-class: org.apache.flinkexample.MyProgram`). The *main-class* attribute is
+the same one that is used by the Java Virtual Machine to find the main method when executing a JAR
+files through the command `java -jar pathToTheJarFile`. Most IDEs offer to include that attribute
+automatically when exporting JAR files.
+
+
+#### Packaging Programs through Plans
+
+Additionally, we support packaging programs as *Plans*. Instead of defining a progam in the main
+method and calling
+`execute()` on the environment, plan packaging returns the *Program Plan*, which is a description of
+the program's data flow. To do that, the program must implement the
+`org.apache.flink.api.common.Program` interface, defining the `getPlan(String...)` method. The
+strings passed to that method are the command line arguments. The program's plan can be created from
+the environment via the `ExecutionEnvironment#createProgramPlan()` method. When packaging the
+program's plan, the JAR manifest must point to the class implementing the
+`org.apache.flinkapi.common.Program` interface, instead of the class with the main method.
+
+
+#### Summary
+
+The overall procedure to invoke a packaged program is as follows:
+
+1. The JAR's manifest is searched for a *main-class* or *program-class* attribute. If both
+attributes are found, the *program-class* attribute takes precedence over the *main-class*
+attribute. Both the command line and the web interface support a parameter to pass the entry point
+class name manually for cases where the JAR manifest contains neither attribute.
+
+2. If the entry point class implements the `org.apache.flinkapi.common.Program`, then the system
+calls the `getPlan(String...)` method to obtain the program plan to execute.
+
+3. If the entry point class does not implement the `org.apache.flinkapi.common.Program` interface,
+the system will invoke the main method of the class.
+
+{% top %}
+
+Accumulators & Counters
+---------------------------
+
+Accumulators are simple constructs with an **add operation** and a **final accumulated result**,
+which is available after the job ended.
+
+The most straightforward accumulator is a **counter**: You can increment it using the
+```Accumulator.add(V value)``` method. At the end of the job Flink will sum up (merge) all partial
+results and send the result to the client. Accumulators are useful during debugging or if you
+quickly want to find out more about your data.
+
+Flink currently has the following **built-in accumulators**. Each of them implements the
+{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/Accumulator.java "Accumulator" %}
+interface.
+
+- {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/IntCounter.java "__IntCounter__" %},
+  {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/LongCounter.java "__LongCounter__" %}
+  and {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/DoubleCounter.java "__DoubleCounter__" %}:
+  See below for an example using a counter.
+- {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/Histogram.java "__Histogram__" %}:
+  A histogram implementation for a discrete number of bins. Internally it is just a map from Integer
+  to Integer. You can use this to compute distributions of values, e.g. the distribution of
+  words-per-line for a word count program.
+
+__How to use accumulators:__
+
+First you have to create an accumulator object (here a counter) in the user-defined transformation
+function where you want to use it.
+
+{% highlight java %}
+private IntCounter numLines = new IntCounter();
+{% endhighlight %}
+
+Second you have to register the accumulator object, typically in the ```open()``` method of the
+*rich* function. Here you also define the name.
+
+{% highlight java %}
+getRuntimeContext().addAccumulator("num-lines", this.numLines);
+{% endhighlight %}
+
+You can now use the accumulator anywhere in the operator function, including in the ```open()``` and
+```close()``` methods.
+
+{% highlight java %}
+this.numLines.add(1);
+{% endhighlight %}
+
+The overall result will be stored in the ```JobExecutionResult``` object which is
+returned from the `execute()` method of the execution environment
+(currently this only works if the execution waits for the
+completion of the job).
+
+{% highlight java %}
+myJobExecutionResult.getAccumulatorResult("num-lines")
+{% endhighlight %}
+
+All accumulators share a single namespace per job. Thus you can use the same accumulator in
+different operator functions of your job. Flink will internally merge all accumulators with the same
+name.
+
+A note on accumulators and iterations: Currently the result of accumulators is only available after
+the overall job ended. We plan to also make the result of the previous iteration available in the
+next iteration. You can use
+{% gh_link /flink-java/src/main/java/org/apache/flink/api/java/operators/IterativeDataSet.java#L98 "Aggregators" %}
+to compute per-iteration statistics and base the termination of iterations on such statistics.
+
+__Custom accumulators:__
+
+To implement your own accumulator you simply have to write your implementation of the Accumulator
+interface. Feel free to create a pull request if you think your custom accumulator should be shipped
+with Flink.
+
+You have the choice to implement either
+{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/Accumulator.java "Accumulator" %}
+or {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/SimpleAccumulator.java "SimpleAccumulator" %}.
+
+```Accumulator<V,R>``` is most flexible: It defines a type ```V``` for the value to add, and a
+result type ```R``` for the final result. E.g. for a histogram, ```V``` is a number and ```R``` is
+ a histogram. ```SimpleAccumulator``` is for the cases where both types are the same, e.g. for counters.
+
+{% top %}
+
+Parallel Execution
+------------------
+
+This section describes how the parallel execution of programs can be configured in Flink. A Flink
+program consists of multiple tasks (transformations/operators, data sources, and sinks). A task is split into
+several parallel instances for execution and each parallel instance processes a subset of the task's
+input data. The number of parallel instances of a task is called its *parallelism*.
+
+
+The parallelism of a task can be specified in Flink on different levels.
+
+### Operator Level
+
+The parallelism of an individual operator, data source, or data sink can be defined by calling its
+`setParallelism()` method.  For example, like this:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+DataStream<String> text = [...]
+DataStream<Tuple2<String, Integer>> wordCounts = text
+    .flatMap(new LineSplitter())
+    .keyBy(0)
+    .timeWindow(Time.seconds(5))
+    .sum(1).setParallelism(5);
+
+wordCounts.print();
+
+env.execute("Word Count Example");
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+
+val text = [...]
+val wordCounts = text
+    .flatMap{ _.split(" ") map { (_, 1) } }
+    .keyBy(0)
+    .timeWindow(Time.seconds(5))
+    .sum(1).setParallelism(5)
+wordCounts.print()
+
+env.execute("Word Count Example")
+{% endhighlight %}
+</div>
+</div>
+
+### Execution Environment Level
+
+As mentioned [here](#anatomy-of-a-flink-program) Flink programs are executed in the context
+of an execution environment. An
+execution environment defines a default parallelism for all operators, data sources, and data sinks
+it executes. Execution environment parallelism can be overwritten by explicitly configuring the
+parallelism of an operator.
+
+The default parallelism of an execution environment can be specified by calling the
+`setParallelism()` method. To execute all operators, data sources, and data sinks with a parallelism
+of `3`, set the default parallelism of the execution environment as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.setParallelism(3);
+
+DataStream<String> text = [...]
+DataStream<Tuple2<String, Integer>> wordCounts = [...]
+wordCounts.print();
+
+env.execute("Word Count Example");
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+env.setParallelism(3)
+
+val text = [...]
+val wordCounts = text
+    .flatMap{ _.split(" ") map { (_, 1) } }
+    .keyBy(0)
+    .timeWindow(Time.seconds(5))
+    .sum(1)
+wordCounts.print()
+
+env.execute("Word Count Example")
+{% endhighlight %}
+</div>
+</div>
+
+### Client Level
+
+The parallelism can be set at the Client when submitting jobs to Flink. The
+Client can either be a Java or a Scala program. One example of such a Client is
+Flink's Command-line Interface (CLI).
+
+For the CLI client, the parallelism parameter can be specified with `-p`. For
+example:
+
+    ./bin/flink run -p 10 ../examples/*WordCount-java*.jar
+
+
+In a Java/Scala program, the parallelism is set as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+try {
+    PackagedProgram program = new PackagedProgram(file, args);
+    InetSocketAddress jobManagerAddress = RemoteExecutor.getInetFromHostport("localhost:6123");
+    Configuration config = new Configuration();
+
+    Client client = new Client(jobManagerAddress, config, program.getUserCodeClassLoader());
+
+    // set the parallelism to 10 here
+    client.run(program, 10, true);
+
+} catch (ProgramInvocationException e) {
+    e.printStackTrace();
+}
+
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+try {
+    PackagedProgram program = new PackagedProgram(file, args)
+    InetSocketAddress jobManagerAddress = RemoteExecutor.getInetFromHostport("localhost:6123")
+    Configuration config = new Configuration()
+
+    Client client = new Client(jobManagerAddress, new Configuration(), program.getUserCodeClassLoader())
+
+    // set the parallelism to 10 here
+    client.run(program, 10, true)
+
+} catch {
+    case e: Exception => e.printStackTrace
+}
+{% endhighlight %}
+</div>
+</div>
+
+
+### System Level
+
+A system-wide default parallelism for all execution environments can be defined by setting the
+`parallelism.default` property in `./conf/flink-conf.yaml`. See the
+[Configuration]({{ site.baseurl }}/setup/config.html) documentation for details.
+
+{% top %}
+
+Execution Plans
+---------------
+
+Depending on various parameters such as data size or number of machines in the cluster, Flink's
+optimizer automatically chooses an execution strategy for your program. In many cases, it can be
+useful to know how exactly Flink will execute your program.
+
+__Plan Visualization Tool__
+
+Flink comes packaged with a visualization tool for execution plans. The HTML document containing
+the visualizer is located under ```tools/planVisualizer.html```. It takes a JSON representation of
+the job execution plan and visualizes it as a graph with complete annotations of execution
+strategies.
+
+The following code shows how to print the execution plan JSON from your program:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+...
+
+System.out.println(env.getExecutionPlan());
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+...
+
+println(env.getExecutionPlan())
+{% endhighlight %}
+</div>
+</div>
+
+
+To visualize the execution plan, do the following:
+
+1. **Open** ```planVisualizer.html``` with your web browser,
+2. **Paste** the JSON string into the text field, and
+3. **Press** the draw button.
+
+After these steps, a detailed execution plan will be visualized.
+
+<img alt="A flink job execution graph." src="{{ site.baseurl }}/fig/plan_visualizer.png" width="80%">
+
+
+__Web Interface__
+
+Flink offers a web interface for submitting and executing jobs. The interface is part of the JobManager's
+web interface for monitoring, per default running on port 8081. Job submission via this interfaces requires
+that you have set `jobmanager.web.submit.enable: true` in `flink-conf.yaml`.
+
+You may specify program arguments before the job is executed. The plan visualization enables you to show
+the execution plan before executing the Flink job.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/apis.md
----------------------------------------------------------------------
diff --git a/docs/dev/apis.md b/docs/dev/apis.md
new file mode 100644
index 0000000..5e06e14
--- /dev/null
+++ b/docs/dev/apis.md
@@ -0,0 +1,24 @@
+---
+title: "APIs"
+nav-id: apis
+nav-parent_id: dev
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/batch/connectors.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/connectors.md b/docs/dev/batch/connectors.md
new file mode 100644
index 0000000..4e5b009
--- /dev/null
+++ b/docs/dev/batch/connectors.md
@@ -0,0 +1,238 @@
+---
+title:  "Connectors"
+nav-parent_id: batch
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* TOC
+{:toc}
+
+## Reading from file systems
+
+Flink has build-in support for the following file systems:
+
+| Filesystem                            | Scheme       | Notes  |
+| ------------------------------------- |--------------| ------ |
+| Hadoop Distributed File System (HDFS) &nbsp; | `hdfs://`    | All HDFS versions are supported |
+| Amazon S3                             | `s3://`      | Support through Hadoop file system implementation (see below) |
+| MapR file system                      | `maprfs://`  | The user has to manually place the required jar files in the `lib/` dir |
+| Alluxio                               | `alluxio://` &nbsp; | Support through Hadoop file system implementation (see below) |
+
+
+
+### Using Hadoop file system implementations
+
+Apache Flink allows users to use any file system implementing the `org.apache.hadoop.fs.FileSystem`
+interface. There are Hadoop `FileSystem` implementations for
+
+- [S3](https://aws.amazon.com/s3/) (tested)
+- [Google Cloud Storage Connector for Hadoop](https://cloud.google.com/hadoop/google-cloud-storage-connector) (tested)
+- [Alluxio](http://alluxio.org/) (tested)
+- [XtreemFS](http://www.xtreemfs.org/) (tested)
+- FTP via [Hftp](http://hadoop.apache.org/docs/r1.2.1/hftp.html) (not tested)
+- and many more.
+
+In order to use a Hadoop file system with Flink, make sure that
+
+- the `flink-conf.yaml` has set the `fs.hdfs.hadoopconf` property set to the Hadoop configuration directory.
+- the Hadoop configuration (in that directory) has an entry for the required file system. Examples for S3 and Alluxio are shown below.
+- the required classes for using the file system are available in the `lib/` folder of the Flink installation (on all machines running Flink). If putting the files into the directory is not possible, Flink is also respecting the `HADOOP_CLASSPATH` environment variable to add Hadoop jar files to the classpath.
+
+#### Amazon S3
+
+For Amazon S3 support add the following entries into the `core-site.xml` file:
+
+~~~xml
+<!-- configure the file system implementation -->
+<property>
+  <name>fs.s3.impl</name>
+  <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
+</property>
+
+<!-- set your AWS ID -->
+<property>
+  <name>fs.s3.awsAccessKeyId</name>
+  <value>putKeyHere</value>
+</property>
+
+<!-- set your AWS access key -->
+<property>
+  <name>fs.s3.awsSecretAccessKey</name>
+  <value>putSecretHere</value>
+</property>
+~~~
+
+#### Alluxio
+
+For Alluxio support add the following entry into the `core-site.xml` file:
+
+~~~xml
+<property>
+  <name>fs.alluxio.impl</name>
+  <value>alluxio.hadoop.FileSystem</value>
+</property>
+~~~
+
+
+## Connecting to other systems using Input/OutputFormat wrappers for Hadoop
+
+Apache Flink allows users to access many different systems as data sources or sinks.
+The system is designed for very easy extensibility. Similar to Apache Hadoop, Flink has the concept
+of so called `InputFormat`s and `OutputFormat`s.
+
+One implementation of these `InputFormat`s is the `HadoopInputFormat`. This is a wrapper that allows
+users to use all existing Hadoop input formats with Flink.
+
+This section shows some examples for connecting Flink to other systems.
+[Read more about Hadoop compatibility in Flink]({{ site.baseurl }}/dev/batch/hadoop_compatibility.html).
+
+## Avro support in Flink
+
+Flink has extensive build-in support for [Apache Avro](http://avro.apache.org/). This allows to easily read from Avro files with Flink.
+Also, the serialization framework of Flink is able to handle classes generated from Avro schemas.
+
+In order to read data from an Avro file, you have to specify an `AvroInputFormat`.
+
+**Example**:
+
+~~~java
+AvroInputFormat<User> users = new AvroInputFormat<User>(in, User.class);
+DataSet<User> usersDS = env.createInput(users);
+~~~
+
+Note that `User` is a POJO generated by Avro. Flink also allows to perform string-based key selection of these POJOs. For example:
+
+~~~java
+usersDS.groupBy("name")
+~~~
+
+
+Note that using the `GenericData.Record` type is possible with Flink, but not recommended. Since the record contains the full schema, its very data intensive and thus probably slow to use.
+
+Flink's POJO field selection also works with POJOs generated from Avro. However, the usage is only possible if the field types are written correctly to the generated class. If a field is of type `Object` you can not use the field as a join or grouping key.
+Specifying a field in Avro like this `{"name": "type_double_test", "type": "double"},` works fine, however specifying it as a UNION-type with only one field (`{"name": "type_double_test", "type": ["double"]},`) will generate a field of type `Object`. Note that specifying nullable types (`{"name": "type_double_test", "type": ["null", "double"]},`) is possible!
+
+
+
+### Access Microsoft Azure Table Storage
+
+_Note: This example works starting from Flink 0.6-incubating_
+
+This example is using the `HadoopInputFormat` wrapper to use an existing Hadoop input format implementation for accessing [Azure's Table Storage](https://azure.microsoft.com/en-us/documentation/articles/storage-introduction/).
+
+1. Download and compile the `azure-tables-hadoop` project. The input format developed by the project is not yet available in Maven Central, therefore, we have to build the project ourselves.
+Execute the following commands:
+
+   ~~~bash
+   git clone https://github.com/mooso/azure-tables-hadoop.git
+   cd azure-tables-hadoop
+   mvn clean install
+   ~~~
+
+2. Setup a new Flink project using the quickstarts:
+
+   ~~~bash
+   curl https://flink.apache.org/q/quickstart.sh | bash
+   ~~~
+
+3. Add the following dependencies (in the `<dependencies>` section) to your `pom.xml` file:
+
+   ~~~xml
+   <dependency>
+       <groupId>org.apache.flink</groupId>
+       <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
+       <version>{{site.version}}</version>
+   </dependency>
+   <dependency>
+     <groupId>com.microsoft.hadoop</groupId>
+     <artifactId>microsoft-hadoop-azure</artifactId>
+     <version>0.0.4</version>
+   </dependency>
+   ~~~
+
+   `flink-hadoop-compatibility` is a Flink package that provides the Hadoop input format wrappers.
+   `microsoft-hadoop-azure` is adding the project we've build before to our project.
+
+The project is now prepared for starting to code. We recommend to import the project into an IDE, such as Eclipse or IntelliJ. (Import as a Maven project!).
+Browse to the code of the `Job.java` file. Its an empty skeleton for a Flink job.
+
+Paste the following code into it:
+
+~~~java
+import java.util.Map;
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.hadoopcompatibility.mapreduce.HadoopInputFormat;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import com.microsoft.hadoop.azure.AzureTableConfiguration;
+import com.microsoft.hadoop.azure.AzureTableInputFormat;
+import com.microsoft.hadoop.azure.WritableEntity;
+import com.microsoft.windowsazure.storage.table.EntityProperty;
+
+public class AzureTableExample {
+
+  public static void main(String[] args) throws Exception {
+    // set up the execution environment
+    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+    // create a  AzureTableInputFormat, using a Hadoop input format wrapper
+    HadoopInputFormat<Text, WritableEntity> hdIf = new HadoopInputFormat<Text, WritableEntity>(new AzureTableInputFormat(), Text.class, WritableEntity.class, new Job());
+
+    // set the Account URI, something like: https://apacheflink.table.core.windows.net
+    hdIf.getConfiguration().set(AzureTableConfiguration.Keys.ACCOUNT_URI.getKey(), "TODO");
+    // set the secret storage key here
+    hdIf.getConfiguration().set(AzureTableConfiguration.Keys.STORAGE_KEY.getKey(), "TODO");
+    // set the table name here
+    hdIf.getConfiguration().set(AzureTableConfiguration.Keys.TABLE_NAME.getKey(), "TODO");
+
+    DataSet<Tuple2<Text, WritableEntity>> input = env.createInput(hdIf);
+    // a little example how to use the data in a mapper.
+    DataSet<String> fin = input.map(new MapFunction<Tuple2<Text,WritableEntity>, String>() {
+      @Override
+      public String map(Tuple2<Text, WritableEntity> arg0) throws Exception {
+        System.err.println("--------------------------------\nKey = "+arg0.f0);
+        WritableEntity we = arg0.f1;
+
+        for(Map.Entry<String, EntityProperty> prop : we.getProperties().entrySet()) {
+          System.err.println("key="+prop.getKey() + " ; value (asString)="+prop.getValue().getValueAsString());
+        }
+
+        return arg0.f0.toString();
+      }
+    });
+
+    // emit result (this works only locally)
+    fin.print();
+
+    // execute program
+    env.execute("Azure Example");
+  }
+}
+~~~
+
+The example shows how to access an Azure table and turn data into Flink's `DataSet` (more specifically, the type of the set is `DataSet<Tuple2<Text, WritableEntity>>`). With the `DataSet`, you can apply all known transformations to the DataSet.
+
+## Access MongoDB
+
+This [GitHub repository documents how to use MongoDB with Apache Flink (starting from 0.7-incubating)](https://github.com/okkam-it/flink-mongodb-test).


[31/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/event_ingestion_processing_time.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/event_ingestion_processing_time.svg b/docs/concepts/fig/event_ingestion_processing_time.svg
deleted file mode 100644
index fc80d91..0000000
--- a/docs/concepts/fig/event_ingestion_processing_time.svg
+++ /dev/null
@@ -1,375 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="444.25604"
-   height="209.83659"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(-152.87198,-427.44388)"
-     id="layer1">
-    <g
-       transform="translate(113.23391,306.36012)"
-       id="g2989">
-      <path
-         d="m 190.5756,167.00098 38.65338,0 c 5.33571,0 9.66804,3.98537 9.66804,8.90847 0,4.9231 -4.33233,8.90847 -9.66804,8.90847 l -38.65338,0 c -5.3357,0 -9.66803,-3.98537 -9.66803,-8.90847 0,-4.9231 4.33233,-8.90847 9.66803,-8.90847 z"
-         id="path2991"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 229.22898,184.81792 c -5.3357,0 -9.65865,-3.98537 -9.65865,-8.90847 0,-4.9231 4.32295,-8.90847 9.65865,-8.90847"
-         id="path2993"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 190.5756,167.00098 38.65338,0 c 5.33571,0 9.66804,3.98537 9.66804,8.90847 0,4.9231 -4.33233,8.90847 -9.66804,8.90847 l -38.65338,0 c -5.3357,0 -9.66803,-3.98537 -9.66803,-8.90847 0,-4.9231 4.33233,-8.90847 9.66803,-8.90847 z"
-         id="path2995"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 190.5756,239.67533 38.65338,0 c 5.33571,0 9.66804,3.95724 9.66804,8.83345 0,4.87622 -4.33233,8.82408 -9.66804,8.82408 l -38.65338,0 c -5.3357,0 -9.66803,-3.94786 -9.66803,-8.82408 0,-4.87621 4.33233,-8.83345 9.66803,-8.83345 z"
-         id="path2997"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 229.22898,257.33286 c -5.3357,0 -9.65865,-3.94786 -9.65865,-8.82408 0,-4.87621 4.32295,-8.83345 9.65865,-8.83345"
-         id="path2999"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 190.5756,239.67533 38.65338,0 c 5.33571,0 9.66804,3.95724 9.66804,8.83345 0,4.87622 -4.33233,8.82408 -9.66804,8.82408 l -38.65338,0 c -5.3357,0 -9.66803,-3.94786 -9.66803,-8.82408 0,-4.87621 4.33233,-8.83345 9.66803,-8.83345 z"
-         id="path3001"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 80.570072,234.04893 c 0,-1.98799 1.575392,-3.59152 3.516501,-3.59152 1.950486,0 3.516501,1.60353 3.516501,3.59152 0,1.988 -1.566015,3.59152 -3.516501,3.59152 -1.941109,0 -3.516501,-1.60352 -3.516501,-3.59152"
-         id="path3003"
-         style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 80.570072,234.04893 c 0,-1.98799 1.575392,-3.59152 3.516501,-3.59152 1.950486,0 3.516501,1.60353 3.516501,3.59152 0,1.988 -1.566015,3.59152 -3.516501,3.59152 -1.941109,0 -3.516501,-1.60352 -3.516501,-3.59152"
-         id="path3005"
-         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 84.170969,237.64045 0,10.90584"
-         id="path3007"
-         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="M 84.170969,246.54892 70.414417,280.72931"
-         id="path3009"
-         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 84.170969,246.54892 13.756552,34.18039"
-         id="path3011"
-         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 78.694605,259.20833 10.549503,0"
-         id="path3013"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 76.819138,265.77246 15.003737,0"
-         id="path3015"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 74.474804,271.39886 19.223539,0"
-         id="path3017"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 72.13047,277.02526 24.615507,0"
-         id="path3019"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 90.041182,223.33064 c 5.476364,5.23255 5.673288,13.91596 0.440734,19.39233"
-         id="path3021"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 95.751979,223.33064 c 5.476361,5.23255 5.673291,13.91596 0.440735,19.39233"
-         id="path3023"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 101.4534,223.33064 c 5.47636,5.23255 5.67329,13.91596 0.44073,19.39233"
-         id="path3025"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="M 77.250495,242.68546 C 71.774131,237.45291 71.577207,228.76949 76.80976,223.29313"
-         id="path3027"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="M 71.549075,242.68546 C 66.07271,237.45291 65.875786,228.76949 71.10834,223.29313"
-         id="path3029"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="M 65.847654,242.68546 C 60.37129,237.45291 60.174366,228.76949 65.40692,223.29313"
-         id="path3031"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 66.194616,155.41059 c 0,-3.26331 2.644409,-5.9171 5.917099,-5.9171 l 23.649642,0 c 3.263313,0 5.907723,2.65379 5.907723,5.9171 l 0,40.2194 c 0,3.26331 -2.64441,5.90772 -5.907723,5.90772 l -23.649642,0 c -3.27269,0 -5.917099,-2.64441 -5.917099,-5.90772 z"
-         id="path3033"
-         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 70.255002,156.05763 27.35369,0 0,4.37921 -27.35369,0 z"
-         id="path3035"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 70.255002,162.47173 27.35369,0 0,4.52925 -27.35369,0 z"
-         id="path3037"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 70.255002,169.03586 27.35369,0 0,4.52925 -27.35369,0 z"
-         id="path3039"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 70.255002,175.6 27.35369,0 0,4.36983 -27.35369,0 z"
-         id="path3041"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="42.700401"
-         y="136.7509"
-         id="text3043"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event Producer</text>
-      <text
-         x="164.97949"
-         y="136.7509"
-         id="text3045"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Message Queue</text>
-      <path
-         d="m 314.53461,174.8123 c 0,-8.79594 7.14553,-15.94147 15.94147,-15.94147 8.8147,0 15.94147,7.14553 15.94147,15.94147 0,8.8147 -7.12677,15.94147 -15.94147,15.94147 -8.79594,0 -15.94147,-7.12677 -15.94147,-15.94147"
-         id="path3047"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="317.27582"
-         y="129.93326"
-         id="text3049"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Flink</text>
-      <text
-         x="295.8205"
-         y="143.43661"
-         id="text3051"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Data Source</text>
-      <text
-         x="415.83893"
-         y="129.93326"
-         id="text3053"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Flink</text>
-      <text
-         x="379.52988"
-         y="143.43661"
-         id="text3055"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Window Operator</text>
-      <path
-         d="m 314.53461,248.74322 c 0,-8.79594 7.14553,-15.94147 15.94147,-15.94147 8.8147,0 15.94147,7.14553 15.94147,15.94147 0,8.81469 -7.12677,15.94147 -15.94147,15.94147 -8.79594,0 -15.94147,-7.12678 -15.94147,-15.94147"
-         id="path3057"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 408.94563,174.8123 c 0,-8.79594 7.16428,-15.94147 16.01649,-15.94147 8.8522,0 16.01649,7.14553 16.01649,15.94147 0,8.8147 -7.16429,15.94147 -16.01649,15.94147 -8.85221,0 -16.01649,-7.12677 -16.01649,-15.94147"
-         id="path3059"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 408.94563,248.74322 c 0,-8.79594 7.16428,-15.94147 16.01649,-15.94147 8.8522,0 16.01649,7.14553 16.01649,15.94147 0,8.81469 -7.16429,15.94147 -16.01649,15.94147 -8.85221,0 -16.01649,-7.12678 -16.01649,-15.94147"
-         id="path3061"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 432.38897,169.65477 c 2.41935,0 4.36984,0.16879 4.36984,0.37509 l 0,9.4336 c 0,0.18755 -1.95049,0.35634 -4.36984,0.35634"
-         id="path3063"
-         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 418.62304,169.65477 c -2.49437,0 -4.51988,0.16879 -4.51988,0.39384 l 0,9.39609 c 0,0.20631 2.02551,0.3751 4.51988,0.3751"
-         id="path3065"
-         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 432.38897,244.05455 c 2.41935,0 4.36984,0.16879 4.36984,0.37509 l 0,9.41485 c 0,0.2063 -1.95049,0.37509 -4.36984,0.37509"
-         id="path3067"
-         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 418.62304,244.05455 c -2.49437,0 -4.51988,0.16879 -4.51988,0.37509 l 0,9.41485 c 0,0.2063 2.02551,0.37509 4.51988,0.37509"
-         id="path3069"
-         style="fill:none;stroke:#000000;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 323.61187,170.55499 7.50187,0 0,-4.18229 7.50187,8.36458 -7.50187,8.36459 0,-4.1823 -7.50187,0 z"
-         id="path3071"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 323.61187,244.63594 7.50187,0 0,-4.18229 7.50187,8.36459 -7.50187,8.36458 0,-4.18229 -7.50187,0 z"
-         id="path3073"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 117.46051,174.1934 49.70926,0 0,1.24718 -49.70926,0 z m 42.52622,-4.29482 8.42085,4.91372 -8.42085,4.9231 c -0.30007,0.16879 -0.68454,0.075 -0.85334,-0.22505 -0.17817,-0.30008 -0.075,-0.68455 0.22506,-0.85334 l 7.50187,-4.37922 0,1.0784 -7.50187,-4.37922 c -0.30007,-0.16879 -0.40323,-0.55326 -0.22506,-0.85334 0.1688,-0.30007 0.55327,-0.39385 0.85334,-0.22505 z"
-         id="path3075"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 117.46051,247.95552 49.70926,0 0,1.25656 -49.70926,0 z m 42.52622,-4.28544 8.42085,4.91372 -8.42085,4.91373 c -0.30007,0.17817 -0.68454,0.075 -0.85334,-0.22506 -0.17817,-0.2907 -0.075,-0.67517 0.22506,-0.85334 l 7.50187,-4.36983 0,1.07839 -7.50187,-4.37922 c -0.30007,-0.16879 -0.40323,-0.55326 -0.22506,-0.85333 0.1688,-0.30008 0.55327,-0.40323 0.85334,-0.22506 z"
-         id="path3077"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 117.88249,240.8475 52.54122,-48.79028 -0.84396,-0.91898 -52.5506,48.79028 z m 50.19688,-40.7539 2.83196,-9.33983 -9.518,2.13803 c -0.33758,0.075 -0.55326,0.40323 -0.47824,0.74081 0.075,0.33759 0.4126,0.55327 0.75018,0.47825 l 8.47712,-1.9036 -0.74081,-0.7877 -2.51313,8.30832 c -0.10315,0.33759 0.0844,0.68455 0.4126,0.77832 0.32821,0.10315 0.67517,-0.0844 0.77832,-0.4126 z"
-         id="path3079"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 252.17532,174.1934 49.71864,0 0,1.25656 -49.71864,0 z m 42.5356,-4.29482 8.42085,4.91372 -8.42085,4.93248 c -0.30007,0.16879 -0.67517,0.075 -0.86272,-0.22506 -0.16879,-0.30007 -0.075,-0.69392 0.22506,-0.86271 l 7.50187,-4.36984 0,1.06902 -7.50187,-4.36984 c -0.30007,-0.1688 -0.39385,-0.56264 -0.22506,-0.86272 0.18755,-0.30007 0.56265,-0.39385 0.86272,-0.22505 z"
-         id="path3081"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 252.17532,248.59318 49.71864,0 0,1.23781 -49.71864,0 z m 42.5356,-4.29482 8.42085,4.91372 -8.42085,4.91373 c -0.30007,0.18755 -0.67517,0.075 -0.86272,-0.22506 -0.16879,-0.30007 -0.075,-0.67517 0.22506,-0.84396 l 7.50187,-4.38859 0,1.08777 -7.50187,-4.36984 c -0.30007,-0.18754 -0.39385,-0.56264 -0.22506,-0.86271 0.18755,-0.30008 0.56265,-0.39385 0.86272,-0.22506 z"
-         id="path3083"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="189.89742"
-         y="177.54382"
-         id="text3085"
-         xml:space="preserve"
-         style="font-size:4.95123339px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">partition 1</text>
-      <text
-         x="189.89742"
-         y="251.09189"
-         id="text3087"
-         xml:space="preserve"
-         style="font-size:4.95123339px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">partition 2</text>
-      <path
-         d="m 352.51282,174.1934 49.71864,0 0,1.25656 -49.71864,0 z m 42.5356,-4.29482 8.42084,4.91372 -8.42084,4.93248 c -0.30008,0.16879 -0.67517,0.075 -0.86272,-0.22506 -0.16879,-0.30007 -0.075,-0.69392 0.22506,-0.86271 l 7.50187,-4.36984 0,1.06902 -7.50187,-4.36984 c -0.30008,-0.1688 -0.39385,-0.56264 -0.22506,-0.86272 0.18755,-0.30007 0.56264,-0.39385 0.86272,-0.22505 z"
-         id="path3089"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 352.51282,248.59318 49.71864,0 0,1.23781 -49.71864,0 z m 42.5356,-4.29482 8.42084,4.91372 -8.42084,4.91373 c -0.30008,0.18755 -0.67517,0.075 -0.86272,-0.22506 -0.16879,-0.30007 -0.075,-0.67517 0.22506,-0.84396 l 7.50187,-4.38859 0,1.08777 -7.50187,-4.36984 c -0.30008,-0.18754 -0.39385,-0.56264 -0.22506,-0.86271 0.18755,-0.30008 0.56264,-0.39385 0.86272,-0.22506 z"
-         id="path3091"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 352.98169,177.68176 55.6076,58.62711 -0.90022,0.86272 -55.62636,-58.64586 z m 53.76964,50.46883 2.23181,9.48986 -9.35858,-2.73818 c -0.33759,-0.0938 -0.52513,-0.43136 -0.43136,-0.76894 0.0938,-0.33759 0.45011,-0.52513 0.7877,-0.43136 l 8.32707,2.43811 -0.7877,0.75019 -1.98799,-8.45836 c -0.075,-0.33759 0.13128,-0.67517 0.46887,-0.75019 0.33758,-0.0938 0.67516,0.13128 0.75018,0.46887 z"
-         id="path3093"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 352.98169,246.90526 52.5881,-54.81991 -0.90023,-0.86271 -52.60685,54.80115 z m 50.71263,-46.66162 2.28807,-9.48987 -9.39609,2.68192 c -0.31883,0.0938 -0.52513,0.45011 -0.43136,0.76894 0.0938,0.33759 0.45011,0.52513 0.7877,0.43136 l 8.34583,-2.38184 -0.7877,-0.75019 -2.0255,8.4396 c -0.0938,0.33759 0.11252,0.67517 0.45011,0.76894 0.33758,0.075 0.67517,-0.13128 0.76894,-0.46886 z"
-         id="path3095"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 40.566356,302.98173 c 0,-5.0075 4.069764,-9.07726 9.058507,-9.07726 5.007497,0 9.077261,4.06976 9.077261,9.07726 0,5.0075 -4.069764,9.05851 -9.077261,9.05851 -4.988743,0 -9.058507,-4.05101 -9.058507,-9.05851"
-         id="path3097"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 49.474825,303.61001 0,-7.67066"
-         id="path3099"
-         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 52.869421,306.0575 -3.394596,-2.45687"
-         id="path3101"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 417.38523,302.98173 c 0,-5.0075 4.08852,-9.07726 9.13353,-9.07726 5.06376,0 9.15228,4.06976 9.15228,9.07726 0,5.0075 -4.08852,9.05851 -9.15228,9.05851 -5.04501,0 -9.13353,-4.05101 -9.13353,-9.05851"
-         id="path3103"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 426.44374,303.61939 0,-7.67066"
-         id="path3105"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 429.83833,306.0575 -3.39459,-2.45687"
-         id="path3107"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 297.35533,302.98173 c 0,-5.0075 4.05101,-9.07726 9.05851,-9.07726 5.00749,0 9.0585,4.06976 9.0585,9.07726 0,5.0075 -4.05101,9.05851 -9.0585,9.05851 -5.0075,0 -9.05851,-4.05101 -9.05851,-9.05851"
-         id="path3109"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 306.2638,303.61939 0,-7.67066"
-         id="path3111"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 309.65839,306.0575 -3.39459,-2.45687"
-         id="path3113"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="67.701958"
-         y="308.1178"
-         id="text3115"
-         xml:space="preserve"
-         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event</text>
-      <text
-         x="69.502403"
-         y="318.62042"
-         id="text3117"
-         xml:space="preserve"
-         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Time</text>
-      <text
-         x="319.48459"
-         y="308.11505"
-         id="text3119"
-         xml:space="preserve"
-         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Ingestion</text>
-      <text
-         x="329.23703"
-         y="318.61768"
-         id="text3121"
-         xml:space="preserve"
-         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Time</text>
-      <text
-         x="443.61313"
-         y="309.50467"
-         id="text3123"
-         xml:space="preserve"
-         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Window</text>
-      <text
-         x="437.46158"
-         y="320.00726"
-         id="text3125"
-         xml:space="preserve"
-         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Processing</text>
-      <text
-         x="449.9147"
-         y="330.50989"
-         id="text3127"
-         xml:space="preserve"
-         style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Time</text>
-      <path
-         d="m 307.35157,271.39886 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49
 438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.2
 5656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.4066 1.08777,0 0,1.25656 -0.46887,0 0.63766,-0.6189 0,0.76894 -1.25656,0 z m 2.34433,-1.4066 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.2
 5656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,
 0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313
 ,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 
 2.51313,0 0.93773,0 0,1.57539 -1.25656,0 0,-0.93773 0.61891,0.6189 -0.30008,0 0,-1.25656 z m 0.93773,2.8132 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.513
 13 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.2378 -1.25656,0 0,-1.2378 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 
 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.72543 -0.78769,0 0,-1.25656 0.15004,0 -0.61891,0.6189 0,-1.08777 1.25656,0 z m -2.04425,1.72543 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m
  -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.
 25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2
 .49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1
 .25656,0 0,1.25656 z m -2.49437,0 -0.63766,0 0,-1.25656 0.63766,0 0,1.25656 z"
-         id="path3129"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 70.977057,294.514 12.753177,-25.4876 -1.115903,-0.56264 -12.753177,25.49697 z m 14.43172,-24.65302 c 0.618905,-1.22843 0.112529,-2.72881 -1.115903,-3.34771 -1.237808,-0.6189 -2.738182,-0.12191 -3.357086,1.1159 -0.618904,1.23781 -0.121905,2.73819 1.115903,3.35709 1.237809,0.6189 2.738182,0.11253 3.357086,-1.12528 z"
-         id="path3131"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 325.78741,293.19179 6.37659,-31.97671 -1.23781,-0.24382 -6.37659,31.97672 z m 8.21455,-31.60162 c 0.26256,-1.36909 -0.61891,-2.68192 -1.96924,-2.94448 -1.35034,-0.28132 -2.66317,0.60014 -2.94449,1.95048 -0.26256,1.36909 0.61891,2.68192 1.96924,2.94448 1.35034,0.28132 2.68192,-0.60014 2.94449,-1.95048 z"
-         id="path3133"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 440.43472,294.07326 -14.57238,-36.49659 1.16279,-0.46887 14.57238,36.49659 z m -16.31656,-35.80267 c -0.50638,-1.27532 0.11253,-2.73818 1.4066,-3.24456 1.27532,-0.52513 2.73818,0.11253 3.24456,1.38785 0.50637,1.27532 -0.11253,2.73818 -1.38785,3.24456 -1.29407,0.52513 -2.73818,-0.11253 -3.26331,-1.38785 z"
-         id="path3135"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-    </g>
-  </g>
-</svg>


[30/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/parallel_dataflow.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/parallel_dataflow.svg b/docs/concepts/fig/parallel_dataflow.svg
deleted file mode 100644
index 3a699a9..0000000
--- a/docs/concepts/fig/parallel_dataflow.svg
+++ /dev/null
@@ -1,487 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="657.83496"
-   height="439.34708"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(-218.47648,-86.629238)"
-     id="layer1">
-    <g
-       transform="translate(65.132093,66.963871)"
-       id="g2989">
-      <g
-         transform="translate(149.87814,1.1341165)"
-         id="g3265">
-        <path
-           d="m 9.930599,62.678115 c 0,-17.648147 14.309815,-31.957962 31.957961,-31.957962 17.657524,0 31.967339,14.309815 31.967339,31.957962 0,17.657524 -14.309815,31.957961 -31.967339,31.957961 -17.648146,0 -31.957961,-14.300437 -31.957961,-31.957961"
-           id="path3267"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="24.468645"
-           y="66.67173"
-           id="text3269"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-        <path
-           d="m 147.93685,62.678115 c 0,-17.648147 14.34733,-31.957962 32.03298,-31.957962 17.69504,0 32.04236,14.309815 32.04236,31.957962 0,17.657524 -14.34732,31.957961 -32.04236,31.957961 -17.68565,0 -32.03298,-14.300437 -32.03298,-31.957961"
-           id="path3271"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="164.556"
-           y="66.67173"
-           id="text3273"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
-        <path
-           d="m 81.66722,58.533332 50.16875,0 0,-4.219801 8.4396,8.439602 -8.4396,8.439603 0,-4.219801 -50.16875,0 z"
-           id="path3275"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 219.67348,58.533332 50.16874,0 0,-4.219801 8.43961,8.439602 -8.43961,8.439603 0,-4.219801 -50.16874,0 z"
-           id="path3277"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 285.93373,62.678115 c 0,-17.648147 14.34733,-31.957962 32.05174,-31.957962 17.68565,0 32.03298,14.309815 32.03298,31.957962 0,17.648146 -14.34733,31.957961 -32.03298,31.957961 -17.70441,0 -32.05174,-14.309815 -32.05174,-31.957961"
-           id="path3279"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="295.73941"
-           y="54.668739"
-           id="text3281"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-        <text
-           x="326.64713"
-           y="54.668739"
-           id="text3283"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-        <text
-           x="292.28857"
-           y="66.67173"
-           id="text3285"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-        <text
-           x="299.79044"
-           y="78.674713"
-           id="text3287"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
-        <path
-           d="m 357.67035,58.533332 50.16875,0 0,-4.219801 8.43961,8.439602 -8.43961,8.439603 0,-4.219801 -50.16875,0 z"
-           id="path3289"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 423.94937,62.678115 c 0,-17.648147 14.34732,-31.957962 32.03298,-31.957962 17.70441,0 32.03298,14.309815 32.03298,31.957962 0,17.648146 -14.32857,31.957961 -32.03298,31.957961 -17.68566,0 -32.03298,-14.309815 -32.03298,-31.957961"
-           id="path3291"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="444.97049"
-           y="66.67173"
-           id="text3293"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
-        <text
-           x="21.30452"
-           y="299.24048"
-           id="text3295"
-           xml:space="preserve"
-           style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Operator</text>
-        <text
-           x="23.104969"
-           y="309.74313"
-           id="text3297"
-           xml:space="preserve"
-           style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Subtask</text>
-        <path
-           d="m 41.991711,290.75368 -0.825205,-10.71829 1.247185,-0.0938 0.825206,10.71829 -1.247186,0.0938 z m -2.597522,-9.33045 2.109901,-5.17628 2.878842,4.79181 -4.988743,0.38447 z"
-           id="path3299"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 225.60933,152.70054 17.00111,0 0,-16.33532 33.99284,0 0,16.33532 16.99174,0 -33.99285,16.33532 z"
-           id="path3301"
-           style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 9.930599,241.5508 c 0,-17.69503 14.309815,-32.04236 31.957961,-32.04236 17.657524,0 31.967339,14.34733 31.967339,32.04236 0,17.69503 -14.309815,32.04236 -31.967339,32.04236 -17.648146,0 -31.957961,-14.34733 -31.957961,-32.04236"
-           id="path3303"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="24.468645"
-           y="239.48763"
-           id="text3305"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-        <text
-           x="34.221073"
-           y="251.49062"
-           id="text3307"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-        <path
-           d="m 147.93685,241.5508 c 0,-17.69503 14.34733,-32.04236 32.03298,-32.04236 17.69504,0 32.04236,14.34733 32.04236,32.04236 0,17.69503 -14.34732,32.04236 -32.04236,32.04236 -17.68565,0 -32.03298,-14.34733 -32.03298,-32.04236"
-           id="path3309"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="164.556"
-           y="239.48763"
-           id="text3311"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map</text>
-        <text
-           x="186.6115"
-           y="239.48763"
-           id="text3313"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()</text>
-        <text
-           x="172.35796"
-           y="251.49062"
-           id="text3315"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-        <path
-           d="m 81.66722,237.331 50.16875,0 0,-4.2198 8.4396,8.4396 -8.4396,8.4396 0,-4.2198 -50.16875,0 z"
-           id="path3317"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 219.67348,237.331 50.16874,0 0,-4.2198 8.43961,8.4396 -8.43961,8.4396 0,-4.2198 -50.16874,0 z"
-           id="path3319"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 285.93373,241.56018 c 0,-17.70441 14.34733,-32.05174 32.05174,-32.05174 17.68565,0 32.03298,14.34733 32.03298,32.05174 0,17.68565 -14.34733,32.03298 -32.03298,32.03298 -17.70441,0 -32.05174,-14.34733 -32.05174,-32.03298"
-           id="path3321"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="295.73941"
-           y="227.48463"
-           id="text3323"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-        <text
-           x="326.64713"
-           y="227.48463"
-           id="text3325"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-        <text
-           x="292.28857"
-           y="239.48763"
-           id="text3327"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-        <text
-           x="299.79044"
-           y="251.49062"
-           id="text3329"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply</text>
-        <text
-           x="327.09723"
-           y="251.49062"
-           id="text3331"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()</text>
-        <text
-           x="310.29306"
-           y="263.49359"
-           id="text3333"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-        <path
-           d="m 361.70261,245.31111 45.1425,22.03674 1.85671,-3.78844 3.88222,11.29031 -11.29032,3.88222 1.85671,-3.78844 -45.16125,-22.03674 z"
-           id="path3335"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 418.79183,298.04925 c 0,-17.64815 14.34733,-31.95796 32.03298,-31.95796 17.70441,0 32.03298,14.30981 32.03298,31.95796 0,17.64815 -14.32857,31.95796 -32.03298,31.95796 -17.68565,0 -32.03298,-14.30981 -32.03298,-31.95796"
-           id="path3337"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="439.83328"
-           y="296.00317"
-           id="text3339"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
-        <text
-           x="443.13412"
-           y="308.00616"
-           id="text3341"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-        <path
-           d="m 9.9399763,370.32976 c 0,-17.68566 14.3098147,-32.03298 31.9579617,-32.03298 17.648146,0 31.957961,14.34732 31.957961,32.03298 0,17.70441 -14.309815,32.05173 -31.957961,32.05173 -17.648147,0 -31.9579617,-14.34732 -31.9579617,-32.05173"
-           id="path3343"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="24.468645"
-           y="368.29453"
-           id="text3345"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-        <text
-           x="34.221073"
-           y="380.29749"
-           id="text3347"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-        <path
-           d="m 147.93685,370.32976 c 0,-17.68566 14.34733,-32.03298 32.03298,-32.03298 17.70442,0 32.05174,14.34732 32.05174,32.03298 0,17.70441 -14.34732,32.05173 -32.05174,32.05173 -17.68565,0 -32.03298,-14.34732 -32.03298,-32.05173"
-           id="path3349"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="164.556"
-           y="368.29453"
-           id="text3351"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map</text>
-        <text
-           x="186.6115"
-           y="368.29453"
-           id="text3353"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()</text>
-        <text
-           x="172.35796"
-           y="380.29749"
-           id="text3355"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-        <path
-           d="m 81.676598,366.10996 50.168752,0 0,-4.2198 8.4396,8.4396 -8.4396,8.4396 0,-4.2198 -50.168752,0 z"
-           id="path3357"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 219.67348,366.10996 50.16874,0 0,-4.2198 8.43961,8.4396 -8.43961,8.4396 0,-4.2198 -50.16874,0 z"
-           id="path3359"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 285.93373,370.32976 c 0,-17.68566 14.34733,-32.03298 32.05174,-32.03298 17.68565,0 32.03298,14.34732 32.03298,32.03298 0,17.70441 -14.34733,32.05173 -32.03298,32.05173 -17.70441,0 -32.05174,-14.34732 -32.05174,-32.05173"
-           id="path3361"
-           style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="295.73941"
-           y="356.29153"
-           id="text3363"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-        <text
-           x="326.64713"
-           y="356.29153"
-           id="text3365"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-        <text
-           x="292.28857"
-           y="368.29453"
-           id="text3367"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-        <text
-           x="299.79044"
-           y="380.29749"
-           id="text3369"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply</text>
-        <text
-           x="327.09723"
-           y="380.29749"
-           id="text3371"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()</text>
-        <text
-           x="310.29306"
-           y="392.30048"
-           id="text3373"
-           xml:space="preserve"
-           style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-        <path
-           d="m 361.70261,366.54131 45.1425,-22.03674 1.85671,3.78845 3.88222,-11.29031 -11.29032,-3.88222 1.85671,3.78844 -45.16125,22.03674 z"
-           id="path3375"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <path
-           d="m 220.66747,351.51882 62.3968,-79.33226 3.31958,2.6069 -1.42536,-11.85295 -11.8342,1.42535 3.30082,2.6069 -62.39679,79.33226 z"
-           id="path3377"
-           style="fill:#bfbfbf;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-        <text
-           x="97.286781"
-           y="299.24048"
-           id="text3379"
-           xml:space="preserve"
-           style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Stream</text>
-        <text
-           x="94.886185"
-           y="309.74313"
-           id="text3381"
-           xml:space="preserve"
-           style="font-size:8.70216751px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Partition</text>
-        <path
-           d="m 43.19201,317.87294 -1.68792,10.37133 1.219053,0.20631 1.706676,-10.37134 -1.237809,-0.2063 z m -3.338331,8.83345 1.650411,5.32633 3.282067,-4.51988 -4.932478,-0.80645 z"
-           id="path3383"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 112.5843,285.92436 -2.08177,-32.93321 1.25656,-0.075 2.07239,32.93321 -1.24718,0.075 z m -3.87284,-31.56412 2.18492,-5.14816 2.8132,4.82933 -4.99812,0.31883 z"
-           id="path3385"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 112.69683,316.20377 -3.20705,37.99697 1.25656,0.11253 3.20705,-37.99697 -1.25656,-0.11253 z m -4.96999,36.60912 2.08177,5.17629 2.90697,-4.76368 -4.98874,-0.41261 z"
-           id="path3387"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 3.5258784,397.7866 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,
 0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.2
 56563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656
  1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1
 .25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0.018755,-2.53188 0.056264,-0.8252 0.075019,-0.46887 1.2190537,0.18755 -0.056264,0.43136 0,-0.0563 -0.037509,0.80645 -1.256563,-0.075 z m 0.3750934,-2.58814 0.1687921,-0.67517 0.2063013,-0.58139 1.1815444,0.43135 -0.2063014,0.54389 0.018755,-0.0563 -0.1687921,0.65641 -1.200299,-0.31883 z m 0.9002243,-2.45686 0.225056,-0.48762 0.3938482,-0.65642 1.0690163,0.65642 -0.3750935,0.6189 0.018755,-0.0563 -0.2063014,0.46886 -1.1252803,-0.54388 z m 1.3878457,-2.21305 0.1875467,-0.26257 0.6564136,-0.73143 0.9377336,0.84396 -0.6376589,0.69392 0.037509,-0.0375 -0.1875468,0.24381 -0.9939976,-0.75018 z m 1.8004486,-1.89423 0.093773,-0.075 0.9564883,-0.73144 0.037509,-0
 .0187 0.6564136,1.06901 -0.018755,0 0.056264,-0.0188 -0.9189789,0.67517 0.037509,-0.0375 -0.056264,0.0563 -0.8439602,-0.91898 z m 2.2318056,-1.50037 0.975243,-0.45011 0.206301,-0.075 0.431358,1.16279 -0.187547,0.075 0.05626,-0.0188 -0.956488,0.45012 -0.525131,-1.14404 z m 2.419353,-0.95649 0.900224,-0.22505 0.375093,-0.0563 0.187547,1.23781 -0.356339,0.0375 0.07502,0 -0.862715,0.2063 -0.318829,-1.2003 z m 2.588145,-0.43136 0.825205,-0.0563 0.450112,0 0,1.25656 -0.431357,0 0.03751,0 -0.825205,0.0375 -0.05626,-1.23781 z m 2.53188,-0.0563 1.237809,0 0,1.25656 -1.237809,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237809,0 0,1.25656 -1.237809,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237808,0 0,1.25656 -1.237808,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.2565
 6 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237808,0 0,1.25656 -1.237808,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237808,0 0,1.25656 -1.237808,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237808,0 0,1.25656 -1.237808,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.031507,0 0.24381,0.0188 -0.05626,1.2378 -0.225056,0 0.01875,0 -1.012752,0 0,-1.25656 z m 2.588144,0.11253 0.956489,0.15004 0.337584,0.075 -0.31883,1.21906 -0.300074,-0.075 0.05626,0 -0.918979,-0.13128 0.187546,-1.23781 z m 2.550636,0.60015 0.768941,0.28132 0.450113,0.2063 -0.543886,1.12528 -0.412603,-0.18755 0.03751,0.0188 -0.731433,-0.28132 0.431358,-1.16279 z m 2.363089,1.10652 0.543
 885,0.31883 0.543886,0.41261 -0.750187,1.01275 -0.525131,-0.39385 0.05626,0.0188 -0.506376,-0.30007 0.637659,-1.06902 z m 2.081768,1.5754 0.31883,0.28132 0.600149,0.67516 -0.937733,0.82521 -0.581395,-0.63766 0.05626,0.0375 -0.300075,-0.26256 0.84396,-0.91898 z m 1.72543,1.96924 0.131283,0.16879 0.56264,0.93773 -1.069016,0.65642 -0.562641,-0.91898 0.03751,0.0563 -0.09377,-0.15004 0.993998,-0.75018 z m 1.294072,2.34433 0.412603,1.12528 0.03751,0.11253 -1.219054,0.31883 -0.01875,-0.0938 0.01875,0.0563 -0.412603,-1.08777 1.181544,-0.43136 z m 0.750187,2.51313 0.150038,1.05026 0.01875,0.24381 -1.256563,0.075 0,-0.22506 0,0.0563 -0.150037,-1.01276 1.237808,-0.18754 z m 0.225056,2.58814 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.25656
 3,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49438 0,1.25656 -1.256563,0 0,-1.25656 
 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49438 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 
 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49438 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49438 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51312 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0
 ,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25657 -1.256563,0 0,-1.25657 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.23781 -1.256563,0 0,-1.23781 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.49437 0,1.25656 -1.256563,0 0,-1.25656 1.256563,0 z m 0,2.51313 0,1.2378 -1.256563,0 0,-1.2378 1.256563,0 z m -0.03751,2.53188 -0.03751,0.58139 -0.112528,0.71268 -1.237809,-0.18755 0.112528,-0.6
 9392 0,0.075 0.01875,-0.56264 1.256563,0.075 z m -0.412603,2.58814 -0.112528,0.43136 -0.300074,0.80645 -1.16279,-0.43136 0.28132,-0.76894 -0.01875,0.0563 0.09377,-0.4126 1.219053,0.31883 z m -0.937733,2.43811 -0.131283,0.26256 -0.525131,0.86272 -1.069016,-0.63766 0.506376,-0.84396 -0.03751,0.0375 0.131282,-0.22506 1.125281,0.54389 z m -1.44411,2.19429 -0.03751,0.0563 -0.806451,0.90022 -0.05626,0.0563 -0.843961,-0.91898 0.03751,-0.0375 -0.05626,0.0375 0.768941,-0.84396 -0.03751,0.0375 0.03751,-0.0375 0.993998,0.75018 z m -1.894222,1.87547 -0.806451,0.61891 -0.262565,0.15003 -0.637659,-1.06901 0.225056,-0.15004 -0.05626,0.0375 0.787696,-0.5814 0.750187,0.994 z m -2.194297,1.4066 -0.750187,0.35634 -0.450112,0.16879 -0.431357,-1.16279 0.412603,-0.16879 -0.03751,0.0188 0.712678,-0.33759 0.543885,1.12528 z m -2.456862,0.91898 -0.656413,0.16879 -0.637659,0.0938 -0.168792,-1.23781 0.581395,-0.0938 -0.05626,0.0188 0.637659,-0.15004 0.300074,1.2003 z m -2.588144,0.39385 -0.581395,0.0375 -0.69
 3923,0 0,-1.25656 0.675168,0 -0.01875,0 0.543886,-0.0188 0.07502,1.23781 z m -2.531881,0.0375 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494372,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494372,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513127,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237809,0 0,-1.25656 1.237809,0 0,1.25656 z m -2.494372,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237809,0 0,-1.25656 1.237809,0 0,1.25656 z m -2.494372,0 -1.
 256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237809,0 0,-1.25656 1.237809,0 0,1.25656 z m -2.494372,0 -0.787696,0 -0.487621,-0.0375 0.05626,-1.23781 0.487621,0.0188 -0.03751,0 0.768941,0 0,1.25656 z m -2.588144,-0.15004 -0.712678,-0.11253 -0.581395,-0.15003 0.31883,-1.2003 0.543885,0.13128 -0.07502,-0.0188 0.693923,0.11252 -0.187546,1.23781 z m -2.550636,-0.63766 -0.506376,-0.2063 -0.675168,-0.31883 0.525131,-1.12528 0.656413,0.30008 -0.05626,-0.0188 0.506376,0.18755 -0.450112,1.18154 z m -2.3443336,-1.16279 -0.3188294,-0.18754 -0.7501869,-0.56264 0.7501869,-1.01276 0.7314322,0.54389 -0.056264,-0.0375 0.3000748,0.18755 -0.6564136,1.06901 z m -2.0442592,-1.6129 -0.1312828,-0.11253 -0.7689415,-0.84396 0.9377336,-0.84396 0.7501869,0.82521 -0.056264,-0.0375 0.112528,0.0938 -0.8439602,0.91898 z m -1.7066752,-2.06301 -0.5813949,-0.93774 -0.093773,-0.18754 1.1252803,-0.54389 0.075019,0.15004 -0.018755,-0.0375 0.5626
 402,0.91898 -1.0690163,0.63766 z m -1.2190537,-2.32558 -0.3188294,-0.88147 -0.093773,-0.35634 1.2190537,-0.30008 0.075019,0.31883 -0.018755,-0.0563 0.3188294,0.84396 -1.1815443,0.43136 z m -0.6939229,-2.51313 -0.112528,-0.80645 -0.037509,-0.50638 1.2565631,-0.0563 0.018755,0.46887 0,-0.075 0.1125281,0.7877 -1.2378084,0.18754 z"
-           id="path3389"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 78.075701,399.02441 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563
 ,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.25
 6563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.
 256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49438 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51312 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25657 1.256563,0 0,1.25657 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.23781 1.256563,0 0,1.23781 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25
 656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.51313 0,-1.2378 1.256563,0 0,1.2378 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0,-2.49437 0,-1.25656 1.256563,0 0,1.25656 -1.256563,0 z m 0.01875,-2.53188 0.03751,-0.67517 0.09377,-0.63766 1.237808,0.18755 -0.09377,0.60015 0.01875,-0.0563 -0.03751,0.63766 -1.256563,-0.0563 z m 0.412603,-2.58814 0.07502,-0.30008 0.356339,-0.95649 1.181544,0.45012 -0.356339,0.91897 0.01875,-0.0563 -0.05626,0.24381 -1.219054,-0.30007 z m 1.031507,-2.47562 0.468867,-0.7877 0.243811,-0.31883 0.993997,0.75019 -0.206301,0.30008 0.01875,-0.0563 -0.450113,0.76894 -1.069016,-0.65641 z m 1.537883,-2.11928 0.31883,-0.33758 0.637658,-0.56264 0.825206,0.91898 -0.600149,0.54388 0.05626,-0.0375 -0.300075,0.31883 -0.937734,-0.84396 z m 2.025505,-1.74418 0.918979,-0.54
 389 0.225056,-0.11253 0.543885,1.12528 -0.206301,0.11253 0.05626,-0.0375 -0.88147,0.52513 -0.656413,-1.06901 z m 2.344334,-1.18155 0.600149,-0.22505 0.637659,-0.1688 0.31883,1.21906 -0.618905,0.15004 0.07502,-0.0188 -0.581395,0.22506 -0.431357,-1.18155 z m 2.531881,-0.63766 0.28132,-0.0375 1.031507,-0.0563 0.05626,1.25656 -0.993998,0.0563 0.05626,-0.0188 -0.243811,0.0375 -0.187546,-1.23781 z m 2.588144,-0.0938 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513126,0 1.237809,0 0,1.25656 -1.237809,0 0,-1.25656 z m 2.494372,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.494371,0 1.256563,0 0,1.25656 -1.256563,0 0,-1.25656 z m 2.513133,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.
 25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.53188,0.0375 0.28132,0.0188 1.01276,0.15004 -0.18755,1.23781 -0.97524,-0.15004 0.0563,0.0188 -0.26256,-0.0188 0.075,-1.25656 z m 2.62566,0.52513 0.90022,0.33758 0.31883,0.15004 -0.54388,1.12528 -0.28132,-0.13128 0.0375,0.0187 -0.86272,-0.33758 0.43136,-1.16279 z m 2.36309,1.10653 0.46886,0.26256 0.61891,0.46887 -0.75019,1.01275 -0.60015,-0.45011 0.0563,0.0188 -0.43136,-0.24381 0.63766,-1.06901 z m 2.08177,1.59414 0.0563,0.0563 0.73143,0.80645 0.11253,0.15004 -0.994,0.75018 -0.11253,-0.13128 0.0375,0.0375 -0.67517,-0.75019 0.0375,0.0375 -0.0375,-0.0375 0.84397,-0.91898 z m 1.65041,2.10053 0.35634,0.56264 0.28132,0.58139 -
 1.14404,0.54389 -0.26257,-0.56264 0.0375,0.0563 -0.33758,-0.54388 1.06902,-0.63766 z m 1.12528,2.34433 0.0938,0.24381 0.26257,1.03151 -1.21906,0.30007 -0.24381,-0.97524 0.0188,0.0563 -0.0938,-0.2063 1.18155,-0.45011 z m 0.54388,2.62565 0.0563,0.97525 0,0.31883 -1.25657,0 0,-0.30008 0,0.0375 -0.0375,-0.95649 1.23781,-0.075 z m 0.0563,2.53189 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.2565
 7,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49438 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49438 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657
  1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-
 1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49438 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49438 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51312 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25657 -1.25657,0 0,-1.25657 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.
 25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.23781 -1.25657,0 0,-1.23781 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,1.2378 -1.25657,0 0,-1.2378 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.49437 0,1.25656 -1.25657,0 0,-1.25656 1.25657,0 z m 0,2.51313 0,0.50637 -0.0375,0.76894 -1.25657,-0.0563 0.0375,-0.76894 0,0.0375 0,-0.48762 1.25657,0 z m -0.20631,2.56939 -0.0188,0.2063 -0.28132,1.06901 -0.0188,0.075 -1.18155,-0.43136 0.0188,-0.0375 -0.0188,0.0563 0.26257,-1.01275 -0.0188,0.0563 0.0188,-0.16879 1.2378,0.18755 z m -0.80645,2.56939 -0.35633,0.75018 -0.24382,0.39385 -1.06901,-0.63766 0.22505,-0.37509 -0.0375,0.0563 0.35634,-0.73143 1.12528,0.54389 z m -1.35033,2.2318 -0.22506,0.31883 -0.6189,0.67517 -0.91898,-0.82521 0.58139,-0.65641 -0.0375,0.0375 0.22506,-0.30008 0.994,0.75019 z m -1.85672,1.91298 -0.7
 6894,0.60015 -0.30007,0.16879 -0.63766,-1.06902 0.26257,-0.16879 -0.0563,0.0375 0.75019,-0.56264 0.75018,0.994 z m -2.21305,1.4066 -0.48762,0.22505 -0.71268,0.26257 -0.43135,-1.16279 0.67517,-0.26256 -0.0375,0.0188 0.45011,-0.2063 0.54388,1.12528 z m -2.47561,0.86271 -0.13129,0.0375 -1.12528,0.1688 -0.0938,0 -0.0563,-1.25657 0.0563,0 -0.0563,0 1.05026,-0.15004 -0.0563,0.0188 0.11253,-0.0375 0.30008,1.21905 z m -2.62566,0.26257 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.5131
 3,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.513133,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.494372,0 -1.256563,0 0,-1.25656 1.256563,0 0,1.25656 z m -2.513126,0 -1.237808,0 0,-1.25656 1.237808,0 0,1.25656 z m -2.494371,0 -0.112528,0 -1.144035,-0.0563 -0.07502,-0.0188 0.168792,-1.23781 0.05626,0 -0.05626,0 1.087771,0.0563 -0.03751,0 0.112528,0 0,1.25656 z m -2.625654,-0.30008 -0.843961,-0.2063 -0.431357,-0.16879 0.450112,-1.16279 0.393848,0.15004 -0.07502,-0.0188 0.806451,0.20631 -0.300074,1.20029 z m -2.475617,-0.88147 -0.393848,-0.18754 -0.750187,-0.45011 0.656413,-1.06902 0.712678,0.43136 -0.05626,-0.0375 0.375093,0.18754 -0.543885,1.12528 z m -2.250561,-1.44411 -0.768941,-0.69392 -0.168792,-0.2063 0.918978,-0.84396 0.168793,0.18755 -0.05626,-0.0375 0.750186,0.67517 -0.84396,0.91897 z m -1
 .800448,-1.89422 -0.356339,-0.46886 -0.375094,-0.61891 1.069017,-0.65641 0.356339,0.60015 -0.01875,-0.0563 0.337584,0.45012 -1.012752,0.75018 z m -1.312827,-2.25056 -0.07502,-0.15004 -0.393848,-1.05026 -0.01875,-0.0938 1.200299,-0.30007 0.01875,0.0563 -0.01875,-0.0563 0.375094,0.97524 -0.01875,-0.0375 0.05626,0.11253 -1.12528,0.54388 z m -0.787697,-2.56939 -0.131282,-0.8252 -0.01875,-0.46887 1.256563,-0.075 0.01875,0.45011 -0.01875,-0.075 0.131283,0.80646 -1.237809,0.18754 z"
-           id="path3391"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <text
-           x="15.596898"
-           y="152.72169"
-           id="text3393"
-           xml:space="preserve"
-           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Operator</text>
-        <path
-           d="m 41.344675,137.05914 0,-31.63913 -1.247186,0 0,31.63913 1.247186,0 z m 1.875467,-30.38256 -2.503749,-5.0075 -2.494371,5.0075 4.99812,0 z"
-           id="path3395"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 40.106867,159.79918 -0.722055,40.14438 1.247185,0.0188 0.731433,-40.135 -1.256563,-0.0281 z m -2.578768,38.85969 2.409976,5.045 2.588144,-4.95123 -4.99812,-0.0938 z"
-           id="path3397"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <text
-           x="94.709129"
-           y="152.72169"
-           id="text3399"
-           xml:space="preserve"
-           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Stream</text>
-        <path
-           d="m 114.72233,137.0779 -5.04501,-55.213756 1.24719,-0.112528 5.03563,55.213754 -1.23781,0.11253 z m -6.79857,-53.797778 2.03488,-5.204421 2.94449,4.754309 -4.97937,0.450112 z"
-           id="path3401"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 114.71295,159.7523 -4.21042,40.15375 1.24718,0.13128 4.21043,-40.16313 -1.24719,-0.1219 z m -5.94523,38.71902 1.96924,5.23255 3.01013,-4.7168 -4.97937,-0.51575 z"
-           id="path3403"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 515.52843,213.10934 c 4.87622,0 8.83345,0.65641 8.83345,1.48162 l 0,97.76811 c 0,0.8252 3.95724,1.48162 8.83345,1.48162 -4.87621,0 -8.83345,0.65641 -8.83345,1.46286 l 0,97.78686 c 0,0.82521 -3.95723,1.48162 -8.83345,1.48162"
-           id="path3405"
-           style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-        <text
-           x="548.92151"
-           y="311.33228"
-           id="text3407"
-           xml:space="preserve"
-           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Streaming Dataflow</text>
-        <text
-           x="552.97247"
-           y="324.83566"
-           id="text3409"
-           xml:space="preserve"
-           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(parallelized view)</text>
-        <path
-           d="m 515.52843,19.14852 c 4.87622,0 8.83345,0.675169 8.83345,1.481619 l 0,38.315796 c 0,0.806451 3.95724,1.462864 8.83345,1.462864 -4.87621,0 -8.83345,0.675169 -8.83345,1.481619 l 0,38.315792 c 0,0.80645 -3.95723,1.46287 -8.83345,1.46287"
-           id="path3411"
-           style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-        <text
-           x="548.92151"
-           y="57.884895"
-           id="text3413"
-           xml:space="preserve"
-           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Streaming Dataflow</text>
-        <text
-           x="554.77295"
-           y="71.38826"
-           id="text3415"
-           xml:space="preserve"
-           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">(condensed view)</text>
-        <text
-           x="436.57739"
-           y="455.61459"
-           id="text3417"
-           xml:space="preserve"
-           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">parallelism = 1</text>
-        <text
-           x="333.19894"
-           y="430.43018"
-           id="text3419"
-           xml:space="preserve"
-           style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">parallelism = 2</text>
-        <path
-           d="m 472.86155,433.04538 -16.93547,-110.48377 1.23781,-0.18755 16.93547,110.48378 -1.23781,0.18754 z m -18.58588,-108.96464 1.70668,-5.32633 3.2258,4.57614 -4.93248,0.75019 z"
-           id="path3421"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 375.41227,414.94712 -40.30379,-145.70505 1.2003,-0.31882 40.30379,145.68629 -1.2003,0.33758 z m -41.78541,-143.99837 1.06902,-5.47636 3.75094,4.14478 -4.81996,1.33158 z"
-           id="path3423"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 352.00644,414.6658 -9.45235,-15.17253 1.06902,-0.67516 9.45235,15.19128 -1.06902,0.65641 z m -10.37133,-13.12827 -0.52513,-5.57013 4.76369,2.92572 -4.23856,2.64441 z"
-           id="path3425"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path
-           d="m 145.59252,398.51803 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.
 49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0
 ,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.4
 9437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-1.2378 1.25656,0 0,1.2378 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23
 781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.05027 0.0188,-0.22505 1.2378,0.0563 0,0.2063 0,-0.0375 0,1.05027 -1.25656,0 z m 0.11253,-2.58815 0.13128,-0.84396 0.11253,-0.45011 1.21905,0.31883 -0.11252,0.4126 0,-0.0563 -0.11253,0.80645 -1.23781,-0.18755 z m 0.63766,-2.53188 0.18754,-0.52513 0.31883,-0.69392 1.12528,0.54388 -0.30007,0.65642 0.0187,-0.0563 -0.18754,0.48762 -1.16279,-0.4126 z m 1.16279,-2.34433 0.11253,-0.18755 0.65641,-0.88147 0.994,0.75019 -0.63766,0.86271 0.0375,-0.0563 -0.0938,0.15004 -1.06901,-0.63766 z m 1.70667,-2.06302 0.69393,-0.63766 0.30007,-0.22505 0.75019,1.01275 -0.28132,0.2063 0.0375,-0.0375 -0.67517,0.60015 -0.82521,-0.91898 z m 2.06302,-1.59414 0.50637,-0.31883 0.65642,-0.30008 0.52513,1.12528 -0.61891,0.30008 0.0563,-0.0375 -0.46887,0.30007 -0.65641,-1.06901 z m 2.38184,-1.10653 0.26257,-0.0938
  0.99399,-0.26257 0.31883,1.2003 -0.97524,0.26257 0.0563,-0.0188 -0.24381,0.075 -0.4126,-1.16279 z m 2.6069,-0.5814 1.14403,-0.0563 0.13129,0 0,1.25656 -0.11253,0 0.0375,0 -1.14404,0.0563 -0.0563,-1.25657 z m 2.53188,-0.0563 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51313,0 1.2378,0 0,1.25656 -1.2378,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.49437,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.
 51312,0 1.23781,0 0,1.25656 -1.23781,0 0,-1.25656 z m 2.49437,0 1.25657,0 0,1.25656 -1.25657,0 0,-1.25656 z m 2.49438,0 1.25656,0 0,1.25656 -1.25656,0 0,-1.25656 z m 2.51312,0 0.93774,0 0.33758,0.0188 -0.0563,1.25656 -0.33759,-0.0188 0.0375,0 -0.91898,0 0,-1.25656 z m 2.58815,0.13128 0.71267,0.11253 0.5814,0.15004 -0.31883,1.2003 -0.54389,-0.13128 0.0563,0 -0.67517,-0.0938 0.18755,-1.23781 z m 2.53188,0.65642 0.39385,0.15003 0.80645,0.3751 -0.54389,1.12528 -0.76894,-0.35634 0.0563,0.0187 -0.37509,-0.13128 0.43136,-1.18154 z m 2.32558,1.18154 0.075,0.0563 0.91897,0.67516 0.0938,0.0938 -0.82521,0.93773 -0.0938,-0.075 0.0375,0.0375 -0.86272,-0.65641 0.0563,0.0375 -0.0563,-0.0375 0.65642,-1.06902 z m 2.04426,1.74419 0.54388,0.60015 0.30008,0.39384 -0.994,0.75019 -0.28132,-0.37509 0.0375,0.0375 -0.52513,-0.5814 0.91898,-0.8252 z m 1.57539,2.06301 0.24381,0.4126 0.37509,0.75019 -1.12528,0.54388 -0.35634,-0.73143 0.0375,0.0563 -0.24381,-0.3751 1.06902,-0.65641 z m 1.08777,2.4006 0.0563,0.1
 3128 0.30008,1.12528 0,0.0563 -1.23781,0.18755 0,-0.0188 0.0187,0.0563 -0.28132,-1.06902 0.0188,0.0563 -0.0563,-0.11253 1.18154,-0.4126 z m 0.54389,2.62565 0.0375,1.01275 0,0.26257 -1.2378,0 0,-0.26257 0,0.0375 -0.0563,-0.99399 1.25657,-0.0563 z m 0.0375,2.53188 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1
 .2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.2378 -1.2378,0 0,-1.2378 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.2
 5657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.2378 -1.2378,0 0,-1.2378 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1
 .2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.2378 -1.2378,0 0,-1.2378 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49438 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51312 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.49437 0,1.25657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.49437 0,1.2
 5657 -1.2378,0 0,-1.25657 1.2378,0 z m 0,2.51313 0,1.23781 -1.2378,0 0,-1.23781 1.2378,0 z m 0,2.49437 0,1.25656 -1.2378,0 0,-1.25656 1.2378,0 z m 0,2.51313 0,1.2378 -1.2378,0 0,-1.2378 1.2378,0 z m -0.0187,2.51312 -0.0375,0.73143 -0.0938,0.5814 -1.21906,-0.18755 0.075,-0.56264 0,0.075 0.0188,-0.69392 1.25656,0.0563 z m -0.39385,2.6069 -0.11253,0.43136 -0.30007,0.80645 -1.16279,-0.43136 0.28132,-0.76894 -0.0188,0.0375 0.0938,-0.39385 1.21905,0.31883 z m -0.95649,2.43811 -0.0563,0.11253 -0.5814,0.97524 -0.0563,0.075 -1.01275,-0.75019 0.0375,-0.0563 -0.0375,0.0563 0.56264,-0.91898 -0.0188,0.0563 0.0375,-0.0938 1.12528,0.54389 z m -1.50037,2.1943 -0.5814,0.65641 -0.33758,0.28132 -0.84396,-0.91898 0.31883,-0.28132 -0.0563,0.0563 0.56264,-0.63765 0.93773,0.84396 z m -1.93173,1.78169 -0.4126,0.30007 -0.67517,0.41261 -0.65642,-1.06902 0.65642,-0.39385 -0.0375,0.0375 0.39385,-0.30007 0.73143,1.01275 z m -2.25056,1.31283 -0.16879,0.075 -1.08778,0.39384 -0.0188,0.0188 -0.31883,-1.21906 0,0 -0
 .0563,0.0188 1.0315,-0.37509 -0.0563,0.0188 0.13128,-0.0563 0.54389,1.12528 z m -2.56939,0.80645 -0.994,0.15004 -0.30007,0 -0.0563,-1.23781 0.26257,-0.0188 -0.0563,0 0.95649,-0.13128 0.18755,1.23781 z m -2.56939,0.2063 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656
  z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.53189,-0.0375 -0.58139,-0.0188 -0.71268,-0.11253 0.18755,-1.23781 0.67517,0.0938 -0.0563,0 0.54389,0.0375 -0.0563,1.23781 z m -2.58814,-0.4126 -0.30007,-0.0938 -0.93774,-0.33758 0.43136,-1.16279 0.90022,0.31883 -0.0563,-0.0188 0.28132,0.075 -0.31883,1.21906 z m -2.47562,-1.01276 -0.93773,-0.56264 -0.16879,-0.13128 0.75018,-0.994 0.15004,0.11253 -0.0563,-0.0375 0.91898,0.56264 -0.65642,1.05026 z m -2.13803,-1.50037 -0.54388,-0.48762 -0.39385,-0.43136 0.93773,-0.84396 0.35634,0.4126 -0.0375,-0.0563 0.52513,0.48762 -0.84396,0.91898 z m -1.76294,-1.93173 -0.22505,-0.31883 -0.48763,-0.7877 1.06902,-0.63765 0.46887,0.75018 -0.0375,-0.0563 0.22505,0.30007 -1.01275,0.75019 z m -1.29407,-2.26932 -0.0188,-0.0563 -0.39384,-1.08777 -0.0375,-0.15004 1.2003,-0.3188
 3 0.0375,0.11253 -0.0188,-0.0375 0.37509,1.03151 -0.0187,-0.0563 0,0.0188 -1.12528,0.54388 z m -0.76894,-2.56939 -0.13129,-0.88147 -0.0187,-0.43135 1.25656,-0.0563 0.0188,0.39385 -0.0188,-0.075 0.13129,0.86272 -1.23781,0.18754 z"
-           id="path3427"
-           style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-        <path

<TRUNCATED>

[15/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/gelly/graph_api.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/gelly/graph_api.md b/docs/dev/libs/gelly/graph_api.md
new file mode 100644
index 0000000..465c24f
--- /dev/null
+++ b/docs/dev/libs/gelly/graph_api.md
@@ -0,0 +1,833 @@
+---
+title: Graph API
+nav-parent_id: graphs
+nav-pos: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+Graph Representation
+-----------
+
+In Gelly, a `Graph` is represented by a `DataSet` of vertices and a `DataSet` of edges.
+
+The `Graph` nodes are represented by the `Vertex` type. A `Vertex` is defined by a unique ID and a value. `Vertex` IDs should implement the `Comparable` interface. Vertices without a value can be represented by setting the value type to `NullValue`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// create a new vertex with a Long ID and a String value
+Vertex<Long, String> v = new Vertex<Long, String>(1L, "foo");
+
+// create a new vertex with a Long ID and no value
+Vertex<Long, NullValue> v = new Vertex<Long, NullValue>(1L, NullValue.getInstance());
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// create a new vertex with a Long ID and a String value
+val v = new Vertex(1L, "foo")
+
+// create a new vertex with a Long ID and no value
+val v = new Vertex(1L, NullValue.getInstance())
+{% endhighlight %}
+</div>
+</div>
+
+The graph edges are represented by the `Edge` type. An `Edge` is defined by a source ID (the ID of the source `Vertex`), a target ID (the ID of the target `Vertex`) and an optional value. The source and target IDs should be of the same type as the `Vertex` IDs. Edges with no value have a `NullValue` value type.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Edge<Long, Double> e = new Edge<Long, Double>(1L, 2L, 0.5);
+
+// reverse the source and target of this edge
+Edge<Long, Double> reversed = e.reverse();
+
+Double weight = e.getValue(); // weight = 0.5
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val e = new Edge(1L, 2L, 0.5)
+
+// reverse the source and target of this edge
+val reversed = e.reverse
+
+val weight = e.getValue // weight = 0.5
+{% endhighlight %}
+</div>
+</div>
+
+In Gelly, an `Edge` is always directed from the source vertex to the target vertex. A `Graph` may be undirected if for
+every `Edge` it contains a matching `Edge` from the target vertex to the source vertex.
+
+{% top %}
+
+Graph Creation
+-----------
+
+You can create a `Graph` in the following ways:
+
+* from a `DataSet` of edges and an optional `DataSet` of vertices:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+DataSet<Vertex<String, Long>> vertices = ...
+
+DataSet<Edge<String, Double>> edges = ...
+
+Graph<String, Long, Double> graph = Graph.fromDataSet(vertices, edges, env);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val vertices: DataSet[Vertex[String, Long]] = ...
+
+val edges: DataSet[Edge[String, Double]] = ...
+
+val graph = Graph.fromDataSet(vertices, edges, env)
+{% endhighlight %}
+</div>
+</div>
+
+* from a `DataSet` of `Tuple2` representing the edges. Gelly will convert each `Tuple2` to an `Edge`, where the first field will be the source ID and the second field will be the target ID. Both vertex and edge values will be set to `NullValue`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+DataSet<Tuple2<String, String>> edges = ...
+
+Graph<String, NullValue, NullValue> graph = Graph.fromTuple2DataSet(edges, env);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val edges: DataSet[(String, String)] = ...
+
+val graph = Graph.fromTuple2DataSet(edges, env)
+{% endhighlight %}
+</div>
+</div>
+
+* from a `DataSet` of `Tuple3` and an optional `DataSet` of `Tuple2`. In this case, Gelly will convert each `Tuple3` to an `Edge`, where the first field will be the source ID, the second field will be the target ID and the third field will be the edge value. Equivalently, each `Tuple2` will be converted to a `Vertex`, where the first field will be the vertex ID and the second field will be the vertex value:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+DataSet<Tuple2<String, Long>> vertexTuples = env.readCsvFile("path/to/vertex/input").types(String.class, Long.class);
+
+DataSet<Tuple3<String, String, Double>> edgeTuples = env.readCsvFile("path/to/edge/input").types(String.class, String.class, Double.class);
+
+Graph<String, Long, Double> graph = Graph.fromTupleDataSet(vertexTuples, edgeTuples, env);
+{% endhighlight %}
+
+* from a CSV file of Edge data and an optional CSV file of Vertex data. In this case, Gelly will convert each row from the Edge CSV file to an `Edge`, where the first field will be the source ID, the second field will be the target ID and the third field (if present) will be the edge value. Equivalently, each row from the optional Vertex CSV file will be converted to a `Vertex`, where the first field will be the vertex ID and the second field (if present) will be the vertex value. In order to get a `Graph` from a `GraphCsvReader`, one has to specify the types, using one of the following methods:
+
+- `types(Class<K> vertexKey, Class<VV> vertexValue, Class<EV> edgeValue)`: both vertex and edge values are present.
+- `edgeTypes(Class<K> vertexKey, Class<EV> edgeValue)`: the Graph has edge values, but no vertex values.
+- `vertexTypes(Class<K> vertexKey, Class<VV> vertexValue)`: the Graph has vertex values, but no edge values.
+- `keyType(Class<K> vertexKey)`: the Graph has no vertex values and no edge values.
+
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// create a Graph with String Vertex IDs, Long Vertex values and Double Edge values
+Graph<String, Long, Double> graph = Graph.fromCsvReader("path/to/vertex/input", "path/to/edge/input", env)
+					.types(String.class, Long.class, Double.class);
+
+
+// create a Graph with neither Vertex nor Edge values
+Graph<Long, NullValue, NullValue> simpleGraph = Graph.fromCsvReader("path/to/edge/input", env).keyType(Long.class);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val vertexTuples = env.readCsvFile[String, Long]("path/to/vertex/input")
+
+val edgeTuples = env.readCsvFile[String, String, Double]("path/to/edge/input")
+
+val graph = Graph.fromTupleDataSet(vertexTuples, edgeTuples, env)
+{% endhighlight %}
+
+* from a CSV file of Edge data and an optional CSV file of Vertex data.
+In this case, Gelly will convert each row from the Edge CSV file to an `Edge`.
+The first field of each row will be the source ID, the second field will be the target ID and the third field (if present) will be the edge value.
+If the edges have no associated value, set the edge value type parameter (3rd type argument) to `NullValue`.
+You can also specify that the vertices are initialized with a vertex value.
+If you provide a path to a CSV file via `pathVertices`, each row of this file will be converted to a `Vertex`.
+The first field of each row will be the vertex ID and the second field will be the vertex value.
+If you provide a vertex value initializer `MapFunction` via the `vertexValueInitializer` parameter, this function is used to generate the vertex values, and the set of vertices is created automatically from the edges input.
+If the vertices have no associated value, set the vertex value type parameter (2nd type argument) to `NullValue`; the vertices will then be created automatically from the edges input with a vertex value of type `NullValue`.
+
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// create a Graph with String Vertex IDs, Long Vertex values and Double Edge values
+val graph = Graph.fromCsvReader[String, Long, Double](
+		pathVertices = "path/to/vertex/input",
+		pathEdges = "path/to/edge/input",
+		env = env)
+
+
+// create a Graph with neither Vertex nor Edge values
+val simpleGraph = Graph.fromCsvReader[Long, NullValue, NullValue](
+		pathEdges = "path/to/edge/input",
+		env = env)
+
+// create a Graph with Double Vertex values generated by a vertex value initializer and no Edge values
+val simpleGraph = Graph.fromCsvReader[Long, Double, NullValue](
+        pathEdges = "path/to/edge/input",
+        vertexValueInitializer = new MapFunction[Long, Double]() {
+            def map(id: Long): Double = {
+                id.toDouble
+            }
+        },
+        env = env)
+{% endhighlight %}
+</div>
+</div>
+
+
+* from a `Collection` of edges and an optional `Collection` of vertices:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+List<Vertex<Long, Long>> vertexList = new ArrayList...
+
+List<Edge<Long, String>> edgeList = new ArrayList...
+
+Graph<Long, Long, String> graph = Graph.fromCollection(vertexList, edgeList, env);
+{% endhighlight %}
+
+If no vertex input is provided during Graph creation, Gelly will automatically produce the `Vertex` `DataSet` from the edge input. In this case, the created vertices will have no values. Alternatively, you can provide a `MapFunction` as an argument to the creation method, in order to initialize the `Vertex` values:
+
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// initialize the vertex value to be equal to the vertex ID
+Graph<Long, Long, String> graph = Graph.fromCollection(edgeList,
+				new MapFunction<Long, Long>() {
+					public Long map(Long value) {
+						return value;
+					}
+				}, env);
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val vertexList = List(...)
+
+val edgeList = List(...)
+
+val graph = Graph.fromCollection(vertexList, edgeList, env)
+{% endhighlight %}
+
+If no vertex input is provided during Graph creation, Gelly will automatically produce the `Vertex` `DataSet` from the edge input. In this case, the created vertices will have no values. Alternatively, you can provide a `MapFunction` as an argument to the creation method, in order to initialize the `Vertex` values:
+
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// initialize the vertex value to be equal to the vertex ID
+val graph = Graph.fromCollection(edgeList,
+    new MapFunction[Long, Long] {
+       def map(id: Long): Long = id
+    }, env)
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+Graph Properties
+------------
+
+Gelly includes the following methods for retrieving various Graph properties and metrics:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// get the Vertex DataSet
+DataSet<Vertex<K, VV>> getVertices()
+
+// get the Edge DataSet
+DataSet<Edge<K, EV>> getEdges()
+
+// get the IDs of the vertices as a DataSet
+DataSet<K> getVertexIds()
+
+// get the source-target pairs of the edge IDs as a DataSet
+DataSet<Tuple2<K, K>> getEdgeIds()
+
+// get a DataSet of <vertex ID, in-degree> pairs for all vertices
+DataSet<Tuple2<K, LongValue>> inDegrees()
+
+// get a DataSet of <vertex ID, out-degree> pairs for all vertices
+DataSet<Tuple2<K, LongValue>> outDegrees()
+
+// get a DataSet of <vertex ID, degree> pairs for all vertices, where degree is the sum of in- and out-degrees
+DataSet<Tuple2<K, LongValue>> getDegrees()
+
+// get the number of vertices
+long numberOfVertices()
+
+// get the number of edges
+long numberOfEdges()
+
+// get a DataSet of Triplets<srcVertex, trgVertex, edge>
+DataSet<Triplet<K, VV, EV>> getTriplets()
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// get the Vertex DataSet
+getVertices: DataSet[Vertex[K, VV]]
+
+// get the Edge DataSet
+getEdges: DataSet[Edge[K, EV]]
+
+// get the IDs of the vertices as a DataSet
+getVertexIds: DataSet[K]
+
+// get the source-target pairs of the edge IDs as a DataSet
+getEdgeIds: DataSet[(K, K)]
+
+// get a DataSet of <vertex ID, in-degree> pairs for all vertices
+inDegrees: DataSet[(K, LongValue)]
+
+// get a DataSet of <vertex ID, out-degree> pairs for all vertices
+outDegrees: DataSet[(K, LongValue)]
+
+// get a DataSet of <vertex ID, degree> pairs for all vertices, where degree is the sum of in- and out-degrees
+getDegrees: DataSet[(K, LongValue)]
+
+// get the number of vertices
+numberOfVertices: Long
+
+// get the number of edges
+numberOfEdges: Long
+
+// get a DataSet of Triplets<srcVertex, trgVertex, edge>
+getTriplets: DataSet[Triplet[K, VV, EV]]
+
+{% endhighlight %}
+</div>
+</div>
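+
+For example, a minimal Java sketch (assuming a graph with `Long` vertex IDs and values and `Double` edge values) that uses a few of these methods:
+
+{% highlight java %}
+Graph<Long, Long, Double> graph = ...
+
+// count the vertices and edges; these methods trigger execution
+long vertexCount = graph.numberOfVertices();
+long edgeCount = graph.numberOfEdges();
+
+// compute the total (in + out) degree of every vertex and print it
+DataSet<Tuple2<Long, LongValue>> degrees = graph.getDegrees();
+degrees.print();
+{% endhighlight %}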
+
+{% top %}
+
+Graph Transformations
+-----------------
+
+* <strong>Map</strong>: Gelly provides specialized methods for applying a map transformation on the vertex values or edge values. `mapVertices` and `mapEdges` return a new `Graph`, where the IDs of the vertices (or edges) remain unchanged, while the values are transformed according to the provided user-defined map function. The map functions also allow changing the type of the vertex or edge values.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+Graph<Long, Long, Long> graph = Graph.fromDataSet(vertices, edges, env);
+
+// increment each vertex value by one
+Graph<Long, Long, Long> updatedGraph = graph.mapVertices(
+				new MapFunction<Vertex<Long, Long>, Long>() {
+					public Long map(Vertex<Long, Long> value) {
+						return value.getValue() + 1;
+					}
+				});
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+val graph = Graph.fromDataSet(vertices, edges, env)
+
+// increment each vertex value by one
+val updatedGraph = graph.mapVertices(v => v.getValue + 1)
+{% endhighlight %}
+</div>
+</div>
+
+* <strong>Translate</strong>: Gelly provides specialized methods for translating the value and/or type of vertex and edge IDs (`translateGraphIds`), vertex values (`translateVertexValues`), or edge values (`translateEdgeValues`). Translation is performed by a user-defined map function; several common implementations are provided in the `org.apache.flink.graph.asm.translate` package. The same `MapFunction` can be used for all three translate methods.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+Graph<Long, Long, Long> graph = Graph.fromDataSet(vertices, edges, env);
+
+// translate each vertex and edge ID to a String
+Graph<String, Long, Long> updatedGraph = graph.translateGraphIds(
+				new MapFunction<Long, String>() {
+					public String map(Long id) {
+						return id.toString();
+					}
+				});
+
+// translate vertex IDs, edge IDs, vertex values, and edge values to LongValue
+Graph<LongValue, LongValue, LongValue> translatedGraph = graph
+                .translateGraphIds(new LongToLongValue())
+                .translateVertexValues(new LongToLongValue())
+                .translateEdgeValues(new LongToLongValue());
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+val graph = Graph.fromDataSet(vertices, edges, env)
+
+// translate each vertex and edge ID to a String
+val updatedGraph = graph.translateGraphIds(id => id.toString)
+{% endhighlight %}
+</div>
+</div>
+
+
+* <strong>Filter</strong>: A filter transformation applies a user-defined filter function on the vertices or edges of the `Graph`. `filterOnEdges` will create a sub-graph of the original graph, keeping only the edges that satisfy the provided predicate. Note that the vertex dataset will not be modified. Similarly, `filterOnVertices` applies a filter on the vertices of the graph. Edges whose source and/or target do not satisfy the vertex predicate are removed from the resulting edge dataset. The `subgraph` method can be used to apply a filter function to the vertices and the edges at the same time.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Graph<Long, Long, Long> graph = ...
+
+graph.subgraph(
+		new FilterFunction<Vertex<Long, Long>>() {
+			public boolean filter(Vertex<Long, Long> vertex) {
+				// keep only vertices with positive values
+				return (vertex.getValue() > 0);
+			}
+		},
+		new FilterFunction<Edge<Long, Long>>() {
+			public boolean filter(Edge<Long, Long> edge) {
+				// keep only edges with negative values
+				return (edge.getValue() < 0);
+			}
+		});
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val graph: Graph[Long, Long, Long] = ...
+
+// keep only vertices with positive values
+// and only edges with negative values
+graph.subgraph((vertex => vertex.getValue > 0), (edge => edge.getValue < 0))
+{% endhighlight %}
+</div>
+</div>
+
+<p class="text-center">
+    <img alt="Filter Transformations" width="80%" src="{{ site.baseurl }}/fig/gelly-filter.png"/>
+</p>
+
+* <strong>Join</strong>: Gelly provides specialized methods for joining the vertex and edge datasets with other input datasets. `joinWithVertices` joins the vertices with a `Tuple2` input data set. The join is performed using the vertex ID and the first field of the `Tuple2` input as the join keys. The method returns a new `Graph` where the vertex values have been updated according to a provided user-defined transformation function.
+Similarly, an input dataset can be joined with the edges, using one of three methods. `joinWithEdges` expects an input `DataSet` of `Tuple3` and joins on the composite key of both source and target vertex IDs. `joinWithEdgesOnSource` expects a `DataSet` of `Tuple2` and joins on the source key of the edges and the first attribute of the input dataset, while `joinWithEdgesOnTarget` expects a `DataSet` of `Tuple2` and joins on the target key of the edges and the first attribute of the input dataset. All three methods apply a transformation function on the edge and the input data set values.
+Note that if the input dataset contains a key multiple times, all Gelly join methods will only consider the first value encountered.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Graph<Long, Double, Double> network = ...
+
+DataSet<Tuple2<Long, LongValue>> vertexOutDegrees = network.outDegrees();
+
+// assign the transition probabilities as the edge weights
+Graph<Long, Double, Double> networkWithWeights = network.joinWithEdgesOnSource(vertexOutDegrees,
+				new EdgeJoinFunction<Double, LongValue>() {
+					public Double edgeJoin(Double edgeValue, LongValue inputValue) {
+						return edgeValue / inputValue.getValue();
+					}
+				});
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val network: Graph[Long, Double, Double] = ...
+
+val vertexOutDegrees: DataSet[(Long, LongValue)] = network.outDegrees
+
+// assign the transition probabilities as the edge weights
+val networkWithWeights = network.joinWithEdgesOnSource(vertexOutDegrees, (v1: Double, v2: LongValue) => v1 / v2.getValue)
+{% endhighlight %}
+</div>
+</div>
+
+* <strong>Reverse</strong>: the `reverse()` method returns a new `Graph` where the direction of all edges has been reversed.
+
+* <strong>Undirected</strong>: In Gelly, a `Graph` is always directed. Undirected graphs can be represented by adding all opposite-direction edges to a graph. For this purpose, Gelly provides the `getUndirected()` method.
+
+* <strong>Union</strong>: Gelly's `union()` method performs a union operation on the vertex and edge sets of the specified graph and the current graph. Duplicate vertices are removed from the resulting `Graph`; duplicate edges, however, are preserved.
+
+<p class="text-center">
+    <img alt="Union Transformation" width="50%" src="{{ site.baseurl }}/fig/gelly-union.png"/>
+</p>
+
+* <strong>Difference</strong>: Gelly's `difference()` method performs a difference on the vertex and edge sets of the current graph and the specified graph (a combined sketch of these set-type methods follows the intersect example below).
+
+* <strong>Intersect</strong>: Gelly's `intersect()` method performs an intersect on the edge
+ sets of the current graph and the specified graph. The result is a new `Graph` that contains all
+ edges that exist in both input graphs. Two edges are considered equal if they have the same source
+ identifier, target identifier and edge value. Vertices in the resulting graph have no
+ value. If vertex values are required, one can, for example, retrieve them from one of the input graphs using
+ the `joinWithVertices()` method.
+ Depending on the parameter `distinct`, equal edges are either contained once in the resulting
+ `Graph` or as often as there are pairs of equal edges in the input graphs.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// create first graph from edges {(1, 3, 12), (1, 3, 13), (1, 3, 13)}
+List<Edge<Long, Long>> edges1 = ...
+Graph<Long, NullValue, Long> graph1 = Graph.fromCollection(edges1, env);
+
+// create second graph from edges {(1, 3, 13)}
+List<Edge<Long, Long>> edges2 = ...
+Graph<Long, NullValue, Long> graph2 = Graph.fromCollection(edges2, env);
+
+// Using distinct = true results in {(1,3,13)}
+Graph<Long, NullValue, Long> intersect1 = graph1.intersect(graph2, true);
+
+// Using distinct = false results in {(1,3,13),(1,3,13)} as there is one edge pair
+Graph<Long, NullValue, Long> intersect2 = graph1.intersect(graph2, false);
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// create first graph from edges {(1, 3, 12), (1, 3, 13), (1, 3, 13)}
+val edges1: List[Edge[Long, Long]] = ...
+val graph1 = Graph.fromCollection(edges1, env)
+
+// create second graph from edges {(1, 3, 13)}
+val edges2: List[Edge[Long, Long]] = ...
+val graph2 = Graph.fromCollection(edges2, env)
+
+
+// Using distinct = true results in {(1,3,13)}
+val intersect1 = graph1.intersect(graph2, true)
+
+// Using distinct = false results in {(1,3,13),(1,3,13)} as there is one edge pair
+val intersect2 = graph1.intersect(graph2, false)
+{% endhighlight %}
+</div>
+</div>
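+
+A combined Java sketch of the set-type methods above (reverse, undirected, union and difference); the variable names are ours and both graphs are assumed to have identical type parameters:
+
+{% highlight java %}
+Graph<Long, Long, Long> graph1 = ...
+Graph<Long, Long, Long> graph2 = ...
+
+// reverse the direction of every edge
+Graph<Long, Long, Long> reversed = graph1.reverse();
+
+// add all opposite-direction edges to obtain an undirected graph
+Graph<Long, Long, Long> undirected = graph1.getUndirected();
+
+// union: duplicate vertices are removed, duplicate edges are preserved
+Graph<Long, Long, Long> union = graph1.union(graph2);
+
+// difference: remove graph2's vertices and edges from graph1
+Graph<Long, Long, Long> difference = graph1.difference(graph2);
+{% endhighlight %}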
+
+{% top %}
+
+Graph Mutations
+-----------
+
+Gelly includes the following methods for adding and removing vertices and edges from an input `Graph`:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// adds a Vertex to the Graph. If the Vertex already exists, it will not be added again.
+Graph<K, VV, EV> addVertex(final Vertex<K, VV> vertex)
+
+// adds a list of vertices to the Graph. If the vertices already exist in the graph, they will not be added once more.
+Graph<K, VV, EV> addVertices(List<Vertex<K, VV>> verticesToAdd)
+
+// adds an Edge to the Graph. If the source and target vertices do not exist in the graph, they will also be added.
+Graph<K, VV, EV> addEdge(Vertex<K, VV> source, Vertex<K, VV> target, EV edgeValue)
+
+// adds a list of edges to the Graph. When adding an edge for a non-existing set of vertices, the edge is considered invalid and ignored.
+Graph<K, VV, EV> addEdges(List<Edge<K, EV>> newEdges)
+
+// removes the given Vertex and its edges from the Graph.
+Graph<K, VV, EV> removeVertex(Vertex<K, VV> vertex)
+
+// removes the given list of vertices and their edges from the Graph
+Graph<K, VV, EV> removeVertices(List<Vertex<K, VV>> verticesToBeRemoved)
+
+// removes *all* edges that match the given Edge from the Graph.
+Graph<K, VV, EV> removeEdge(Edge<K, EV> edge)
+
+// removes *all* edges that match the edges in the given list
+Graph<K, VV, EV> removeEdges(List<Edge<K, EV>> edgesToBeRemoved)
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// adds a Vertex to the Graph. If the Vertex already exists, it will not be added again.
+addVertex(vertex: Vertex[K, VV])
+
+// adds a list of vertices to the Graph. If the vertices already exist in the graph, they will not be added once more.
+addVertices(verticesToAdd: List[Vertex[K, VV]])
+
+// adds an Edge to the Graph. If the source and target vertices do not exist in the graph, they will also be added.
+addEdge(source: Vertex[K, VV], target: Vertex[K, VV], edgeValue: EV)
+
+// adds a list of edges to the Graph. When adding an edge for a non-existing set of vertices, the edge is considered invalid and ignored.
+addEdges(edges: List[Edge[K, EV]])
+
+// removes the given Vertex and its edges from the Graph.
+removeVertex(vertex: Vertex[K, VV])
+
+// removes the given list of vertices and their edges from the Graph
+removeVertices(verticesToBeRemoved: List[Vertex[K, VV]])
+
+// removes *all* edges that match the given Edge from the Graph.
+removeEdge(edge: Edge[K, EV])
+
+// removes *all* edges that match the edges in the given list
+removeEdges(edgesToBeRemoved: List[Edge[K, EV]])
+{% endhighlight %}
+</div>
+</div>
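+
+For example, a minimal Java sketch of a few of these mutations (IDs and values chosen arbitrarily):
+
+{% highlight java %}
+Graph<Long, Long, Long> graph = ...
+
+// add a new vertex with ID 6 and value 60
+Graph<Long, Long, Long> withVertex = graph.addVertex(new Vertex<Long, Long>(6L, 60L));
+
+// add an edge from vertex 6 to vertex 1; missing endpoints would be added as well
+Graph<Long, Long, Long> withEdge = withVertex.addEdge(
+		new Vertex<Long, Long>(6L, 60L), new Vertex<Long, Long>(1L, 10L), 61L);
+
+// remove vertex 6 and its edges again
+Graph<Long, Long, Long> smaller = withEdge.removeVertex(new Vertex<Long, Long>(6L, 60L));
+{% endhighlight %}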
+
+Neighborhood Methods
+-----------
+
+Neighborhood methods allow vertices to perform an aggregation on their first-hop neighborhood.
+`reduceOnEdges()` can be used to compute an aggregation on the values of the neighboring edges of a vertex and `reduceOnNeighbors()` can be used to compute an aggregation on the values of the neighboring vertices. These methods assume associative and commutative aggregations and exploit combiners internally, significantly improving performance.
+The neighborhood scope is defined by the `EdgeDirection` parameter, which takes the values `IN`, `OUT` or `ALL`. `IN` will gather all in-coming edges (neighbors) of a vertex, `OUT` will gather all out-going edges (neighbors), while `ALL` will gather all edges (neighbors).
+
+For example, assume that you want to select the minimum weight of all out-edges for each vertex in the following graph:
+
+<p class="text-center">
+    <img alt="reduceOnEdges Example" width="50%" src="{{ site.baseurl }}/fig/gelly-example-graph.png"/>
+</p>
+
+The following code will collect the out-edges for each vertex and apply the `SelectMinWeight()` user-defined function on each of the resulting neighborhoods:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Graph<Long, Long, Double> graph = ...
+
+DataSet<Tuple2<Long, Double>> minWeights = graph.reduceOnEdges(new SelectMinWeight(), EdgeDirection.OUT);
+
+// user-defined function to select the minimum weight
+static final class SelectMinWeight implements ReduceEdgesFunction<Double> {
+
+		@Override
+		public Double reduceEdges(Double firstEdgeValue, Double secondEdgeValue) {
+			return Math.min(firstEdgeValue, secondEdgeValue);
+		}
+}
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val graph: Graph[Long, Long, Double] = ...
+
+val minWeights = graph.reduceOnEdges(new SelectMinWeight, EdgeDirection.OUT)
+
+// user-defined function to select the minimum weight
+final class SelectMinWeight extends ReduceEdgesFunction[Double] {
+	override def reduceEdges(firstEdgeValue: Double, secondEdgeValue: Double): Double = {
+		Math.min(firstEdgeValue, secondEdgeValue)
+	}
+ }
+{% endhighlight %}
+</div>
+</div>
+
+<p class="text-center">
+    <img alt="reduceOnEdges Example" width="50%" src="{{ site.baseurl }}/fig/gelly-reduceOnEdges.png"/>
+</p>
+
+Similarly, assume that you would like to compute the sum of the values of all in-coming neighbors for every vertex. The following code will collect the in-coming neighbors for each vertex and apply the `SumValues()` user-defined function on each neighborhood:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Graph<Long, Long, Double> graph = ...
+
+DataSet<Tuple2<Long, Long>> verticesWithSum = graph.reduceOnNeighbors(new SumValues(), EdgeDirection.IN);
+
+// user-defined function to sum the neighbor values
+static final class SumValues implements ReduceNeighborsFunction<Long> {
+
+	@Override
+	public Long reduceNeighbors(Long firstNeighbor, Long secondNeighbor) {
+		return firstNeighbor + secondNeighbor;
+	}
+}
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val graph: Graph[Long, Long, Double] = ...
+
+val verticesWithSum = graph.reduceOnNeighbors(new SumValues, EdgeDirection.IN)
+
+// user-defined function to sum the neighbor values
+final class SumValues extends ReduceNeighborsFunction[Long] {
+   	override def reduceNeighbors(firstNeighbor: Long, secondNeighbor: Long): Long = {
+    	firstNeighbor + secondNeighbor
+    }
+}
+{% endhighlight %}
+</div>
+</div>
+
+<p class="text-center">
+    <img alt="reduceOnNeighbors Example" width="70%" src="{{ site.baseurl }}/fig/gelly-reduceOnNeighbors.png"/>
+</p>
+
+When the aggregation function is not associative and commutative or when it is desirable to return more than one value per vertex, one can use the more general
+`groupReduceOnEdges()` and `groupReduceOnNeighbors()` methods.
+These methods return zero, one or more values per vertex and provide access to the whole neighborhood.
+
+For example, the following code will output all the vertex pairs which are connected with an edge having a weight of 0.5 or more:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+Graph<Long, Long, Double> graph = ...
+
+DataSet<Tuple2<Vertex<Long, Long>, Vertex<Long, Long>>> vertexPairs = graph.groupReduceOnNeighbors(new SelectLargeWeightNeighbors(), EdgeDirection.OUT);
+
+// user-defined function to select the neighbors which have edges with weight > 0.5
+static final class SelectLargeWeightNeighbors implements NeighborsFunctionWithVertexValue<Long, Long, Double,
+		Tuple2<Vertex<Long, Long>, Vertex<Long, Long>>> {
+
+		@Override
+		public void iterateNeighbors(Vertex<Long, Long> vertex,
+				Iterable<Tuple2<Edge<Long, Double>, Vertex<Long, Long>>> neighbors,
+				Collector<Tuple2<Vertex<Long, Long>, Vertex<Long, Long>>> out) {
+
+			for (Tuple2<Edge<Long, Double>, Vertex<Long, Long>> neighbor : neighbors) {
+				if (neighbor.f0.f2 > 0.5) {
+					out.collect(new Tuple2<Vertex<Long, Long>, Vertex<Long, Long>>(vertex, neighbor.f1));
+				}
+			}
+		}
+}
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val graph: Graph[Long, Long, Double] = ...
+
+val vertexPairs = graph.groupReduceOnNeighbors(new SelectLargeWeightNeighbors, EdgeDirection.OUT)
+
+// user-defined function to select the neighbors which have edges with weight > 0.5
+final class SelectLargeWeightNeighbors extends NeighborsFunctionWithVertexValue[Long, Long, Double,
+  (Vertex[Long, Long], Vertex[Long, Long])] {
+
+	override def iterateNeighbors(vertex: Vertex[Long, Long],
+		neighbors: Iterable[(Edge[Long, Double], Vertex[Long, Long])],
+		out: Collector[(Vertex[Long, Long], Vertex[Long, Long])]) = {
+
+			for (neighbor <- neighbors) {
+				if (neighbor._1.getValue() > 0.5) {
+					out.collect((vertex, neighbor._2))
+				}
+			}
+		}
+   }
+{% endhighlight %}
+</div>
+</div>
+
+When the aggregation computation does not require access to the vertex value (for which the aggregation is performed), it is advised to use the more efficient `EdgesFunction` and `NeighborsFunction` for the user-defined functions. When access to the vertex value is required, one should use `EdgesFunctionWithVertexValue` and `NeighborsFunctionWithVertexValue` instead.
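+
+As a sketch of the vertex-value-free variant, the minimum-weight example from above could also be written with a plain `EdgesFunction` passed to `groupReduceOnEdges()`; the class name `SelectMinWeightNoValue` is ours:
+
+{% highlight java %}
+Graph<Long, Long, Double> graph = ...
+
+DataSet<Tuple2<Long, Double>> minWeights =
+		graph.groupReduceOnEdges(new SelectMinWeightNoValue(), EdgeDirection.OUT);
+
+// user-defined function that never accesses the vertex value
+static final class SelectMinWeightNoValue implements EdgesFunction<Long, Double, Tuple2<Long, Double>> {
+
+	@Override
+	public void iterateEdges(Iterable<Tuple2<Long, Edge<Long, Double>>> edges,
+			Collector<Tuple2<Long, Double>> out) {
+		long vertexId = -1;
+		double minWeight = Double.MAX_VALUE;
+		// each element pairs the vertex ID with one of its out-edges
+		for (Tuple2<Long, Edge<Long, Double>> edge : edges) {
+			vertexId = edge.f0;
+			minWeight = Math.min(minWeight, edge.f1.getValue());
+		}
+		out.collect(new Tuple2<Long, Double>(vertexId, minWeight));
+	}
+}
+{% endhighlight %}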
+
+{% top %}
+
+Graph Validation
+-----------
+
+Gelly provides a simple utility for performing validation checks on input graphs. Depending on the application context, a graph may or may not be valid according to certain criteria. For example, a user might need to validate whether their graph contains duplicate edges or whether its structure is bipartite. In order to validate a graph, one can define a custom `GraphValidator` and implement its `validate()` method (a sketch of a custom validator follows the example below). `InvalidVertexIdsValidator` is Gelly's pre-defined validator. It checks that the edge set contains valid vertex IDs, i.e. that all source and target vertex IDs of the edges
+also exist in the vertex ID set.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// create a list of vertices with IDs = {1, 2, 3, 4, 5}
+List<Vertex<Long, Long>> vertices = ...
+
+// create a list of edges with IDs = {(1, 2), (1, 3), (2, 4), (5, 6)}
+List<Edge<Long, Long>> edges = ...
+
+Graph<Long, Long, Long> graph = Graph.fromCollection(vertices, edges, env);
+
+// will return false: 6 is an invalid ID
+graph.validate(new InvalidVertexIdsValidator<Long, Long, Long>());
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+// create a list of vertices with IDs = {1, 2, 3, 4, 5}
+val vertices: List[Vertex[Long, Long]] = ...
+
+// create a list of edges with IDs = {(1, 2), (1, 3), (2, 4), (5, 6)}
+val edges: List[Edge[Long, Long]] = ...
+
+val graph = Graph.fromCollection(vertices, edges, env)
+
+// will return false: 6 is an invalid ID
+graph.validate(new InvalidVertexIdsValidator[Long, Long, Long])
+
+{% endhighlight %}
+</div>
+</div>
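+
+As a sketch of a custom validator, the following Java class rejects graphs that contain self-loops. The class name `NoSelfLoopsValidator` is ours, and we assume the `GraphValidator` base class from the `org.apache.flink.graph.validation` package:
+
+{% highlight java %}
+// checks that the graph contains no self-loops
+public class NoSelfLoopsValidator<K, VV, EV> extends GraphValidator<K, VV, EV> {
+
+	@Override
+	public boolean validate(Graph<K, VV, EV> graph) throws Exception {
+		// count edges whose source and target are the same vertex
+		long selfLoops = graph.getEdges()
+				.filter(new FilterFunction<Edge<K, EV>>() {
+					public boolean filter(Edge<K, EV> edge) {
+						return edge.getSource().equals(edge.getTarget());
+					}
+				})
+				.count();
+		return selfLoops == 0;
+	}
+}
+{% endhighlight %}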
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/gelly/graph_generators.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/gelly/graph_generators.md b/docs/dev/libs/gelly/graph_generators.md
new file mode 100644
index 0000000..5598d83
--- /dev/null
+++ b/docs/dev/libs/gelly/graph_generators.md
@@ -0,0 +1,654 @@
+---
+title: Graph Generators
+nav-parent_id: graphs
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+Gelly provides a collection of scalable graph generators. Each generator is
+
+* parallelizable, in order to create large datasets
+* scale-free, generating the same graph regardless of parallelism
+* thrifty, using as few operators as possible
+
+Graph generators are configured using the builder pattern. The parallelism of generator
+operators can be set explicitly by calling `setParallelism(parallelism)`. Lowering the
+parallelism will reduce the allocation of memory and network buffers.
+
+Graph-specific configuration must be called first, then configuration common to all
+generators, and lastly the call to `generate()`. The following example configures a
+grid graph with two dimensions, configures the parallelism, and generates the graph.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+boolean wrapEndpoints = false;
+
+int parallelism = 4;
+
+Graph<LongValue,NullValue,NullValue> graph = new GridGraph(env)
+    .addDimension(2, wrapEndpoints)
+    .addDimension(4, wrapEndpoints)
+    .setParallelism(parallelism)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.GridGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val wrapEndpoints = false
+
+val parallelism = 4
+
+val graph = new GridGraph(env.getJavaEnv)
+    .addDimension(2, wrapEndpoints)
+    .addDimension(4, wrapEndpoints)
+    .setParallelism(parallelism)
+    .generate()
+{% endhighlight %}
+</div>
+</div>
+
+## Complete Graph
+
+An undirected graph connecting every distinct pair of vertices.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+long vertexCount = 5;
+
+Graph<LongValue,NullValue,NullValue> graph = new CompleteGraph(env, vertexCount)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.CompleteGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val vertexCount = 5
+
+val graph = new CompleteGraph(env.getJavaEnv, vertexCount).generate()
+{% endhighlight %}
+</div>
+</div>
+
+<svg class="graph" width="540" height="540"
+    xmlns="http://www.w3.org/2000/svg"
+    xmlns:xlink="http://www.w3.org/1999/xlink">
+
+    <line x1="270" y1="40" x2="489" y2="199" />
+    <line x1="270" y1="40" x2="405" y2="456" />
+    <line x1="270" y1="40" x2="135" y2="456" />
+    <line x1="270" y1="40" x2="51" y2="199" />
+
+    <line x1="489" y1="199" x2="405" y2="456" />
+    <line x1="489" y1="199" x2="135" y2="456" />
+    <line x1="489" y1="199" x2="51" y2="199" />
+
+    <line x1="405" y1="456" x2="135" y2="456" />
+    <line x1="405" y1="456" x2="51" y2="199" />
+
+    <line x1="135" y1="456" x2="51" y2="199" />
+
+    <circle cx="270" cy="40" r="20" />
+    <text x="270" y="40">0</text>
+
+    <circle cx="489" cy="199" r="20" />
+    <text x="489" y="199">1</text>
+
+    <circle cx="405" cy="456" r="20" />
+    <text x="405" y="456">2</text>
+
+    <circle cx="135" cy="456" r="20" />
+    <text x="135" y="456">3</text>
+
+    <circle cx="51" cy="199" r="20" />
+    <text x="51" y="199">4</text>
+</svg>
+
+## Cycle Graph
+
+An undirected graph where all edges form a single cycle.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+long vertexCount = 5;
+
+Graph<LongValue,NullValue,NullValue> graph = new CycleGraph(env, vertexCount)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.CycleGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val vertexCount = 5
+
+val graph = new CycleGraph(env.getJavaEnv, vertexCount).generate()
+{% endhighlight %}
+</div>
+</div>
+
+<svg class="graph" width="540" height="540"
+    xmlns="http://www.w3.org/2000/svg"
+    xmlns:xlink="http://www.w3.org/1999/xlink">
+
+    <line x1="270" y1="40" x2="489" y2="199" />
+    <line x1="489" y1="199" x2="405" y2="456" />
+    <line x1="405" y1="456" x2="135" y2="456" />
+    <line x1="135" y1="456" x2="51" y2="199" />
+    <line x1="51" y1="199" x2="270" y2="40" />
+
+    <circle cx="270" cy="40" r="20" />
+    <text x="270" y="40">0</text>
+
+    <circle cx="489" cy="199" r="20" />
+    <text x="489" y="199">1</text>
+
+    <circle cx="405" cy="456" r="20" />
+    <text x="405" y="456">2</text>
+
+    <circle cx="135" cy="456" r="20" />
+    <text x="135" y="456">3</text>
+
+    <circle cx="51" cy="199" r="20" />
+    <text x="51" y="199">4</text>
+</svg>
+
+## Empty Graph
+
+The graph containing no edges.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+long vertexCount = 5;
+
+Graph<LongValue,NullValue,NullValue> graph = new EmptyGraph(env, vertexCount)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.EmptyGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val vertexCount = 5
+
+val graph = new EmptyGraph(env.getJavaEnv, vertexCount).generate()
+{% endhighlight %}
+</div>
+</div>
+
+<svg class="graph" width="540" height="80"
+    xmlns="http://www.w3.org/2000/svg"
+    xmlns:xlink="http://www.w3.org/1999/xlink">
+
+    <circle cx="30" cy="40" r="20" />
+    <text x="30" y="40">0</text>
+
+    <circle cx="150" cy="40" r="20" />
+    <text x="150" y="40">1</text>
+
+    <circle cx="270" cy="40" r="20" />
+    <text x="270" y="40">2</text>
+
+    <circle cx="390" cy="40" r="20" />
+    <text x="390" y="40">3</text>
+
+    <circle cx="510" cy="40" r="20" />
+    <text x="510" y="40">4</text>
+</svg>
+
+## Grid Graph
+
+An undirected graph connecting vertices in a regular tiling in one or more dimensions.
+Each dimension is configured separately. When the size of a dimension is at least three,
+the endpoints can optionally be connected by setting `wrapEndpoints`. Changing the
+following example to `addDimension(4, true)` would connect `0` to `3` and `4` to `7`.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+boolean wrapEndpoints = false;
+
+Graph<LongValue,NullValue,NullValue> graph = new GridGraph(env)
+    .addDimension(2, wrapEndpoints)
+    .addDimension(4, wrapEndpoints)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.GridGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val wrapEndpoints = false
+
+val graph = new GridGraph(env.getJavaEnv)
+    .addDimension(2, wrapEndpoints)
+    .addDimension(4, wrapEndpoints)
+    .generate()
+{% endhighlight %}
+</div>
+</div>
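+
+For comparison, a minimal sketch of the wrapped variant described above (a dimension of
+size two is already fully connected, so only the second dimension changes):
+
+{% highlight java %}
+// same 2 x 4 grid, but with the endpoints of the second dimension wrapped,
+// which adds the edges 0-3 and 4-7
+Graph<LongValue,NullValue,NullValue> wrapped = new GridGraph(env)
+    .addDimension(2, false)
+    .addDimension(4, true)
+    .generate();
+{% endhighlight %}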
+
+<svg class="graph" width="540" height="200"
+    xmlns="http://www.w3.org/2000/svg"
+    xmlns:xlink="http://www.w3.org/1999/xlink">
+
+    <line x1="30" y1="40" x2="510" y2="40" />
+    <line x1="30" y1="160" x2="510" y2="160" />
+
+    <line x1="30" y1="40" x2="30" y2="160" />
+    <line x1="190" y1="40" x2="190" y2="160" />
+    <line x1="350" y1="40" x2="350" y2="160" />
+    <line x1="510" y1="40" x2="510" y2="160" />
+
+    <circle cx="30" cy="40" r="20" />
+    <text x="30" y="40">0</text>
+
+    <circle cx="190" cy="40" r="20" />
+    <text x="190" y="40">1</text>
+
+    <circle cx="350" cy="40" r="20" />
+    <text x="350" y="40">2</text>
+
+    <circle cx="510" cy="40" r="20" />
+    <text x="510" y="40">3</text>
+
+    <circle cx="30" cy="160" r="20" />
+    <text x="30" y="160">4</text>
+
+    <circle cx="190" cy="160" r="20" />
+    <text x="190" y="160">5</text>
+
+    <circle cx="350" cy="160" r="20" />
+    <text x="350" y="160">6</text>
+
+    <circle cx="510" cy="160" r="20" />
+    <text x="510" y="160">7</text>
+</svg>
+
+## Hypercube Graph
+
+An undirected graph where edges form an n-dimensional hypercube. Each vertex
+in a hypercube connects to one other vertex in each dimension; two vertices
+are adjacent exactly when their binary labels differ in a single bit.
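+
+To make the adjacency rule concrete, here is a small illustrative snippet (plain
+Java, not part of the Gelly API) that enumerates the neighbors of a vertex by
+flipping one bit per dimension:
+
+{% highlight java %}
+// in a hypercube of the given dimensionality, the neighbors of a vertex are
+// exactly the labels that differ from it in one bit
+long dimensions = 3;
+long vertex = 5; // binary 101
+for (int d = 0; d < dimensions; d++) {
+    System.out.println(vertex ^ (1L << d)); // prints 4, 7, 1: the neighbors of 5 below
+}
+{% endhighlight %}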
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+long dimensions = 3;
+
+Graph<LongValue,NullValue,NullValue> graph = new HypercubeGraph(env, dimensions)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.HypercubeGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val dimensions = 3
+
+val graph = new HypercubeGraph(env.getJavaEnv, dimensions).generate()
+{% endhighlight %}
+</div>
+</div>
+
+<svg class="graph" width="540" height="320"
+    xmlns="http://www.w3.org/2000/svg"
+    xmlns:xlink="http://www.w3.org/1999/xlink">
+
+    <line x1="190" y1="120" x2="350" y2="120" />
+    <line x1="190" y1="200" x2="350" y2="200" />
+    <line x1="190" y1="120" x2="190" y2="200" />
+    <line x1="350" y1="120" x2="350" y2="200" />
+
+    <line x1="30" y1="40" x2="510" y2="40" />
+    <line x1="30" y1="280" x2="510" y2="280" />
+    <line x1="30" y1="40" x2="30" y2="280" />
+    <line x1="510" y1="40" x2="510" y2="280" />
+
+    <line x1="190" y1="120" x2="30" y2="40" />
+    <line x1="350" y1="120" x2="510" y2="40" />
+    <line x1="190" y1="200" x2="30" y2="280" />
+    <line x1="350" y1="200" x2="510" y2="280" />
+
+    <circle cx="190" cy="120" r="20" />
+    <text x="190" y="120">0</text>
+
+    <circle cx="350" cy="120" r="20" />
+    <text x="350" y="120">1</text>
+
+    <circle cx="190" cy="200" r="20" />
+    <text x="190" y="200">2</text>
+
+    <circle cx="350" cy="200" r="20" />
+    <text x="350" y="200">3</text>
+
+    <circle cx="30" cy="40" r="20" />
+    <text x="30" y="40">4</text>
+
+    <circle cx="510" cy="40" r="20" />
+    <text x="510" y="40">5</text>
+
+    <circle cx="30" cy="280" r="20" />
+    <text x="30" y="280">6</text>
+
+    <circle cx="510" cy="280" r="20" />
+    <text x="510" y="280">7</text>
+</svg>
+
+## Path Graph
+
+An undirected graph where all edges form a single path.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+long vertexCount = 5;
+
+Graph<LongValue,NullValue,NullValue> graph = new PathGraph(env, vertexCount)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.PathGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val vertexCount = 5
+
+val graph = new PathGraph(env.getJavaEnv, vertexCount).generate()
+{% endhighlight %}
+</div>
+</div>
+
+<svg class="graph" width="540" height="80"
+    xmlns="http://www.w3.org/2000/svg"
+    xmlns:xlink="http://www.w3.org/1999/xlink">
+
+    <line x1="30" y1="40" x2="510" y2="40" />
+
+    <circle cx="30" cy="40" r="20" />
+    <text x="30" y="40">0</text>
+
+    <circle cx="150" cy="40" r="20" />
+    <text x="150" y="40">1</text>
+
+    <circle cx="270" cy="40" r="20" />
+    <text x="270" y="40">2</text>
+
+    <circle cx="390" cy="40" r="20" />
+    <text x="390" y="40">3</text>
+
+    <circle cx="510" cy="40" r="20" />
+    <text x="510" y="40">4</text>
+</svg>
+
+## RMat Graph
+
+A directed or undirected power-law graph generated using the
+[Recursive Matrix (R-Mat)](http://www.cs.cmu.edu/~christos/PUBLICATIONS/siam04.pdf) model.
+
+RMat is a stochastic generator configured with a source of randomness implementing the
+`RandomGenerableFactory` interface. Provided implementations are `JDKRandomGeneratorFactory`
+and `MersenneTwisterFactory`. These generate an initial sequence of random values which are
+then used as seeds for generating the edges.
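+
+For example, the Mersenne Twister source can be selected by swapping only the factory
+line used in the examples below (a sketch; the commons-math3 `MersenneTwister` type
+parameter mirrors the `JDKRandomGenerator` parameter of the JDK factory):
+
+{% highlight java %}
+// sketch: use the Mersenne Twister implementation instead of the JDK one;
+// the rest of the RMat setup is unchanged
+RandomGenerableFactory<MersenneTwister> rnd = new MersenneTwisterFactory();
+{% endhighlight %}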
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+RandomGenerableFactory<JDKRandomGenerator> rnd = new JDKRandomGeneratorFactory();
+
+int vertexCount = 1 << scale;
+int edgeCount = edgeFactor * vertexCount;
+
+Graph<LongValue,NullValue,NullValue> graph = new RMatGraph<>(env, rnd, vertexCount, edgeCount)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.RMatGraph
+import org.apache.flink.graph.generator.random.JDKRandomGeneratorFactory
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val rnd = new JDKRandomGeneratorFactory()
+
+val vertexCount = 1 << scale
+val edgeCount = edgeFactor * vertexCount
+
+val graph = new RMatGraph(env.getJavaEnv, rnd, vertexCount, edgeCount).generate()
+{% endhighlight %}
+</div>
+</div>
+
+The default RMat constants can be overridden as shown in the following example.
+The constants define the interdependence of bits from each generated edge's source
+and target labels. RMat noise can be enabled to progressively perturb the
+constants while generating each edge.
+
+The RMat generator can be configured to produce a simple graph by removing self-loops
+and duplicate edges. Symmetrization is performed either by "clip-and-flip", which throws
+away the half matrix above the diagonal, or by a full "flip", which preserves and mirrors all edges.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+RandomGenerableFactory<JDKRandomGenerator> rnd = new JDKRandomGeneratorFactory();
+
+int vertexCount = 1 << scale;
+int edgeCount = edgeFactor * vertexCount;
+
+boolean clipAndFlip = false;
+
+Graph<LongValue,NullValue,NullValue> graph = new RMatGraph<>(env, rnd, vertexCount, edgeCount)
+    .setConstants(0.57f, 0.19f, 0.19f)
+    .setNoise(true, 0.10f)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.RMatGraph
+import org.apache.flink.graph.generator.random.JDKRandomGeneratorFactory
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val rnd = new JDKRandomGeneratorFactory()
+
+val vertexCount = 1 << scale
+val edgeCount = edgeFactor * vertexCount
+
+val clipAndFlip = false
+
+val graph = new RMatGraph(env.getJavaEnv, rnd, vertexCount, edgeCount)
+    .setConstants(0.57f, 0.19f, 0.19f)
+    .setNoise(true, 0.10f)
+    .generate()
+{% endhighlight %}
+</div>
+</div>
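+
+The examples above declare `clipAndFlip` but do not apply it. As a hedged sketch only:
+assuming this version provides the undirected `Simplify` transform (in the
+`org.apache.flink.graph.asm.simple.undirected` package, with a constructor flag selecting
+clip-and-flip), the symmetrization described above could be applied as follows:
+
+{% highlight java %}
+// sketch under the stated assumption: remove self-loops and duplicate edges,
+// symmetrizing by clip-and-flip when the flag is true
+Graph<LongValue,NullValue,NullValue> simpleGraph = graph
+    .run(new org.apache.flink.graph.asm.simple.undirected.Simplify<>(clipAndFlip));
+{% endhighlight %}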
+
+## Singleton Edge Graph
+
+An undirected graph containing isolated edges, each forming a two-vertex path, so every vertex has degree one.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+long vertexPairCount = 4;
+
+// note: configured with the number of vertex pairs
+Graph<LongValue,NullValue,NullValue> graph = new SingletonEdgeGraph(env, vertexPairCount)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.SingletonEdgeGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val vertexPairCount = 4
+
+// note: configured with the number of vertex pairs
+val graph = new SingletonEdgeGraph(env.getJavaEnv, vertexPairCount).generate()
+{% endhighlight %}
+</div>
+</div>
+
+<svg class="graph" width="540" height="200"
+    xmlns="http://www.w3.org/2000/svg"
+    xmlns:xlink="http://www.w3.org/1999/xlink">
+
+    <line x1="30" y1="40" x2="190" y2="40" />
+    <line x1="350" y1="40" x2="510" y2="40" />
+    <line x1="30" y1="160" x2="190" y2="160" />
+    <line x1="350" y1="160" x2="510" y2="160" />
+
+    <circle cx="30" cy="40" r="20" />
+    <text x="30" y="40">0</text>
+
+    <circle cx="190" cy="40" r="20" />
+    <text x="190" y="40">1</text>
+
+    <circle cx="350" cy="40" r="20" />
+    <text x="350" y="40">2</text>
+
+    <circle cx="510" cy="40" r="20" />
+    <text x="510" y="40">3</text>
+
+    <circle cx="30" cy="160" r="20" />
+    <text x="30" y="160">4</text>
+
+    <circle cx="190" cy="160" r="20" />
+    <text x="190" y="160">5</text>
+
+    <circle cx="350" cy="160" r="20" />
+    <text x="350" y="160">6</text>
+
+    <circle cx="510" cy="160" r="20" />
+    <text x="510" y="160">7</text>
+</svg>
+
+## Star Graph
+
+An undirected graph containing a single central vertex connected to all other leaf vertices.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+long vertexCount = 6;
+
+Graph<LongValue,NullValue,NullValue> graph = new StarGraph(env, vertexCount)
+    .generate();
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.flink.api.scala._
+import org.apache.flink.graph.generator.StarGraph
+
+val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
+
+val vertexCount = 6
+
+val graph = new StarGraph(env.getJavaEnv, vertexCount).generate()
+{% endhighlight %}
+</div>
+</div>
+
+<svg class="graph" width="540" height="540"
+    xmlns="http://www.w3.org/2000/svg"
+    xmlns:xlink="http://www.w3.org/1999/xlink">
+
+    <line x1="270" y1="270" x2="270" y2="40" />
+    <line x1="270" y1="270" x2="489" y2="199" />
+    <line x1="270" y1="270" x2="405" y2="456" />
+    <line x1="270" y1="270" x2="135" y2="456" />
+    <line x1="270" y1="270" x2="51" y2="199" />
+
+    <circle cx="270" cy="270" r="20" />
+    <text x="270" y="270">0</text>
+
+    <circle cx="270" cy="40" r="20" />
+    <text x="270" y="40">1</text>
+
+    <circle cx="489" cy="199" r="20" />
+    <text x="489" y="199">2</text>
+
+    <circle cx="405" cy="456" r="20" />
+    <text x="405" y="456">3</text>
+
+    <circle cx="135" cy="456" r="20" />
+    <text x="135" y="456">4</text>
+
+    <circle cx="51" cy="199" r="20" />
+    <text x="51" y="199">5</text>
+</svg>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/dev/libs/gelly/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/gelly/index.md b/docs/dev/libs/gelly/index.md
new file mode 100644
index 0000000..2eeec2c
--- /dev/null
+++ b/docs/dev/libs/gelly/index.md
@@ -0,0 +1,69 @@
+---
+title: "Gelly: Flink Graph API"
+nav-id: graphs
+nav-show_overview: true
+nav-title: "Graphs: Gelly"
+nav-parent_id: libs
+nav-pos: 3
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Gelly is a Graph API for Flink. It contains a set of methods and utilities which aim to simplify the development of graph analysis applications in Flink. In Gelly, graphs can be created, transformed and modified using high-level functions similar to the ones provided by the batch processing API. Gelly also provides a library of graph algorithms.
+
+{:#markdown-toc}
+* [Graph API](graph_api.html)
+* [Iterative Graph Processing](iterative_graph_processing.html)
+* [Library Methods](library_methods.html)
+* [Graph Algorithms](graph_algorithms.html)
+* [Graph Generators](graph_generators.html)
+
+Using Gelly
+-----------
+
+Gelly is currently part of the *libraries* Maven project. All relevant classes are located in the *org.apache.flink.graph* package.
+
+Add the following dependency to your `pom.xml` to use Gelly.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight xml %}
+<dependency>
+    <groupId>org.apache.flink</groupId>
+    <artifactId>flink-gelly{{ site.scala_version_suffix }}</artifactId>
+    <version>{{site.version}}</version>
+</dependency>
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight xml %}
+<dependency>
+    <groupId>org.apache.flink</groupId>
+    <artifactId>flink-gelly-scala{{ site.scala_version_suffix }}</artifactId>
+    <version>{{site.version}}</version>
+</dependency>
+{% endhighlight %}
+</div>
+</div>
+
+Note that Gelly is currently not part of the binary distribution. See [here]({{ site.baseurl }}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution) for how to link against it for cluster execution.
+
+The remaining sections provide a description of available methods and present several examples of how to use Gelly and how to mix it with the Flink DataSet API. After reading this guide, you might also want to check the {% gh_link /flink-libraries/flink-gelly-examples/ "Gelly examples" %}.
+
+{% top %}


[81/89] [abbrv] flink git commit: [FLINK-4403] [rpc] Use relative classloader for proxies, rather than system class loader.

Posted by se...@apache.org.
[FLINK-4403] [rpc] Use relative classloader for proxies, rather than system class loader.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/4501ca16
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/4501ca16
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/4501ca16

Branch: refs/heads/flip-6
Commit: 4501ca1607aeb67c6df3712a0fc38f2efede9b29
Parents: 7db2788
Author: Stephan Ewen <se...@apache.org>
Authored: Tue Aug 16 21:11:01 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:04 2016 +0200

----------------------------------------------------------------------
 .../org/apache/flink/runtime/rpc/akka/AkkaRpcService.java     | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/4501ca16/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
index b647bbd..d987c2f 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -104,9 +104,14 @@ public class AkkaRpcService implements RpcService {
 
 				InvocationHandler akkaInvocationHandler = new AkkaInvocationHandler(actorRef, timeout, maximumFramesize);
 
+				// Rather than using the system ClassLoader directly, we derive the ClassLoader
+				// from this class. That works better in cases where Flink runs embedded and all Flink
+				// code is loaded dynamically (for example from an OSGi bundle) through a custom ClassLoader.
+				ClassLoader classLoader = AkkaRpcService.this.getClass().getClassLoader();
+				
 				@SuppressWarnings("unchecked")
 				C proxy = (C) Proxy.newProxyInstance(
-					ClassLoader.getSystemClassLoader(),
+					classLoader,
 					new Class<?>[] {clazz},
 					akkaInvocationHandler);
 


[79/89] [abbrv] flink git commit: [FLINK-4373] [cluster management] Introduce SlotID, AllocationID, ResourceProfile

Posted by se...@apache.org.
[FLINK-4373] [cluster management] Introduce SlotID, AllocationID, ResourceProfile

[FLINK-4373] [cluster management] address comments

This closes #2370.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/baf4a616
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/baf4a616
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/baf4a616

Branch: refs/heads/flip-6
Commit: baf4a616905e4ba15974511abc39993dda307f2b
Parents: 946ea09
Author: Kurt Young <yk...@gmail.com>
Authored: Fri Aug 12 11:05:48 2016 +0800
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:03 2016 +0200

----------------------------------------------------------------------
 .../clusterframework/types/AllocationID.java    | 32 ++++++++
 .../clusterframework/types/ResourceProfile.java | 68 ++++++++++++++++
 .../runtime/clusterframework/types/SlotID.java  | 83 ++++++++++++++++++++
 .../types/ResourceProfileTest.java              | 49 ++++++++++++
 4 files changed, 232 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/baf4a616/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/AllocationID.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/AllocationID.java b/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/AllocationID.java
new file mode 100644
index 0000000..f7ae6ee
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/AllocationID.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.clusterframework.types;
+
+import org.apache.flink.util.AbstractID;
+
+/**
+ * Unique identifier for the attempt to allocate a slot, normally created by the JobManager when requesting a slot
+ * and constant across retries. It can also be used to identify responses by the ResourceManager and to identify
+ * deployment calls towards the TaskManager from which the slot was allocated.
+ */
+public class AllocationID extends AbstractID {
+
+	private static final long serialVersionUID = 1L;
+
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/baf4a616/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/ResourceProfile.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/ResourceProfile.java b/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/ResourceProfile.java
new file mode 100644
index 0000000..cbe709f
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/ResourceProfile.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.clusterframework.types;
+
+import java.io.Serializable;
+
+/**
+ * Describes the resource profile of a slot, either when requiring or offering it. A profile can be
+ * checked against another profile's requirement, and a matching score may furthermore be calculated
+ * to decide which profile to choose when there are many candidate slots.
+ */
+public class ResourceProfile implements Serializable {
+
+	private static final long serialVersionUID = -784900073893060124L;
+
+	/** How many CPU cores are needed; a double allows fractional values such as 0.1 */
+	private final double cpuCores;
+
+	/** How much memory in MB is needed */
+	private final long memoryInMB;
+
+	public ResourceProfile(double cpuCores, long memoryInMB) {
+		this.cpuCores = cpuCores;
+		this.memoryInMB = memoryInMB;
+	}
+
+	/**
+	 * Get the number of CPU cores needed
+	 * @return The CPU cores, where 1.0 means a full CPU thread
+	 */
+	public double getCpuCores() {
+		return cpuCores;
+	}
+
+	/**
+	 * Get the memory needed in MB
+	 * @return The memory in MB
+	 */
+	public long getMemoryInMB() {
+		return memoryInMB;
+	}
+
+	/**
+	 * Check whether the given required resource profile can be matched by this profile
+	 *
+	 * @param required the required resource profile
+	 * @return true if the requirement is matched, otherwise false
+	 */
+	public boolean isMatching(ResourceProfile required) {
+		return Double.compare(cpuCores, required.getCpuCores()) >= 0 && memoryInMB >= required.getMemoryInMB();
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/baf4a616/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/SlotID.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/SlotID.java b/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/SlotID.java
new file mode 100644
index 0000000..d1b072d
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/SlotID.java
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.clusterframework.types;
+
+import java.io.Serializable;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Unique identifier for a slot located in a TaskManager.
+ */
+public class SlotID implements ResourceIDRetrievable, Serializable {
+
+	private static final long serialVersionUID = -6399206032549807771L;
+
+	/** The resource id of the TaskManager in which this slot is located */
+	private final ResourceID resourceId;
+
+	/** The numeric id of the slot within the TaskManager */
+	private final int slotId;
+
+	public SlotID(ResourceID resourceId, int slotId) {
+		this.resourceId = checkNotNull(resourceId, "ResourceID must not be null");
+		this.slotId = slotId;
+	}
+
+	// ------------------------------------------------------------------------
+
+	@Override
+	public ResourceID getResourceID() {
+		return resourceId;
+	}
+
+	// ------------------------------------------------------------------------
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || getClass() != o.getClass()) {
+			return false;
+		}
+
+		SlotID slotID = (SlotID) o;
+
+		if (slotId != slotID.slotId) {
+			return false;
+		}
+		return resourceId.equals(slotID.resourceId);
+	}
+
+	@Override
+	public int hashCode() {
+		int result = resourceId.hashCode();
+		result = 31 * result + slotId;
+		return result;
+	}
+
+	@Override
+	public String toString() {
+		return "SlotID{" +
+			"resourceId=" + resourceId +
+			", slotId=" + slotId +
+			'}';
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/baf4a616/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/ResourceProfileTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/ResourceProfileTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/ResourceProfileTest.java
new file mode 100644
index 0000000..bc5ddaa
--- /dev/null
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/ResourceProfileTest.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.clusterframework.types;
+
+import org.junit.Test;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class ResourceProfileTest {
+
+	@Test
+	public void testMatchRequirement() throws Exception {
+		ResourceProfile rp1 = new ResourceProfile(1.0, 100);
+		ResourceProfile rp2 = new ResourceProfile(1.0, 200);
+		ResourceProfile rp3 = new ResourceProfile(2.0, 100);
+		ResourceProfile rp4 = new ResourceProfile(2.0, 200);
+
+		assertFalse(rp1.isMatching(rp2));
+		assertTrue(rp2.isMatching(rp1));
+
+		assertFalse(rp1.isMatching(rp3));
+		assertTrue(rp3.isMatching(rp1));
+
+		assertFalse(rp2.isMatching(rp3));
+		assertFalse(rp3.isMatching(rp2));
+
+		assertTrue(rp4.isMatching(rp1));
+		assertTrue(rp4.isMatching(rp2));
+		assertTrue(rp4.isMatching(rp3));
+		assertTrue(rp4.isMatching(rp4));
+	}
+}


[37/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/libs/cep.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/libs/cep.md b/docs/apis/streaming/libs/cep.md
deleted file mode 100644
index ef35d32..0000000
--- a/docs/apis/streaming/libs/cep.md
+++ /dev/null
@@ -1,659 +0,0 @@
----
-title: "FlinkCEP - Complex event processing for Flink"
-# Top navigation
-top-nav-group: libs
-top-nav-pos: 2
-top-nav-title: CEP
-# Sub navigation
-sub-nav-group: streaming
-sub-nav-id: cep
-sub-nav-pos: 1
-sub-nav-parent: libs
-sub-nav-title: Event Processing (CEP)
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-FlinkCEP is the complex event processing library for Flink.
-It allows you to easily detect complex event patterns in a stream of endless data.
-Complex events can then be constructed from matching sequences.
-This gives you the opportunity to quickly get hold of what's really important in your data.
-
-<span class="label label-danger">Attention</span> The events in the `DataStream` to which
-you want to apply pattern matching have to implement proper `equals()` and `hashCode()` methods
-because these are used for comparing and matching events.
-
-* This will be replaced by the TOC
-{:toc}
-
-## Getting Started
-
-If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/apis/common/index.html#linking-with-flink).
-Next, you have to add the FlinkCEP dependency to the `pom.xml` of your project.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-cep{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-cep-scala{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-</div>
-</div>
-
-Note that FlinkCEP is currently not part of the binary distribution.
-See linking with it for cluster execution [here]({{site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
-
-Now you can start writing your first CEP program using the pattern API.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Event> input = ...
-
-Pattern<Event, ?> pattern = Pattern.begin("start").where(evt -> evt.getId() == 42)
-    .next("middle").subtype(SubEvent.class).where(subEvt -> subEvt.getVolume() >= 10.0)
-    .followedBy("end").where(evt -> evt.getName().equals("end"));
-
-PatternStream<Event> patternStream = CEP.pattern(input, pattern);
-
-DataStream<Alert> result = patternStream.select(pattern -> {
-    return createAlertFrom(pattern);
-});
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[Event] = ...
-
-val pattern = Pattern.begin("start").where(_.getId == 42)
-  .next("middle").subtype(classOf[SubEvent]).where(_.getVolume >= 10.0)
-  .followedBy("end").where(_.getName == "end")
-
-val patternStream = CEP.pattern(input, pattern)
-
-val result: DataStream[Alert] = patternStream.select(createAlert(_))
-{% endhighlight %}
-</div>
-</div>
-
-Note that we use Java 8 lambdas in our Java code examples to make them more succinct.
-
-## The Pattern API
-
-The pattern API allows you to quickly define complex event patterns.
-
-Each pattern consists of multiple stages or what we call states.
-In order to go from one state to the next, the user can specify conditions.
-These conditions can be the contiguity of events or a filter condition on an event.
-
-Each pattern has to start with an initial state:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Pattern<Event, ?> start = Pattern.<Event>begin("start");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val start : Pattern[Event, _] = Pattern.begin("start")
-{% endhighlight %}
-</div>
-</div>
-
-Each state must have a unique name to identify the matched events later on.
-Additionally, we can specify a filter condition for the event to be accepted as the start event via the `where` method.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-start.where(new FilterFunction<Event>() {
-    @Override
-    public boolean filter(Event value) {
-        return ... // some condition
-    }
-});
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-start.where(event => ... /* some condition */)
-{% endhighlight %}
-</div>
-</div>
-
-We can also restrict the type of the accepted event to some subtype of the initial event type (here `Event`) via the `subtype` method.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-start.subtype(SubEvent.class).where(new FilterFunction<SubEvent>() {
-    @Override
-    public boolean filter(SubEvent value) {
-        return ... // some condition
-    }
-});
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-start.subtype(classOf[SubEvent]).where(subEvent => ... /* some condition */)
-{% endhighlight %}
-</div>
-</div>
-
-As can be seen here, the subtype condition can also be combined with an additional filter condition on the subtype.
-In fact you can always provide multiple conditions by calling `where` and `subtype` multiple times.
-These conditions will then be combined using the logical AND operator.
-
-In order to construct OR conditions, one has to call the `or` method with a respective filter function.
-Any existing filter function is then ORed with the given one.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-pattern.where(new FilterFunction<Event>() {
-    @Override
-    public boolean filter(Event value) {
-        return ... // some condition
-    }
-}).or(new FilterFunction<Event>() {
-    @Override
-    public boolean filter(Event value) {
-        return ... // or condition
-    }
-});
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-pattern.where(event => ... /* some condition */).or(event => ... /* or condition */)
-{% endhighlight %}
-</div>
-</div>
-
-Next, we can append further states to detect complex patterns.
-We can control the contiguity of two succeeding events to be accepted by the pattern.
-
-Strict contiguity means that two matching events have to succeed directly.
-This means that no other events can occur in between.
-A strict contiguity pattern state can be created via the `next` method.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Pattern<Event, ?> strictNext = start.next("middle");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val strictNext: Pattern[Event, _] = start.next("middle")
-{% endhighlight %}
-</div>
-</div>
-
-Non-strict contiguity means that other events are allowed to occur in-between two matching events.
-A non-strict contiguity pattern state can be created via the `followedBy` method.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-Pattern<Event, ?> nonStrictNext = start.followedBy("middle");
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val nonStrictNext : Pattern[Event, _] = start.followedBy("middle")
-{% endhighlight %}
-</div>
-</div>
-It is also possible to define a temporal constraint for the pattern to be valid.
-For example, one can define that a pattern should occur within 10 seconds via the `within` method.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-next.within(Time.seconds(10));
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-next.within(Time.seconds(10))
-{% endhighlight %}
-</div>
-</div>
-
-<br />
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-<table class="table table-bordered">
-    <thead>
-        <tr>
-            <th class="text-left" style="width: 25%">Pattern Operation</th>
-            <th class="text-center">Description</th>
-        </tr>
-    </thead>
-    <tbody>
-        <tr>
-            <td><strong>Begin</strong></td>
-            <td>
-            <p>Defines a starting pattern state:</p>
-{% highlight java %}
-Pattern<Event, ?> start = Pattern.<Event>begin("start");
-{% endhighlight %}
-            </td>
-        </tr>
-        <tr>
-            <td><strong>Next</strong></td>
-            <td>
-                <p>Appends a new pattern state. A matching event has to directly succeed the previous matching event:</p>
-{% highlight java %}
-Pattern<Event, ?> next = start.next("next");
-{% endhighlight %}
-            </td>
-        </tr>
-        <tr>
-            <td><strong>FollowedBy</strong></td>
-            <td>
-                <p>Appends a new pattern state. Other events can occur between a matching event and the previous matching event:</p>
-{% highlight java %}
-Pattern<Event, ?> followedBy = start.followedBy("next");
-{% endhighlight %}
-            </td>
-        </tr>
-        <tr>
-            <td><strong>Where</strong></td>
-            <td>
-                <p>Defines a filter condition for the current pattern state. Only if an event passes the filter, it can match the state:</p>
-{% highlight java %}
-patternState.where(new FilterFunction<Event>() {
-    @Override
-    public boolean filter(Event value) throws Exception {
-        return ... // some condition
-    }
-});
-{% endhighlight %}
-            </td>
-        </tr>
-        <tr>
-            <td><strong>Or</strong></td>
-            <td>
-                <p>Adds a new filter condition which is ORed with an existing filter condition. Only if an event passes the filter condition, it can match the state:</p>
-{% highlight java %}
-patternState.where(new FilterFunction<Event>() {
-    @Override
-    public boolean filter(Event value) throws Exception {
-        return ... // some condition
-    }
-}).or(new FilterFunction<Event>() {
-    @Override
-    public boolean filter(Event value) throws Exception {
-        return ... // alternative condition
-    }
-});
-{% endhighlight %}
-                    </td>
-                </tr>
-       <tr>
-           <td><strong>Subtype</strong></td>
-           <td>
-               <p>Defines a subtype condition for the current pattern state. Only if an event is of this subtype, it can match the state:</p>
-{% highlight java %}
-patternState.subtype(SubEvent.class);
-{% endhighlight %}
-           </td>
-       </tr>
-       <tr>
-          <td><strong>Within</strong></td>
-          <td>
-              <p>Defines the maximum time interval for an event sequence to match the pattern. If a non-completed event sequence exceeds this time, it is discarded:</p>
-{% highlight java %}
-patternState.within(Time.seconds(10));
-{% endhighlight %}
-          </td>
-      </tr>
-  </tbody>
-</table>
-</div>
-
-<div data-lang="scala" markdown="1">
-<table class="table table-bordered">
-    <thead>
-        <tr>
-            <th class="text-left" style="width: 25%">Pattern Operation</th>
-            <th class="text-center">Description</th>
-        </tr>
-    </thead>
-    <tbody>
-        <tr>
-            <td><strong>Begin</strong></td>
-            <td>
-            <p>Defines a starting pattern state:</p>
-{% highlight scala %}
-val start = Pattern.begin[Event]("start")
-{% endhighlight %}
-            </td>
-        </tr>
-        <tr>
-            <td><strong>Next</strong></td>
-            <td>
-                <p>Appends a new pattern state. A matching event has to directly succeed the previous matching event:</p>
-{% highlight scala %}
-val next = start.next("middle")
-{% endhighlight %}
-            </td>
-        </tr>
-        <tr>
-            <td><strong>FollowedBy</strong></td>
-            <td>
-                <p>Appends a new pattern state. Other events can occur between a matching event and the previous matching event:</p>
-{% highlight scala %}
-val followedBy = start.followedBy("middle")
-{% endhighlight %}
-            </td>
-        </tr>
-        <tr>
-            <td><strong>Where</strong></td>
-            <td>
-                <p>Defines a filter condition for the current pattern state. Only if an event passes the filter, it can match the state:</p>
-{% highlight scala %}
-patternState.where(event => ... /* some condition */)
-{% endhighlight %}
-            </td>
-        </tr>
-        <tr>
-            <td><strong>Or</strong></td>
-            <td>
-                <p>Adds a new filter condition which is ORed with an existing filter condition. Only if an event passes the filter condition, it can match the state:</p>
-{% highlight scala %}
-patternState.where(event => ... /* some condition */)
-    .or(event => ... /* alternative condition */)
-{% endhighlight %}
-                    </td>
-                </tr>
-       <tr>
-           <td><strong>Subtype</strong></td>
-           <td>
-               <p>Defines a subtype condition for the current pattern state. Only if an event is of this subtype, it can match the state:</p>
-{% highlight scala %}
-patternState.subtype(classOf[SubEvent])
-{% endhighlight %}
-           </td>
-       </tr>
-       <tr>
-          <td><strong>Within</strong></td>
-          <td>
-              <p>Defines the maximum time interval for an event sequence to match the pattern. If a non-completed event sequence exceeds this time, it is discarded:</p>
-{% highlight scala %}
-patternState.within(Time.seconds(10))
-{% endhighlight %}
-          </td>
-      </tr>
-  </tbody>
-</table>
-</div>
-
-</div>
-
-### Detecting Patterns
-
-In order to run a stream of events against your pattern, you have to create a `PatternStream`.
-Given an input stream `input` and a pattern `pattern`, you create the `PatternStream` by calling
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Event> input = ...
-Pattern<Event, ?> pattern = ...
-
-PatternStream<Event> patternStream = CEP.pattern(input, pattern);
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input : DataStream[Event] = ...
-val pattern : Pattern[Event, _] = ...
-
-val patternStream: PatternStream[Event] = CEP.pattern(input, pattern)
-{% endhighlight %}
-</div>
-</div>
-
-### Selecting from Patterns
-Once you have obtained a `PatternStream` you can select from detected event sequences via the `select` or `flatSelect` methods.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-The `select` method requires a `PatternSelectFunction` implementation.
-A `PatternSelectFunction` has a `select` method which is called for each matching event sequence.
-It receives a map of string/event pairs of the matched events.
-The string is defined by the name of the state to which the event has been matched.
-The `select` method can return exactly one result.
-
-{% highlight java %}
-class MyPatternSelectFunction<IN, OUT> implements PatternSelectFunction<IN, OUT> {
-    @Override
-    public OUT select(Map<String, IN> pattern) {
-        IN startEvent = pattern.get("start");
-        IN endEvent = pattern.get("end");
-        return new OUT(startEvent, endEvent);
-    }
-}
-{% endhighlight %}
-
-A `PatternFlatSelectFunction` is similar to the `PatternSelectFunction`, with the only distinction that it can return an arbitrary number of results.
-In order to do this, the `select` method has an additional `Collector` parameter which is used for the element output.
-
-{% highlight java %}
-class MyPatternFlatSelectFunction<IN, OUT> implements PatternFlatSelectFunction<IN, OUT> {
-    @Override
-    public void select(Map<String, IN> pattern, Collector<OUT> collector) {
-        IN startEvent = pattern.get("start");
-        IN endEvent = pattern.get("end");
-
-        for (int i = 0; i < startEvent.getValue(); i++ ) {
-            collector.collect(new OUT(startEvent, endEvent));
-        }
-    }
-}
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-The `select` method takes a selection function as argument, which is called for each matching event sequence.
-It receives a map of string/event pairs of the matched events.
-The string is defined by the name of the state to which the event has been matched.
-The selection function returns exactly one result per call.
-
-{% highlight scala %}
-def selectFn(pattern : mutable.Map[String, IN]): OUT = {
-    val startEvent = pattern.get("start").get
-    val endEvent = pattern.get("end").get
-    OUT(startEvent, endEvent)
-}
-{% endhighlight %}
-
-The `flatSelect` method is similar to the `select` method. Their only difference is that the function passed to the `flatSelect` method can return an arbitrary number of results per call.
-In order to do this, the function for `flatSelect` has an additional `Collector` parameter which is used for the element output.
-
-{% highlight scala %}
-def flatSelectFn(pattern : mutable.Map[String, IN], collector : Collector[OUT]) = {
-    val startEvent = pattern.get("start").get
-    val endEvent = pattern.get("end").get
-    for (i <- 0 until startEvent.getValue) {
-        collector.collect(OUT(startEvent, endEvent))
-    }
-}
-{% endhighlight %}
-</div>
-</div>
-
-### Handling Timed Out Partial Patterns
-
-Whenever a pattern has a window length associated via the `within` keyword, it is possible that partial event patterns will be discarded because they exceed the window length.
-In order to react to these timeout events the `select` and `flatSelect` API calls allow specifying a timeout handler.
-This timeout handler is called for each partial event pattern which has timed out.
-The timeout handler receives all so far matched events of the partial pattern and the timestamp when the timeout was detected.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-In order to treat partial patterns, the `select` and `flatSelect` API calls offer an overloaded version which takes as the first parameter a `PatternTimeoutFunction`/`PatternFlatTimeoutFunction` and as second parameter the known `PatternSelectFunction`/`PatternFlatSelectFunction`.
-The return type of the timeout function can be different from the select function.
-The timeout event and the select event are wrapped in `Either.Left` and `Either.Right` respectively so that the resulting data stream is of type `org.apache.flink.types.Either`.
-
-{% highlight java %}
-PatternStream<Event> patternStream = CEP.pattern(input, pattern);
-
-DataStream<Either<TimeoutEvent, ComplexEvent>> result = patternStream.select(
-    new PatternTimeoutFunction<Event, TimeoutEvent>() {...},
-    new PatternSelectFunction<Event, ComplexEvent>() {...}
-);
-
-DataStream<Either<TimeoutEvent, ComplexEvent>> flatResult = patternStream.flatSelect(
-    new PatternFlatTimeoutFunction<Event, TimeoutEvent>() {...},
-    new PatternFlatSelectFunction<Event, ComplexEvent>() {...}
-);
-{% endhighlight %}
-
-</div>
-
-<div data-lang="scala" markdown="1">
-In order to treat partial patterns, the `select` API call offers an overloaded version which takes as the first parameter a timeout function and as second parameter a selection function.
-The timeout function is called with a map of string-event pairs of the partial match which has timed out and a long indicating when the timeout occurred.
-The string is defined by the name of the state to which the event has been matched.
-The timeout function returns exactly one result per call.
-The return type of the timeout function can be different from the select function.
-The timeout event and the select event are wrapped in `Left` and `Right` respectively so that the resulting data stream is of type `Either`.
-
-{% highlight scala %}
-val patternStream: PatternStream[Event] = CEP.pattern(input, pattern)
-
-val result: DataStream[Either[TimeoutEvent, ComplexEvent]] = patternStream.select{
-    (pattern: mutable.Map[String, Event], timestamp: Long) => TimeoutEvent()
-} {
-    pattern: mutable.Map[String, Event] => ComplexEvent()
-}
-{% endhighlight %}
-
-The `flatSelect` API call offers the same overloaded version which takes as the first parameter a timeout function and as second parameter a selection function.
-In contrast to the `select` functions, the `flatSelect` functions are called with an `Collector`.
-The collector can be used to emit an arbitrary number of events.
-
-{% highlight scala %}
-val patternStream: PatternStream[Event] = CEP.pattern(input, pattern)
-
-val result: DataStream[Either[TimeoutEvent, ComplexEvent]] = patternStream.flatSelect{
-    (pattern: mutable.Map[String, Event], timestamp: Long, out: Collector[TimeoutEvent]) =>
-        out.collect(TimeoutEvent())
-} {
-    (pattern: mutable.Map[String, Event], out: Collector[ComplexEvent]) =>
-        out.collect(ComplexEvent())
-}
-{% endhighlight %}
-
-</div>
-</div>
-
-## Examples
-
-The following example detects the pattern `start, middle(name = "error") -> end(name = "critical")` on a keyed data stream of `Events`.
-The events are keyed by their ids and a valid pattern has to occur within 10 seconds.
-The whole processing is done in event time.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment env = ...
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
-
-DataStream<Event> input = ...
-
-DataStream<Event> partitionedInput = input.keyBy(new KeySelector<Event, Integer>() {
-	@Override
-	public Integer getKey(Event value) throws Exception {
-		return value.getId();
-	}
-});
-
-Pattern<Event, ?> pattern = Pattern.<Event>begin("start")
-	.next("middle").where(new FilterFunction<Event>() {
-		@Override
-		public boolean filter(Event value) throws Exception {
-			return value.getName().equals("error");
-		}
-	}).followedBy("end").where(new FilterFunction<Event>() {
-		@Override
-		public boolean filter(Event value) throws Exception {
-			return value.getName().equals("critical");
-		}
-	}).within(Time.seconds(10));
-
-PatternStream<Event> patternStream = CEP.pattern(partitionedInput, pattern);
-
-DataStream<Alert> alerts = patternStream.select(new PatternSelectFunction<Event, Alert>() {
-	@Override
-	public Alert select(Map<String, Event> pattern) throws Exception {
-		return createAlert(pattern);
-	}
-});
-{% endhighlight %}
-</div>
-
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env : StreamExecutionEnvironment = ...
-env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
-
-val input : DataStream[Event] = ...
-
-val partitionedInput = input.keyBy(event => event.getId)
-
-val pattern = Pattern.begin[Event]("start")
-  .next("middle").where(_.getName == "error")
-  .followedBy("end").where(_.getName == "critical")
-  .within(Time.seconds(10))
-
-val patternStream = CEP.pattern(partitionedInput, pattern)
-
-val alerts = patternStream.select(createAlert(_))
-{% endhighlight %}
-</div>
-</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/libs/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/libs/index.md b/docs/apis/streaming/libs/index.md
deleted file mode 100644
index 4ba7e94..0000000
--- a/docs/apis/streaming/libs/index.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title: "Streaming Libraries"
-sub-nav-group: streaming
-sub-nav-id: libs
-sub-nav-pos: 7
-sub-nav-title: Libraries
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-- Complex event processing: [CEP](cep.html)

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/non-windowed.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/non-windowed.svg b/docs/apis/streaming/non-windowed.svg
deleted file mode 100644
index 3c1cdaa..0000000
--- a/docs/apis/streaming/non-windowed.svg
+++ /dev/null
@@ -1,22 +0,0 @@
-<?xml version="1.0" standalone="yes"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg version="1.1" viewBox="0.0 0.0 800.0 600.0" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><clipPath id="p.0"><path d="m0 0l800.0 0l0 600.0l-800.0 0l0 -600.0z" clip-rule="nonzero"></path></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l800.0 0l0 600.0l-800.0 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m145.49606 485.0l509.0079 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m145.49606 485.0l503.0079 0" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m648.50397 486.65173l4.538086 -1.6517334l-4.538086 -1.6517334z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m145.49606 485.0l0 -394.99213" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" s
 troke-linejoin="round" stroke-linecap="butt" d="m145.49606 485.0l0 -388.99213" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m147.1478 96.00787l-1.6517334 -4.5380936l-1.6517334 4.5380936z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m587.0 477.0l60.0 0l0 42.992126l-60.0 0z" fill-rule="nonzero"></path><path fill="#000000" d="m600.90625 502.41998l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5426636 -10.1875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1292114 0l0 -9.859375l1.5 0l0 1.390625q0.453125 
 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290771 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.85
 9375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 133.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 159.92l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46
 875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.
 34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm17.23973 0l-1.671875 0l0 -10.640625q-0.59375 0.578125 -1.578125 1.15625q-0.984375 0.5625 -1.765625 0.859375l0
  -1.625q1.40625 -0.65625 2.453125 -1.59375q1.046875 -0.9375 1.484375 -1.8125l1.078125 0l0 13.65625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 254.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 280.91998l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.
 34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 
 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm19.724106 -1.609375l0 1.609375l-8.984375 0q-0.015625 -0.609375 0.1875 -1.15625q0.34375 -0.921875 1.09375 -1.8125q0.765625 -0.890625 2.1875 -2.0625q2.21875 -1.8125 3.0 -2.875q0.78125 -1.0625 0.78125 -2.015625q0 -0.984375 -0.71875 -1.671875q-0.703125 -0.6875 -1.
 84375 -0.6875q-1.203125 0 -1.9375 0.734375q-0.71875 0.71875 -0.71875 2.0l-1.71875 -0.171875q0.171875 -1.921875 1.328125 -2.921875q1.15625 -1.015625 3.09375 -1.015625q1.953125 0 3.09375 1.09375q1.140625 1.078125 1.140625 2.6875q0 0.8125 -0.34375 1.609375q-0.328125 0.78125 -1.109375 1.65625q-0.765625 0.859375 -2.5625 2.390625q-1.5 1.265625 -1.9375 1.71875q-0.421875 0.4375 -0.703125 0.890625l6.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m42.0 375.0l82.01575 0l0 42.992126l-82.01575 0z" fill-rule="nonzero"></path><path fill="#000000" d="m58.703125 401.91998l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.
 671875 0l0 9.859375l-1.5 0zm3.2507172 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.9218
 75 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094467 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.8906
 25 -0.28125 1.953125l0 5.15625l-1.671875 0zm10.958481 -3.59375l1.671875 -0.21875q0.28125 1.421875 0.96875 2.046875q0.703125 0.625 1.6875 0.625q1.1875 0 2.0 -0.8125q0.8125 -0.828125 0.8125 -2.03125q0 -1.140625 -0.765625 -1.890625q-0.75 -0.75 -1.90625 -0.75q-0.46875 0 -1.171875 0.1875l0.1875 -1.46875q0.15625 0.015625 0.265625 0.015625q1.0625 0 1.90625 -0.546875q0.859375 -0.5625 0.859375 -1.71875q0 -0.921875 -0.625 -1.515625q-0.609375 -0.609375 -1.59375 -0.609375q-0.96875 0 -1.625 0.609375q-0.640625 0.609375 -0.828125 1.84375l-1.671875 -0.296875q0.296875 -1.6875 1.375 -2.609375q1.09375 -0.921875 2.71875 -0.921875q1.109375 0 2.046875 0.484375q0.9375 0.46875 1.421875 1.296875q0.5 0.828125 0.5 1.75q0 0.890625 -0.46875 1.609375q-0.46875 0.71875 -1.40625 1.15625q1.21875 0.265625 1.875 1.15625q0.671875 0.875 0.671875 2.1875q0 1.78125 -1.296875 3.015625q-1.296875 1.234375 -3.28125 1.234375q-1.796875 0 -2.984375 -1.0625q-1.171875 -1.0625 -1.34375 -2.765625z" fill-rule="nonzero"></path><path fi
 ll="#9900ff" d="m177.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.000473 6.714737 2.7813263c1.7808533 1.7808685 2.7813263 4.196228 2.7813263 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m203.49606 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.000473 6.714737 2.7813263c1.7808533 1.7808685 2.7813263 4.196228 2.7813263 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m290.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.4960
 63 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m323.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m348.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m373.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.4
 96063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m442.50394 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m469.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m492.50394 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.000473 6.7147217 2.7813263c1.7808533 1.7808685 2.7813416 4.196228 2.7813416
  6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m524.0 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496033 -9.496063l0 0c2.5185547 0 4.933899 1.000473 6.7147827 2.7813263c1.7808228 1.7808685 2.781311 4.196228 2.781311 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496094 9.496063l0 0c-5.244507 0 -9.496033 -4.251526 -9.496033 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m603.0079 154.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933838 1.000473 6.7147217 2.7813263c1.7808228 1.7808685 2.781311 4.196228 2.781311 6.714737l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244568 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m374.97638 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.781341
 6c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m401.47244 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m209.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.0004883 6.714737 2.7813416c1.7808533 1.7808533 2.7813263 4.1961975 2.7813263 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m242.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.
 496063l0 0c2.518509 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m267.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m292.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m568.48
 03 275.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m594.9764 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496033 -9.496063l0 0c2.5185547 0 4.933899 1.0004883 6.7147827 2.7813416c1.7808228 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496094 9.496063l0 0c-5.244507 0 -9.496033 -4.251526 -9.496033 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m618.4803 275.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fi
 ll-rule="nonzero"></path><path fill="#9900ff" d="m477.0 275.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m487.99213 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m514.48816 396.49606l0 0c0 -5.2445374 4.251587 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933899 1.0004883 6.7147217 2.7813416c1.7808838 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.4960
 63l0 0c-5.244507 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m185.76378 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.518509 0 4.9338684 1.0004883 6.714737 2.7813416c1.7808533 1.7808533 2.7813263 4.1961975 2.7813263 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m265.0 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m291.49606 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.78134
 16 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m315.0 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496063 -9.496063l0 0c2.5185242 0 4.9338684 1.0004883 6.7147217 2.7813416c1.7808533 1.7808533 2.7813416 4.1961975 2.7813416 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496063 9.496063l0 0c-5.2445374 0 -9.496063 -4.251526 -9.496063 -9.496063z" fill-rule="nonzero"></path><path fill="#9900ff" d="m558.01575 396.49606l0 0c0 -5.2445374 4.251526 -9.496063 9.496094 -9.496063l0 0c2.5184937 0 4.933838 1.0004883 6.7147217 2.7813416c1.7808228 1.7808533 2.781311 4.1961975 2.781311 6.7147217l0 0c0 5.2445374 -4.251526 9.496063 -9.496033 9.496063l0 0c-5.244568 0 -9.496094 -4.251526 -9.496094 -9.496063z" fill-rule="nonzero"></path></g></svg>
-

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/savepoints.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/savepoints.md b/docs/apis/streaming/savepoints.md
deleted file mode 100644
index 53a1ee8..0000000
--- a/docs/apis/streaming/savepoints.md
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: "Savepoints"
-is_beta: false
-sub-nav-group: streaming
-sub-nav-pos: 6
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Programs written in the [Data Stream API](index.html) can resume execution from a **savepoint**. Savepoints allow you to update both your programs and your Flink cluster without losing any state. This page covers all steps to trigger, restore from, and dispose of savepoints. For more details on how Flink handles state and failures, check out the [State in Streaming Programs](state_backends.html) and [Fault Tolerance](fault_tolerance.html) pages.
-
-* toc
-{:toc}
-
-## Overview
-
-Savepoints are **manually triggered checkpoints**, which take a snapshot of the program's state and write it out to a state backend. They rely on the regular checkpointing mechanism for this. During execution, programs are periodically snapshotted on the worker nodes, producing checkpoints. For recovery, only the last completed checkpoint is needed, and older checkpoints can be safely discarded as soon as a new one is completed.
-
-Savepoints are similar to these periodic checkpoints except that they are **triggered by the user** and **don't automatically expire** when newer checkpoints are completed.
-
-<img src="fig/savepoints-overview.png" class="center" />
-
-In the above example the workers produce checkpoints **c<sub>1</sub>**, **c<sub>2</sub>**, **c<sub>3</sub>**, and **c<sub>4</sub>** for job *0xA312Bc*. Periodic checkpoints **c<sub>1</sub>** and **c<sub>3</sub>** have already been *discarded* and **c<sub>4</sub>** is the *latest checkpoint*. **c<sub>2</sub> is special**: it is the state associated with the savepoint **s<sub>1</sub>**; it was triggered by the user and does not expire automatically (as c<sub>1</sub> and c<sub>3</sub> did after the completion of newer checkpoints).
-
-Note that **s<sub>1</sub>** is only a **pointer to the actual checkpoint data c<sub>2</sub>**. This means that the actual state is *not copied* for the savepoint and periodic checkpoint data is kept around.
-
-## Configuration
-
-Savepoints point to regular checkpoints and store their state in a configured [state backend](state_backends.html). Currently, the supported state backends are **jobmanager** and **filesystem**. The state backend configuration for the regular periodic checkpoints is **independent** of the savepoint state backend configuration. Checkpoint data is **not copied** for savepoints; a savepoint only points to the data in the configured checkpoint state backend.
-
-### JobManager
-
-This is the **default backend** for savepoints.
-
-Savepoints are stored on the heap of the job manager. They are *lost* after the job manager is shut down. This mode is only useful if you want to *stop* and *resume* your program while the **same cluster** keeps running. It is *not recommended* for production use. Savepoints are *not* part of the [job manager's highly available]({{ site.baseurl }}/setup/jobmanager_high_availability.html) state.
-
-<pre>
-savepoints.state.backend: jobmanager
-</pre>
-
-**Note**: If you don't configure a specific state backend for the savepoints, the jobmanager backend will be used.
-
-### File system
-
-Savepoints are stored in the configured **file system directory**. They are available between cluster instances and allow you to move your program to another cluster.
-
-<pre>
-savepoints.state.backend: filesystem
-savepoints.state.backend.fs.dir: hdfs:///flink/savepoints
-</pre>
-
-**Note**: If you don't configure a specific directory, the job manager backend will be used.
-
-**Important**: A savepoint is a pointer to a completed checkpoint. That means that the state of a savepoint is not found in the savepoint file alone; it also needs the actual checkpoint data (e.g. in a set of further files). Therefore, using the *filesystem* backend for savepoints and the *jobmanager* backend for checkpoints does not work, because the required checkpoint data won't be available after a job manager restart.
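-
-A consistent setup therefore stores both checkpoints and savepoints in a file system, for example (a sketch; the exact checkpoint keys follow the regular [state backend](state_backends.html) configuration and are an assumption here):
-
-<pre>
-state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
-savepoints.state.backend: filesystem
-savepoints.state.backend.fs.dir: hdfs:///flink/savepoints
-</pre>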
-
-## Changes to your program
-
-Savepoints **work out of the box**, but it is **highly recommended** that you slightly adjust your programs in order to be able to work with savepoints in future versions of your program.
-
-<img src="fig/savepoints-program_ids.png" class="center" />
-
-For savepoints **only stateful tasks matter**. In the above example, the source and map tasks are stateful whereas the sink is not stateful. Therefore, only the state of the source and map tasks are part of the savepoint.
-
-Each task is identified by its **generated task IDs** and **subtask index**. In the above example the state of the source (**s<sub>1</sub>**, **s<sub>2</sub>**) and map tasks (**m<sub>1</sub>**, **m<sub>2</sub>**) is identified by their respective task ID (*0xC322EC* for the source tasks and *0x27B3EF* for the map tasks) and subtask index. There is no state for the sinks (**t<sub>1</sub>**, **t<sub>2</sub>**). Their IDs therefore do not matter.
-
-<span class="label label-danger">Important</span> The IDs are generated **deterministically** from your program structure. This means that as long as your program does not change, the IDs do not change. **The only allowed changes are within the user function, e.g. you can change the implemented `MapFunction` without changing the topology**. In this case, it is straight forward to restore the state from a savepoint by mapping it back to the same task IDs and subtask indexes. This allows you to work with savepoints out of the box, but gets problematic as soon as you make changes to the topology, because they result in changed IDs and the savepoint state cannot be mapped to your program any more.
-
-<span class="label label-info">Recommended</span> In order to be able to change your program and **have fixed IDs**, the *DataStream* API provides a method to manually specify the task IDs. Each operator provides a **`uid(String)`** method to override the generated ID. The ID is a String, which will be deterministically hashed to a 16-byte hash value. It is **important** that the specified IDs are **unique per transformation and job**. If this is not the case, job submission will fail.
-
-{% highlight java %}
-DataStream<String> stream = env
-  // Stateful source (e.g. Kafka) with ID
-  .addSource(new StatefulSource())
-  .uid("source-id")
-  .shuffle()
-  // The stateful mapper with ID
-  .map(new StatefulMapper())
-  .uid("mapper-id");
-
-// Stateless sink (no specific ID required)
-stream.print();
-{% endhighlight %}
-
-## Command-line client
-
-You control the savepoints via the [command line client]({{site.baseurl}}/apis/cli.html#savepoints).
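-
-A typical session could look like this (a sketch; see the CLI page for the exact flags and options):
-
-{% highlight bash %}
-# Trigger a savepoint for a running job
-bin/flink savepoint <jobID>
-
-# Resume a program from the savepoint
-bin/flink run -s <savepointPath> ...
-{% endhighlight %}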
-
-## Current limitations
-
-- **Parallelism**: When restoring a savepoint, the parallelism of the program has to match the parallelism of the original program from which the savepoint was drawn. There is no mechanism to re-partition the savepoint's state yet.
-
-- **Chaining**: Chained operators are identified by the ID of the first task. It's not possible to manually assign an ID to an intermediate chained task, e.g. in the chain `[  a -> b -> c ]` only **a** can have its ID assigned manually, but not **b** or **c**. To work around this, you can [manually define the task chains](index.html#task-chaining-and-resource-groups); see the sketch after this list. If you rely on the automatic ID assignment, a change in the chaining behaviour will also change the IDs.
-
-- **Disposing custom state handles**: Disposing an old savepoint does not work with custom state handles (if you are using a custom state backend), because the user code class loader is not available during disposal.
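-
-To illustrate the chaining workaround: breaking the chain in front of the stateful mapper gives it its own ID (a sketch reusing the placeholder classes from the example above; `startNewChain()` makes the mapper the head of a new chain):
-
-{% highlight java %}
-DataStream<String> stream = env
-  .addSource(new StatefulSource())
-  .uid("source-id")
-  // Start a new chain here, so the mapper heads its own chain
-  // and can carry its own ID.
-  .map(new StatefulMapper())
-  .startNewChain()
-  .uid("mapper-id");
-{% endhighlight %}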


[43/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/common/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/common/index.md b/docs/apis/common/index.md
deleted file mode 100644
index 05d87ce..0000000
--- a/docs/apis/common/index.md
+++ /dev/null
@@ -1,1352 +0,0 @@
----
-title: "Basic API Concepts"
-
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 1
-top-nav-title: <strong>Basic API Concepts</strong>
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Flink programs are regular programs that implement transformations on distributed collections
-(e.g., filtering, mapping, updating state, joining, grouping, defining windows, aggregating).
-Collections are initially created from sources (e.g., by reading files, kafka, or from local
-collections). Results are returned via sinks, which may for example write the data to
-(distributed) files, or to standard output (for example the command line terminal).
-Flink programs run in a variety of contexts: standalone, or embedded in other programs.
-The execution can happen in a local JVM, or on clusters of many machines.
-
-Depending on the type of data source, i.e. bounded or unbounded, you would write either
-a batch program or a streaming program, where the DataSet API is used for the former
-and the DataStream API for the latter. This guide introduces the basic concepts
-that are common to both APIs, but please see our
-[Streaming Guide]({{ site.baseurl }}/apis/streaming/index.html) and
-[Batch Guide]({{ site.baseurl }}/apis/batch/index.html) for concrete information about
-writing programs with each API.
-
-**NOTE:** When showing actual examples of how the APIs can be used we will use
-`StreamExecutionEnvironment` and the `DataStream` API. The concepts are exactly the same
-in the `DataSet` API; just replace them with `ExecutionEnvironment` and `DataSet`.
-
-* This will be replaced by the TOC
-{:toc}
-
-Linking with Flink
-------------------
-
-To write programs with Flink, you need to include the Flink library corresponding to
-your programming language in your project.
-
-The simplest way to do this is to use one of the quickstart scripts: either for
-[Java]({{ site.baseurl }}/quickstart/java_api_quickstart.html) or for [Scala]({{ site.baseurl }}/quickstart/scala_api_quickstart.html). They
-create a blank project from a template (a Maven Archetype), which sets up everything for you. To
-manually create the project, you can use the archetype and create a project by calling:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight bash %}
-mvn archetype:generate \
-    -DarchetypeGroupId=org.apache.flink \
-    -DarchetypeArtifactId=flink-quickstart-java \
-    -DarchetypeVersion={{site.version }}
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight bash %}
-mvn archetype:generate \
-    -DarchetypeGroupId=org.apache.flink \
-    -DarchetypeArtifactId=flink-quickstart-scala \
-    -DarchetypeVersion={{site.version }}
-{% endhighlight %}
-</div>
-</div>
-
-The archetypes work for stable releases and preview versions (`-SNAPSHOT`).
-
-If you want to add Flink to an existing Maven project, add the following entry to your
-*dependencies* section in the *pom.xml* file of your project:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight xml %}
-<!-- Use this dependency if you are using the DataStream API -->
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-java{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-<!-- Use this dependency if you are using the DataSet API -->
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-java</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight xml %}
-<!-- Use this dependency if you are using the DataStream API -->
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-scala{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-<!-- Use this dependency if you are using the DataSet API -->
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-scala{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-**Important:** When working with the Scala API you must have one of these two imports:
-{% highlight scala %}
-import org.apache.flink.api.scala._
-{% endhighlight %}
-
-or
-
-{% highlight scala %}
-import org.apache.flink.api.scala.createTypeInformation
-{% endhighlight %}
-
-The reason is that Flink analyzes the types that are used in a program and generates serializers
-and comparators for them. By having either of those imports you enable an implicit conversion
-that creates the type information for Flink operations.
-</div>
-</div>
-
-#### Scala Dependency Versions
-
-Because Scala 2.10 binaries are not compatible with Scala 2.11 binaries, we provide multiple artifacts
-to support both Scala versions.
-
-Starting from the 0.10 line, we cross-build all Flink modules for both 2.10 and 2.11. If you want
-to run your program on Flink with Scala 2.11, you need to add a `_2.11` suffix to the `artifactId`
-values of the Flink modules in your dependencies section.
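-
-For example, with Scala 2.11 the streaming dependency shown above becomes:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-scala_2.11</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}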
-
-If you are looking for building Flink with Scala 2.11, please check
-[build guide]({{ site.baseurl }}/setup/building.html#scala-versions).
-
-#### Hadoop Dependency Versions
-
-If you are using Flink together with Hadoop, the version of the dependency may vary depending on the
-version of Hadoop (or more specifically, HDFS) that you want to use Flink with. Please refer to the
-[downloads page](http://flink.apache.org/downloads.html) for a list of available versions, and instructions
-on how to link with custom versions of Hadoop.
-
-In order to link against the latest SNAPSHOT versions of the code, please follow
-[this guide](http://flink.apache.org/how-to-contribute.html#snapshots-nightly-builds).
-
-The *flink-clients* dependency is only necessary to invoke the Flink program locally (for example to
-run it standalone for testing and debugging).  If you intend to only export the program as a JAR
-file and [run it on a cluster]({{ site.baseurl }}/apis/cluster_execution.html), you can skip that dependency.
-
-{% top %}
-
-DataSet and DataStream
-----------------------
-
-Flink has the special classes `DataSet` and `DataStream` to represent data in a program. You
-can think of them as immutable collections of data that can contain duplicates. In the case
-of `DataSet` the data is finite while for a `DataStream` the number of elements can be unbounded.
-
-These collections differ from regular Java collections in some key ways. First, they
-are immutable, meaning that once they are created you cannot add or remove elements. You also
-cannot simply inspect the elements inside.
-
-A collection is initially created by adding a source in a Flink program and new collections are
-derived from these by transforming them using API methods such as `map`, `filter` and so on.
-
-Anatomy of a Flink Program
---------------------------
-
-Flink programs look like regular programs that transform collections of data.
-Each program consists of the same basic parts:
-
-1. Obtain an `execution environment`,
-2. Load/create the initial data,
-3. Specify transformations on this data,
-4. Specify where to put the results of your computations,
-5. Trigger the program execution.
-
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-
-We will now give an overview of each of those steps; please refer to the respective sections for
-more details. Note that all core classes of the Java DataSet API are found in the package
-{% gh_link /flink-java/src/main/java/org/apache/flink/api/java "org.apache.flink.api.java" %}
-while the classes of the Java DataStream API can be found in
-{% gh_link /flink-streaming-java/src/main/java/org/apache/flink/streaming/api "org.apache.flink.streaming.api" %}.
-
-The `StreamExecutionEnvironment` is the basis for all Flink programs. You can
-obtain one using these static methods on `StreamExecutionEnvironment`:
-
-{% highlight java %}
-getExecutionEnvironment()
-
-createLocalEnvironment()
-
-createRemoteEnvironment(String host, int port, String... jarFiles)
-{% endhighlight %}
-
-Typically, you only need to use `getExecutionEnvironment()`, since this
-will do the right thing depending on the context: if you are executing
-your program inside an IDE or as a regular Java program it will create
-a local environment that will execute your program on your local machine. If
-you created a JAR file from your program, and invoke it through the
-[command line]({{ site.baseurl }}/apis/cli.html), the Flink cluster manager
-will execute your main method and `getExecutionEnvironment()` will return
-an execution environment for executing your program on a cluster.
-
-For specifying data sources, the execution environment has several methods
-to read from files: you can read them line by line, as CSV files, or
-using completely custom data input formats. To just read
-a text file as a sequence of lines, you can use:
-
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-DataStream<String> text = env.readTextFile("file:///path/to/file");
-{% endhighlight %}
-
-This will give you a DataStream on which you can then apply transformations to create new
-derived DataStreams.
-
-You apply transformations by calling methods on DataStream with transformation
-functions. For example, a map transformation looks like this:
-
-{% highlight java %}
-DataStream<String> input = ...;
-
-DataStream<Integer> parsed = input.map(new MapFunction<String, Integer>() {
-    @Override
-    public Integer map(String value) {
-        return Integer.parseInt(value);
-    }
-});
-{% endhighlight %}
-
-This will create a new DataStream by converting every String in the original
-collection to an Integer.
-
-Once you have a DataStream containing your final results, you can write it to an outside system
-by creating a sink. These are just some example methods for creating a sink:
-
-{% highlight java %}
-writeAsText(String path)
-
-print()
-{% endhighlight %}
-
-</div>
-<div data-lang="scala" markdown="1">
-
-We will now give an overview of each of those steps; please refer to the respective sections for
-more details. Note that all core classes of the Scala DataSet API are found in the package
-{% gh_link /flink-scala/src/main/scala/org/apache/flink/api/scala "org.apache.flink.api.scala" %}
-while the classes of the Scala DataStream API can be found in
-{% gh_link /flink-streaming-scala/src/main/java/org/apache/flink/streaming/api/scala "org.apache.flink.streaming.api.scala" %}.
-
-The `StreamExecutionEnvironment` is the basis for all Flink programs. You can
-obtain one using these static methods on `StreamExecutionEnvironment`:
-
-{% highlight scala %}
-getExecutionEnvironment()
-
-createLocalEnvironment()
-
-createRemoteEnvironment(host: String, port: Int, jarFiles: String*)
-{% endhighlight %}
-
-Typically, you only need to use `getExecutionEnvironment()`, since this
-will do the right thing depending on the context: if you are executing
-your program inside an IDE or as a regular Java program it will create
-a local environment that will execute your program on your local machine. If
-you created a JAR file from your program, and invoke it through the
-[command line]({{ site.baseurl }}/apis/cli.html), the Flink cluster manager
-will execute your main method and `getExecutionEnvironment()` will return
-an execution environment for executing your program on a cluster.
-
-For specifying data sources, the execution environment has several methods
-to read from files: you can read them line by line, as CSV files, or
-using completely custom data input formats. To just read
-a text file as a sequence of lines, you can use:
-
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment()
-
-val text: DataStream[String] = env.readTextFile("file:///path/to/file")
-{% endhighlight %}
-
-This will give you a DataStream on which you can then apply transformations to create new
-derived DataStreams.
-
-You apply transformations by calling methods on DataStream with transformation
-functions. For example, a map transformation looks like this:
-
-{% highlight scala %}
-val input: DataStream[String] = ...
-
-val mapped = input.map { x => x.toInt }
-{% endhighlight %}
-
-This will create a new DataStream by converting every String in the original
-collection to an Integer.
-
-Once you have a DataStream containing your final results, you can write it to an outside system
-by creating a sink. These are just some example methods for creating a sink:
-
-{% highlight scala %}
-writeAsText(path: String)
-
-print()
-{% endhighlight %}
-
-</div>
-</div>
-
-Once you have specified the complete program you need to **trigger the program execution** by calling
-`execute()` on the `StreamExecutionEnvironment`.
-Depending on the type of the `ExecutionEnvironment` the execution will be triggered on your local
-machine, or your program will be submitted for execution on a cluster.
-
-The `execute()` method returns a `JobExecutionResult`, which contains execution
-times and accumulator results.
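-
-A minimal sketch of triggering the execution and reading the result (assuming the
-`getNetRuntime()` accessor on `JobExecutionResult`):
-
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-env.readTextFile("file:///path/to/file").print();
-
-// Trigger execution; the result carries the runtime and accumulator values.
-JobExecutionResult result = env.execute("My Flink Job");
-System.out.println("Net runtime: " + result.getNetRuntime() + " ms");
-{% endhighlight %}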
-
-Please see the [Streaming Guide]({{ site.baseurl }}/apis/streaming/index.html)
-for information about streaming data sources and sinks and for more in-depth information
-about the supported transformations on DataStream.
-
-Check out the [Batch Guide]({{ site.baseurl }}/apis/batch/index.html)
-for information about batch data sources and sinks and for more in-depth information
-about the supported transformations on DataSet.
-
-
-{% top %}
-
-Lazy Evaluation
----------------
-
-All Flink programs are executed lazily: when the program's main method is executed, the data loading
-and transformations do not happen directly. Rather, each operation is created and added to the
-program's plan. The operations are actually executed when the execution is explicitly triggered by
-an `execute()` call on the execution environment. Whether the program is executed locally
-or on a cluster depends on the type of execution environment.
-
-The lazy evaluation lets you construct sophisticated programs that Flink executes as one
-holistically planned unit.
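-
-For example, nothing runs while the following transformations are declared; only the final
-`execute()` call starts the job (a sketch; `getExecutionPlan()` is assumed to render the plan
-built so far as JSON):
-
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-DataStream<String> text = env.readTextFile("file:///path/to/file");
-text.map(new MapFunction<String, Integer>() {
-    @Override
-    public Integer map(String value) {
-        return Integer.parseInt(value);
-    }
-}).print();
-
-// Nothing has been executed yet; the operations above were only added to the plan.
-System.out.println(env.getExecutionPlan());
-
-// The program is actually executed here:
-env.execute();
-{% endhighlight %}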
-
-{% top %}
-
-Specifying Keys
----------------
-
-Some transformations (join, coGroup, keyBy, groupBy) require that a key be defined on
-a collection of elements. Other transformations (Reduce, GroupReduce,
-Aggregate, Windows) allow the data to be grouped on a key before they are
-applied.
-
-A DataSet is grouped as
-{% highlight java %}
-DataSet<...> input = // [...]
-DataSet<...> reduced = input
-  .groupBy(/*define key here*/)
-  .reduceGroup(/*do something*/);
-{% endhighlight %}
-
-while a key can be specified on a DataStream using
-{% highlight java %}
-DataStream<...> input = // [...]
-DataStream<...> windowed = input
-  .keyBy(/*define key here*/)
-  .window(/*window specification*/);
-{% endhighlight %}
-
-The data model of Flink is not based on key-value pairs. Therefore,
-you do not need to physically pack the data set types into keys and
-values. Keys are "virtual": they are defined as functions over the
-actual data to guide the grouping operator.
-
-**NOTE:** In the following discussion we will use the `DataStream` API and `keyBy`.
-For the DataSet API you just have to replace them with `DataSet` and `groupBy`.
-
-### Define keys for Tuples
-{:.no_toc}
-
-The simplest case is grouping Tuples on one or more
-fields of the Tuple:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Tuple3<Integer,String,Long>> input = // [...]
-KeyedStream<Tuple3<Integer, String, Long>, Tuple> keyed = input.keyBy(0);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[(Int, String, Long)] = // [...]
-val keyed = input.keyBy(0)
-{% endhighlight %}
-</div>
-</div>
-
-The tuples are keyed on the first field (the one of
-Integer type).
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-DataStream<Tuple3<Integer,String,Long>> input = // [...]
-KeyedStream<Tuple3<Integer, String, Long>, Tuple> keyed = input.keyBy(0, 1);
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val input: DataStream[(Int, String, Long)] = // [...]
-val keyed = input.keyBy(0,1)
-{% endhighlight %}
-</div>
-</div>
-
-Here, we key the tuples on a composite key consisting of the first and the
-second field.
-
-A note on nested Tuples: If you have a DataStream with a nested tuple, such as:
-
-{% highlight java %}
-DataStream<Tuple3<Tuple2<Integer, Float>,String,Long>> ds;
-{% endhighlight %}
-
-Specifying `keyBy(0)` will cause the system to use the full `Tuple2` as a key (with the Integer and Float being the key). If you want to "navigate" into the nested `Tuple2`, you have to use field expression keys which are explained below.
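-
-For example, a field expression lets you key the stream above on the `Integer` inside the
-nested tuple (a sketch; field expressions are explained in the next section):
-
-{% highlight java %}
-DataStream<Tuple3<Tuple2<Integer, Float>, String, Long>> ds = // [...]
-
-// keyBy(0) would use the whole Tuple2 as the key; "f0.f0" navigates
-// into the nested tuple and keys on the Integer field only.
-ds.keyBy("f0.f0");
-{% endhighlight %}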
-
-### Define keys using Field Expressions
-{:.no_toc}
-
-You can use String-based field expressions to reference nested fields and define keys for grouping, sorting, joining, or coGrouping.
-
-Field expressions make it very easy to select fields in (nested) composite types such as [Tuple](#tuples-and-case-classes) and [POJO](#pojos) types.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-In the example below, we have a `WC` POJO with two fields "word" and "count". To key by the field `word`, we just pass its name to the `keyBy()` function.
-{% highlight java %}
-// some ordinary POJO (Plain old Java Object)
-public class WC {
-  public String word;
-  public int count;
-}
-DataStream<WC> words = // [...]
-words.keyBy("word").window(/*window specification*/);
-{% endhighlight %}
-
-**Field Expression Syntax**:
-
-- Select POJO fields by their field name. For example `"user"` refers to the "user" field of a POJO type.
-
-- Select Tuple fields by their field name or 0-offset field index. For example `"f0"` and `"5"` refer to the first and sixth field of a Java Tuple type, respectively.
-
-- You can select nested fields in POJOs and Tuples. For example `"user.zip"` refers to the "zip" field of a POJO which is stored in the "user" field of a POJO type. Arbitrary nesting and mixing of POJOs and Tuples is supported such as `"f1.user.zip"` or `"user.f3.1.zip"`.
-
-- You can select the full type using the `"*"` wildcard expression. This also works for types which are not Tuple or POJO types.
-
-**Field Expression Example**:
-
-{% highlight java %}
-public static class WC {
-  public ComplexNestedClass complex; //nested POJO
-  private int count;
-  // getter / setter for private field (count)
-  public int getCount() {
-    return count;
-  }
-  public void setCount(int c) {
-    this.count = c;
-  }
-}
-public static class ComplexNestedClass {
-  public Integer someNumber;
-  public float someFloat;
-  public Tuple3<Long, Long, String> word;
-  public IntWritable hadoopCitizen;
-}
-{% endhighlight %}
-
-These are valid field expressions for the example code above:
-
-- `"count"`: The count field in the `WC` class.
-
-- `"complex"`: Recursively selects all fields of the field complex of POJO type `ComplexNestedClass`.
-
-- `"complex.word.f2"`: Selects the last field of the nested `Tuple3`.
-
-- `"complex.hadoopCitizen"`: Selects the Hadoop `IntWritable` type.
-
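-Keying a stream on one of these expressions might look like this (a sketch based on the `WC`
-type above):
-
-{% highlight java %}
-DataStream<WC> words = // [...]
-// Key by the String field of the nested Tuple3 stored in "complex".
-words.keyBy("complex.word.f2");
-{% endhighlight %}
-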
-</div>
-<div data-lang="scala" markdown="1">
-
-In the example below, we have a `WC` POJO with two fields "word" and "count". To key by the field `word`, we just pass its name to the `keyBy()` function.
-{% highlight scala %}
-// some ordinary POJO (Plain old Java Object)
-class WC(var word: String, var count: Int) {
-  def this() { this("", 0) }
-}
-}
-val words: DataStream[WC] = // [...]
-val wordCounts = words.keyBy("word").window(/*window specification*/)
-
-// or, as a case class, which is less typing
-case class WC(word: String, count: Int)
-val words: DataStream[WC] = // [...]
-val wordCounts = words.keyBy("word").window(/*window specification*/)
-{% endhighlight %}
-
-**Field Expression Syntax**:
-
-- Select POJO fields by their field name. For example `"user"` refers to the "user" field of a POJO type.
-
-- Select Tuple fields by their 1-offset field name or 0-offset field index. For example `"_1"` and `"5"` refer to the first and sixth field of a Scala Tuple type, respectively.
-
-- You can select nested fields in POJOs and Tuples. For example `"user.zip"` refers to the "zip" field of a POJO which is stored in the "user" field of a POJO type. Arbitrary nesting and mixing of POJOs and Tuples is supported such as `"_2.user.zip"` or `"user._4.1.zip"`.
-
-- You can select the full type using the `"_"` wildcard expression. This also works for types which are not Tuple or POJO types.
-
-**Field Expression Example**:
-
-{% highlight scala %}
-class WC(var complex: ComplexNestedClass, var count: Int) {
-  def this() { this(null, 0) }
-}
-
-class ComplexNestedClass(
-    var someNumber: Int,
-    var someFloat: Float,
-    var word: (Long, Long, String),
-    var hadoopCitizen: IntWritable) {
-  def this() { this(0, 0, (0, 0, ""), new IntWritable(0)) }
-}
-{% endhighlight %}
-
-These are valid field expressions for the example code above:
-
-- `"count"`: The count field in the `WC` class.
-
-- `"complex"`: Recursively selects all fields of the field complex of POJO type `ComplexNestedClass`.
-
-- `"complex.word._3"`: Selects the last field of the nested `Tuple3`.
-
-- `"complex.hadoopCitizen"`: Selects the Hadoop `IntWritable` type.
-
-</div>
-</div>
-
-### Define keys using Key Selector Functions
-{:.no_toc}
-
-An additional way to define keys is "key selector" functions. A key selector function
-takes a single element as input and returns the key for the element. The key can be of any type and be derived from arbitrary computations.
-
-The following example shows a key selector function that simply returns a field of an object:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-// some ordinary POJO
-public class WC {public String word; public int count;}
-DataStream<WC> words = // [...]
-KeyedStream<WC, String> keyed = words
-  .keyBy(new KeySelector<WC, String>() {
-     public String getKey(WC wc) { return wc.word; }
-   });
-{% endhighlight %}
-
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-// some ordinary case class
-case class WC(word: String, count: Int)
-val words: DataStream[WC] = // [...]
-val keyed = words.keyBy( _.word )
-{% endhighlight %}
-</div>
-</div>
-
-{% top %}
-
-Specifying Transformation Functions
---------------------------
-
-Most transformations require user-defined functions. This section lists different ways
-in which they can be specified.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-#### Implementing an interface
-
-The most basic way is to implement one of the provided interfaces:
-
-{% highlight java %}
-class MyMapFunction implements MapFunction<String, Integer> {
-  public Integer map(String value) { return Integer.parseInt(value); }
-}
-data.map(new MyMapFunction());
-{% endhighlight %}
-
-#### Anonymous classes
-
-You can pass a function as an anonymous class:
-{% highlight java %}
-data.map(new MapFunction<String, Integer> () {
-  public Integer map(String value) { return Integer.parseInt(value); }
-});
-{% endhighlight %}
-
-#### Java 8 Lambdas
-
-Flink also supports Java 8 Lambdas in the Java API. Please see the full [Java 8 Guide]({{ site.baseurl }}/apis/java8.html).
-
-{% highlight java %}
-data.filter(s -> s.startsWith("http://"));
-{% endhighlight %}
-
-{% highlight java %}
-data.reduce((i1,i2) -> i1 + i2);
-{% endhighlight %}
-
-#### Rich functions
-
-All transformations that require a user-defined function can
-instead take as argument a *rich* function. For example, instead of
-
-{% highlight java %}
-class MyMapFunction implements MapFunction<String, Integer> {
-  public Integer map(String value) { return Integer.parseInt(value); }
-}
-{% endhighlight %}
-
-you can write
-
-{% highlight java %}
-class MyMapFunction extends RichMapFunction<String, Integer> {
-  public Integer map(String value) { return Integer.parseInt(value); }
-}
-{% endhighlight %}
-
-and pass the function as usual to a `map` transformation:
-
-{% highlight java %}
-data.map(new MyMapFunction());
-{% endhighlight %}
-
-Rich functions can also be defined as an anonymous class:
-{% highlight java %}
-data.map (new RichMapFunction<String, Integer>() {
-  public Integer map(String value) { return Integer.parseInt(value); }
-});
-{% endhighlight %}
-
-</div>
-<div data-lang="scala" markdown="1">
-
-
-#### Lambda Functions
-
-As already seen in previous examples, all operations accept lambda functions for describing
-the operation:
-{% highlight scala %}
-val data: DataSet[String] = // [...]
-data.filter { _.startsWith("http://") }
-{% endhighlight %}
-
-{% highlight scala %}
-val data: DataSet[Int] = // [...]
-data.reduce { (i1,i2) => i1 + i2 }
-// or
-data.reduce { _ + _ }
-{% endhighlight %}
-
-#### Rich functions
-
-All transformations that take as argument a lambda function can
-instead take as argument a *rich* function. For example, instead of
-
-{% highlight scala %}
-data.map { x => x.toInt }
-{% endhighlight %}
-
-you can write
-
-{% highlight scala %}
-class MyMapFunction extends RichMapFunction[String, Int] {
-  def map(in: String): Int = in.toInt
-}
-{% endhighlight %}
-
-and pass the function to a `map` transformation:
-
-{% highlight scala %}
-data.map(new MyMapFunction())
-{% endhighlight %}
-
-Rich functions can also be defined as an anonymous class:
-{% highlight scala %}
-data.map(new RichMapFunction[String, Int] {
-  def map(in: String): Int = in.toInt
-})
-{% endhighlight %}
-</div>
-
-</div>
-
-Rich functions provide, in addition to the user-defined function (map,
-reduce, etc.), four methods: `open`, `close`, `getRuntimeContext`, and
-`setRuntimeContext`. These are useful for parameterizing the function
-(see [Passing Parameters to Functions]({{ site.baseurl }}/apis/batch/index.html#passing-parameters-to-functions)),
-creating and finalizing local state, accessing broadcast variables (see
-[Broadcast Variables]({{ site.baseurl }}/apis/batch/index.html#broadcast-variables)), accessing runtime
-information such as accumulators and counters (see
-[Accumulators and Counters](#accumulators--counters)), and accessing information
-on iterations (see [Iterations]({{ site.baseurl }}/apis/batch/iterations.html)).
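-
-For example, the following sketch (a hypothetical `LineCounter` function) shows how
-`open` and `getRuntimeContext` are typically combined to initialize and register
-per-task state:
-
-{% highlight java %}
-import org.apache.flink.api.common.accumulators.IntCounter;
-import org.apache.flink.api.common.functions.RichMapFunction;
-import org.apache.flink.configuration.Configuration;
-
-class LineCounter extends RichMapFunction<String, Integer> {
-
-  private IntCounter lineCount;
-
-  @Override
-  public void open(Configuration parameters) {
-    // called once per parallel task instance, before any records are processed
-    lineCount = new IntCounter();
-    getRuntimeContext().addAccumulator("line-count", lineCount);
-  }
-
-  @Override
-  public Integer map(String value) {
-    lineCount.add(1);
-    return Integer.parseInt(value);
-  }
-}
-{% endhighlight %}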
-
-{% top %}
-
-Supported Data Types
---------------------
-
-Flink places some restrictions on the type of elements that can be in a DataSet or DataStream.
-The reason for this is that the system analyzes the types to determine
-efficient execution strategies.
-
-There are seven different categories of data types:
-
-1. **Java Tuples** and **Scala Case Classes**
-2. **Java POJOs**
-3. **Primitive Types**
-4. **Regular Classes**
-5. **Values**
-6. **Hadoop Writables**
-7. **Special Types**
-
-#### Tuples and Case Classes
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-
-Tuples are composite types that contain a fixed number of fields with various types.
-The Java API provides classes from `Tuple1` up to `Tuple25`. Every field of a tuple
-can be an arbitrary Flink type including further tuples, resulting in nested tuples. Fields of a
-tuple can be accessed directly using the field's name, such as `tuple.f4`, or using the generic getter method
-`tuple.getField(int position)`. The field indices start at 0. Note that this stands in contrast
-to the Scala tuples, but it is more consistent with Java's general indexing.
-
-{% highlight java %}
-DataStream<Tuple2<String, Integer>> wordCounts = env.fromElements(
-    new Tuple2<String, Integer>("hello", 1),
-    new Tuple2<String, Integer>("world", 2));
-
-wordCounts.map(new MapFunction<Tuple2<String, Integer>, Integer>() {
-    @Override
-    public Integer map(Tuple2<String, Integer> value) throws Exception {
-        return value.f1;
-    }
-});
-
-wordCounts.keyBy(0); // also valid .keyBy("f0")
-
-
-{% endhighlight %}
-
-</div>
-<div data-lang="scala" markdown="1">
-
-Scala case classes (and Scala tuples, which are a special case of case classes) are composite types that contain a fixed number of fields with various types. Tuple fields are addressed by their 1-offset names such as `_1` for the first field. Case class fields are accessed by their name.
-
-{% highlight scala %}
-case class WordCount(word: String, count: Int)
-val input = env.fromElements(
-    WordCount("hello", 1),
-    WordCount("world", 2)) // Case Class Data Set
-
-input.keyBy("word")// key by field expression "word"
-
-val input2 = env.fromElements(("hello", 1), ("world", 2)) // Tuple2 Data Set
-
-input2.keyBy(0, 1) // key by field positions 0 and 1
-{% endhighlight %}
-
-</div>
-</div>
-
-#### POJOs
-
-Java and Scala classes are treated by Flink as a special POJO data type if they fulfill the following requirements:
-
-- The class must be public.
-
-- It must have a public constructor without arguments (default constructor).
-
-- All fields are either public or must be accessible through getter and setter functions. For a field called `foo` the getter and setter methods must be named `getFoo()` and `setFoo()`.
-
-- The type of a field must be supported by Flink. At the moment, Flink uses [Avro](http://avro.apache.org) to serialize arbitrary objects (such as `Date`).
-
-Flink analyzes the structure of POJO types, i.e., it learns about the fields of a POJO. As a result POJO types are easier to use than general types. Moreover, Flink can process POJOs more efficiently than general types.
-
-The following example shows a simple POJO with two public fields.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-public class WordWithCount {
-
-    public String word;
-    public int count;
-
-    public WordWithCount() {}
-
-    public WordWithCount(String word, int count) {
-        this.word = word;
-        this.count = count;
-    }
-}
-
-DataStream<WordWithCount> wordCounts = env.fromElements(
-    new WordWithCount("hello", 1),
-    new WordWithCount("world", 2));
-
-wordCounts.keyBy("word"); // key by field expression "word"
-
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-class WordWithCount(var word: String, var count: Int) {
-    def this() {
-      this(null, -1)
-    }
-}
-
-val input = env.fromElements(
-    new WordWithCount("hello", 1),
-    new WordWithCount("world", 2)) // Case Class Data Set
-
-input.keyBy("word")// key by field expression "word"
-
-{% endhighlight %}
-</div>
-</div>
-
-#### Primitive Types
-
-Flink supports all Java and Scala basic types such as `Integer`, `String`, and `Double`.
-
-#### General Class Types
-
-Flink supports most Java and Scala classes (API and custom).
-Restrictions apply to classes containing fields that cannot be serialized, like file pointers, I/O streams, or other native
-resources. Classes that follow the Java Beans conventions work well in general.
-
-All classes that are not identified as POJO types (see POJO requirements above) are handled by Flink as general class types.
-Flink treats these data types as black boxes and is not able to access their content (e.g., for efficient sorting). General types are de/serialized using the serialization framework [Kryo](https://github.com/EsotericSoftware/kryo).
-
-#### Values
-
-*Value* types describe their serialization and deserialization manually. Instead of going through a
-general purpose serialization framework, they provide custom code for those operations by means of
-implementing the `org.apache.flink.types.Value` interface with the methods `read` and `write`. Using
-a Value type is reasonable when general purpose serialization would be highly inefficient. An
-example would be a data type that implements a sparse vector of elements as an array. Knowing that
-the array is mostly zero, one can use a special encoding for the non-zero elements, while the
-general purpose serialization would simply write all array elements.
-
-The `org.apache.flink.types.CopyableValue` interface supports manual internal cloning logic in a
-similar way.
-
-Flink comes with pre-defined Value types that correspond to basic data types (`ByteValue`,
-`ShortValue`, `IntValue`, `LongValue`, `FloatValue`, `DoubleValue`, `StringValue`, `CharValue`,
-`BooleanValue`). These Value types act as mutable variants of the basic data types: Their value can
-be altered, allowing programmers to reuse objects and take pressure off the garbage collector.
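-
-As an illustration, here is a minimal sketch of a custom Value type (a hypothetical
-`Age` type; a real sparse-vector type would follow the same pattern with a more
-compact encoding in `write` and `read`):
-
-{% highlight java %}
-import java.io.IOException;
-import org.apache.flink.core.memory.DataInputView;
-import org.apache.flink.core.memory.DataOutputView;
-import org.apache.flink.types.Value;
-
-public class Age implements Value {
-
-  private int value;
-
-  public int getValue() { return value; }
-  public void setValue(int value) { this.value = value; }
-
-  @Override
-  public void write(DataOutputView out) throws IOException {
-    // custom serialization: a single int instead of a generic framework encoding
-    out.writeInt(value);
-  }
-
-  @Override
-  public void read(DataInputView in) throws IOException {
-    value = in.readInt();
-  }
-}
-{% endhighlight %}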
-
-
-#### Hadoop Writables
-
-You can use types that implement the `org.apache.hadoop.io.Writable` interface. The serialization logic
-defined in the `write()` and `readFields()` methods will be used for serialization.
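-
-A minimal sketch of such a type (a hypothetical `AgeWritable`; note that the type
-needs a public no-argument constructor so it can be instantiated via reflection):
-
-{% highlight java %}
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import org.apache.hadoop.io.Writable;
-
-public class AgeWritable implements Writable {
-
-  private int age;
-
-  public AgeWritable() {} // required no-argument constructor
-
-  public AgeWritable(int age) { this.age = age; }
-
-  @Override
-  public void write(DataOutput out) throws IOException { out.writeInt(age); }
-
-  @Override
-  public void readFields(DataInput in) throws IOException { age = in.readInt(); }
-}
-{% endhighlight %}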
-
-#### Special Types
-
-You can use special types, including Scala's `Either`, `Option`, and `Try`.
-The Java API has its own custom implementation of `Either`.
-Similarly to Scala's `Either`, it represents a value of one of two possible types, *Left* or *Right*.
-`Either` can be useful for error handling or operators that need to output two different types of records.
-
-#### Type Erasure & Type Inference
-
-*Note: This section is only relevant for Java.*
-
-The Java compiler throws away much of the generic type information after compilation. This is
-known as *type erasure* in Java. It means that at runtime, an instance of an object does not know
-its generic type any more. For example, instances of `DataStream<String>` and `DataStream<Long>` look the
-same to the JVM.
-
-Flink requires type information at the time when it prepares the program for execution (when the
-main method of the program is called). The Flink Java API tries to reconstruct the type information
-that was thrown away in various ways and store it explicitly in the data sets and operators. You can
-retrieve the type via `DataStream.getType()`. The method returns an instance of `TypeInformation`,
-which is Flink's internal way of representing types.
-
-The type inference has its limits and needs the "cooperation" of the programmer in some cases.
-Examples are methods that create data sets from collections, such as
-`ExecutionEnvironment.fromCollection()`, where you can pass an argument that describes the type.
-Generic functions like `MapFunction<I, O>` may also need extra type information.
-
-The
-{% gh_link /flink-core/src/main/java/org/apache/flink/api/java/typeutils/ResultTypeQueryable.java "ResultTypeQueryable" %}
-interface can be implemented by input formats and functions to tell the API
-explicitly about their return type. The *input types* that the functions are invoked with can
-usually be inferred by the result types of the previous operations.
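-
-For example, a function can declare its return type explicitly by implementing
-`ResultTypeQueryable` (a sketch with a hypothetical `ParseTuple` function; here the
-type could also be inferred from the class, but the pattern is the same for cases
-where it cannot):
-
-{% highlight java %}
-import org.apache.flink.api.common.functions.MapFunction;
-import org.apache.flink.api.common.typeinfo.TypeHint;
-import org.apache.flink.api.common.typeinfo.TypeInformation;
-import org.apache.flink.api.java.tuple.Tuple2;
-import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
-
-public class ParseTuple implements MapFunction<String, Tuple2<String, Integer>>,
-    ResultTypeQueryable<Tuple2<String, Integer>> {
-
-  @Override
-  public Tuple2<String, Integer> map(String value) {
-    String[] parts = value.split(",");
-    return new Tuple2<>(parts[0], Integer.parseInt(parts[1]));
-  }
-
-  @Override
-  public TypeInformation<Tuple2<String, Integer>> getProducedType() {
-    // tells the API the exact return type explicitly
-    return TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {});
-  }
-}
-{% endhighlight %}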
-
-Execution Configuration
------------------------
-
-The `StreamExecutionEnvironment` also contains the `ExecutionConfig`, which allows setting job-specific configuration values for the runtime.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-ExecutionConfig executionConfig = env.getConfig();
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-var executionConfig = env.getConfig
-{% endhighlight %}
-</div>
-</div>
-
-The following configuration options are available (defaults are shown in bold):
-
-- **`enableClosureCleaner()`** / `disableClosureCleaner()`. The closure cleaner is enabled by default. The closure cleaner removes unneeded references to the surrounding class of anonymous functions inside Flink programs.
-With the closure cleaner disabled, it might happen that an anonymous user function is referencing the surrounding class, which is usually not Serializable. This will lead to exceptions by the serializer.
-
-- `getParallelism()` / `setParallelism(int parallelism)` Sets the default parallelism for the job.
-
-- `getNumberOfExecutionRetries()` / `setNumberOfExecutionRetries(int numberOfExecutionRetries)` Sets the number of times that failed tasks are re-executed. A value of zero effectively disables fault tolerance. A value of `-1` indicates that the system default value (as defined in the configuration) should be used.
-
-- `getExecutionRetryDelay()` / `setExecutionRetryDelay(long executionRetryDelay)` Sets the delay in milliseconds that the system waits after a job has failed, before re-executing it. The delay starts after all tasks have been successfully stopped on the TaskManagers, and once the delay has passed, the tasks are re-started. This parameter is useful to delay re-execution in order to let certain time-out related failures surface fully (like broken connections that have not fully timed out), before attempting a re-execution and immediately failing again due to the same problem. This parameter only has an effect if the number of execution retries is one or more.
-
-- `getExecutionMode()` / `setExecutionMode()`. The default execution mode is PIPELINED. Sets the execution mode in which the program is executed. The execution mode defines whether data exchanges are performed in a batch or in a pipelined manner.
-
-- `enableForceKryo()` / **`disableForceKryo()`**. Kryo is not forced by default. Forces the GenericTypeInformation to use the Kryo serializer for POJOs even though Flink could analyze them as a POJO. In some cases this might be preferable, for example when Flink's internal serializers fail to handle a POJO properly.
-
-- `enableForceAvro()` / **`disableForceAvro()`**. Avro is not forced by default. Forces the Flink AvroTypeInformation to use the Avro serializer instead of Kryo for serializing Avro POJOs.
-
-- `enableObjectReuse()` / **`disableObjectReuse()`** By default, objects are not reused in Flink. Enabling the object reuse mode will instruct the runtime to reuse user objects for better performance. Keep in mind that this can lead to bugs when the user-code function of an operation is not aware of this behavior.
-
-- **`enableSysoutLogging()`** / `disableSysoutLogging()` JobManager status updates are printed to `System.out` by default. This setting allows disabling this behavior.
-
-- `getGlobalJobParameters()` / `setGlobalJobParameters()` This method allows users to set custom objects as a global configuration for the job. Since the `ExecutionConfig` is accessible in all user defined functions, this is an easy method for making configuration globally available in a job.
-
-- `addDefaultKryoSerializer(Class<?> type, Serializer<?> serializer)` Register a Kryo serializer instance for the given `type`.
-
-- `addDefaultKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass)` Register a Kryo serializer class for the given `type`.
-
-- `registerTypeWithKryoSerializer(Class<?> type, Serializer<?> serializer)` Register the given type with Kryo and specify a serializer for it. By registering a type with Kryo, the serialization of the type will be much more efficient.
-
-- `registerKryoType(Class<?> type)` If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags (integer IDs) are written. If a type is not registered with Kryo, its entire class-name will be serialized with every instance, leading to much higher I/O costs.
-
-- `registerPojoType(Class<?> type)` Registers the given type with the serialization stack. If the type is eventually serialized as a POJO, then the type is registered with the POJO serializer. If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags are written. If a type is not registered with Kryo, its entire class-name will be serialized with every instance, leading to much higher I/O costs.
-
-Note that types registered with `registerKryoType()` are not available to Flink's Kryo serializer instance.
-
-- `disableAutoTypeRegistration()` Automatic type registration is enabled by default. It registers all types (including sub-types) used by user code with Kryo and the POJO serializer.
-
-- `setTaskCancellationInterval(long interval)` Sets the interval (in milliseconds) to wait between consecutive attempts to cancel a running task. When a task is canceled, a new thread is created which periodically calls `interrupt()` on the task thread if the task thread does not terminate within a certain time. This parameter refers to the time between consecutive calls to `interrupt()` and is set by default to **30000** milliseconds, or **30 seconds**.
-
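-As a short example, a job might apply a few of the options listed above like this
-(`MyCustomType` is a hypothetical user class):
-
-{% highlight java %}
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-ExecutionConfig config = env.getConfig();
-
-config.setParallelism(4);                    // default parallelism for this job
-config.setNumberOfExecutionRetries(3);       // re-execute failed tasks up to 3 times
-config.enableObjectReuse();                  // reuse objects to reduce GC pressure
-config.registerKryoType(MyCustomType.class); // write Kryo tags instead of full class names
-{% endhighlight %}
-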
-The `RuntimeContext`, which is accessible in `Rich*` functions through the `getRuntimeContext()` method, also allows access to the `ExecutionConfig` from within all user-defined functions.
-
-{% top %}
-
-Program Packaging and Distributed Execution
------------------------------------------
-
-As described earlier, Flink programs can be executed on
-clusters by using a `remote environment`. Alternatively, programs can be packaged into JAR Files
-(Java Archives) for execution. Packaging a program is a prerequisite to executing it through the
-[command line interface]({{ site.baseurl }}/apis/cli.html).
-
-#### Packaging Programs
-
-To support execution from a packaged JAR file via the command line or web interface, a program must
-use the environment obtained by `StreamExecutionEnvironment.getExecutionEnvironment()`. This environment
-will act as the cluster's environment when the JAR is submitted to the command line or web
-interface. If the Flink program is invoked differently than through these interfaces, the
-environment will act like a local environment.
-
-To package the program, simply export all involved classes as a JAR file. The JAR file's manifest
-must point to the class that contains the program's *entry point* (the class with the public
-`main` method). The simplest way to do this is by putting the *main-class* entry into the
-manifest (such as `main-class: org.apache.flink.example.MyProgram`). The *main-class* attribute is
-the same one that is used by the Java Virtual Machine to find the main method when executing a JAR
-file through the command `java -jar pathToTheJarFile`. Most IDEs offer to include that attribute
-automatically when exporting JAR files.
-
-
-#### Packaging Programs through Plans
-
-Additionally, we support packaging programs as *Plans*. Instead of defining a program in the main
-method and calling
-`execute()` on the environment, plan packaging returns the *Program Plan*, which is a description of
-the program's data flow. To do that, the program must implement the
-`org.apache.flink.api.common.Program` interface, defining the `getPlan(String...)` method. The
-strings passed to that method are the command line arguments. The program's plan can be created from
-the environment via the `ExecutionEnvironment#createProgramPlan()` method. When packaging the
-program's plan, the JAR manifest must point to the class implementing the
-`org.apache.flink.api.common.Program` interface, instead of the class with the main method.
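-
-A minimal sketch of such a program (a hypothetical `MyProgram` that wires up a simple
-data flow and returns its plan instead of executing it):
-
-{% highlight java %}
-import org.apache.flink.api.common.Plan;
-import org.apache.flink.api.common.Program;
-import org.apache.flink.api.java.ExecutionEnvironment;
-
-public class MyProgram implements Program {
-
-  @Override
-  public Plan getPlan(String... args) {
-    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-    env.readTextFile(args[0]).writeAsText(args[1]);
-    // return the plan instead of calling env.execute()
-    return env.createProgramPlan();
-  }
-}
-{% endhighlight %}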
-
-
-#### Summary
-
-The overall procedure to invoke a packaged program is as follows:
-
-1. The JAR's manifest is searched for a *main-class* or *program-class* attribute. If both
-attributes are found, the *program-class* attribute takes precedence over the *main-class*
-attribute. Both the command line and the web interface support a parameter to pass the entry point
-class name manually for cases where the JAR manifest contains neither attribute.
-
-2. If the entry point class implements the `org.apache.flink.api.common.Program` interface, then the system
-calls the `getPlan(String...)` method to obtain the program plan to execute.
-
-3. If the entry point class does not implement the `org.apache.flink.api.common.Program` interface,
-the system will invoke the main method of the class.
-
-{% top %}
-
-Accumulators & Counters
----------------------------
-
-Accumulators are simple constructs with an **add operation** and a **final accumulated result**,
-which is available after the job has ended.
-
-The most straightforward accumulator is a **counter**: You can increment it using the
-```Accumulator.add(V value)``` method. At the end of the job Flink will sum up (merge) all partial
-results and send the result to the client. Accumulators are useful during debugging or if you
-quickly want to find out more about your data.
-
-Flink currently has the following **built-in accumulators**. Each of them implements the
-{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/Accumulator.java "Accumulator" %}
-interface.
-
-- {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/IntCounter.java "__IntCounter__" %},
-  {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/LongCounter.java "__LongCounter__" %}
-  and {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/DoubleCounter.java "__DoubleCounter__" %}:
-  See below for an example using a counter.
-- {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/Histogram.java "__Histogram__" %}:
-  A histogram implementation for a discrete number of bins. Internally it is just a map from Integer
-  to Integer. You can use this to compute distributions of values, e.g. the distribution of
-  words-per-line for a word count program.
-
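-For example, the words-per-line distribution mentioned above could be tracked with a
-`Histogram` roughly like this (a sketch; the `WordsPerLine` function is hypothetical):
-
-{% highlight java %}
-import org.apache.flink.api.common.accumulators.Histogram;
-import org.apache.flink.api.common.functions.RichFlatMapFunction;
-import org.apache.flink.configuration.Configuration;
-import org.apache.flink.util.Collector;
-
-class WordsPerLine extends RichFlatMapFunction<String, String> {
-
-  private final Histogram wordsPerLine = new Histogram();
-
-  @Override
-  public void open(Configuration parameters) {
-    getRuntimeContext().addAccumulator("words-per-line", wordsPerLine);
-  }
-
-  @Override
-  public void flatMap(String line, Collector<String> out) {
-    String[] words = line.split(" ");
-    wordsPerLine.add(words.length); // count this line's words into the histogram
-    for (String word : words) {
-      out.collect(word);
-    }
-  }
-}
-{% endhighlight %}
-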
-__How to use accumulators:__
-
-First you have to create an accumulator object (here a counter) in the user-defined transformation
-function where you want to use it.
-
-{% highlight java %}
-private IntCounter numLines = new IntCounter();
-{% endhighlight %}
-
-Second you have to register the accumulator object, typically in the ```open()``` method of the
-*rich* function. Here you also define the name.
-
-{% highlight java %}
-getRuntimeContext().addAccumulator("num-lines", this.numLines);
-{% endhighlight %}
-
-You can now use the accumulator anywhere in the operator function, including in the ```open()``` and
-```close()``` methods.
-
-{% highlight java %}
-this.numLines.add(1);
-{% endhighlight %}
-
-The overall result will be stored in the ```JobExecutionResult``` object which is
-returned from the `execute()` method of the execution environment
-(currently this only works if the execution waits for the
-completion of the job).
-
-{% highlight java %}
-myJobExecutionResult.getAccumulatorResult("num-lines")
-{% endhighlight %}
-
-All accumulators share a single namespace per job. Thus you can use the same accumulator in
-different operator functions of your job. Flink will internally merge all accumulators with the same
-name.
-
-A note on accumulators and iterations: Currently the result of accumulators is only available after
-the overall job has ended. We plan to also make the result of the previous iteration available in the
-next iteration. You can use
-{% gh_link /flink-java/src/main/java/org/apache/flink/api/java/operators/IterativeDataSet.java#L98 "Aggregators" %}
-to compute per-iteration statistics and base the termination of iterations on such statistics.
-
-__Custom accumulators:__
-
-To implement your own accumulator you simply have to write your implementation of the Accumulator
-interface. Feel free to create a pull request if you think your custom accumulator should be shipped
-with Flink.
-
-You have the choice to implement either
-{% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/Accumulator.java "Accumulator" %}
-or {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/accumulators/SimpleAccumulator.java "SimpleAccumulator" %}.
-
-```Accumulator<V,R>``` is most flexible: It defines a type ```V``` for the value to add, and a
-result type ```R``` for the final result. E.g. for a histogram, ```V``` is a number and ```R``` is
-a histogram. ```SimpleAccumulator``` is for the cases where both types are the same, e.g. for counters.
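-
-A minimal sketch of a custom accumulator (a hypothetical `MaxCounter` that tracks the
-largest value seen, implementing `SimpleAccumulator` since value and result types match):
-
-{% highlight java %}
-import org.apache.flink.api.common.accumulators.Accumulator;
-import org.apache.flink.api.common.accumulators.SimpleAccumulator;
-
-public class MaxCounter implements SimpleAccumulator<Long> {
-
-  private long max = Long.MIN_VALUE;
-
-  @Override
-  public void add(Long value) { max = Math.max(max, value); }
-
-  @Override
-  public Long getLocalValue() { return max; }
-
-  @Override
-  public void resetLocal() { max = Long.MIN_VALUE; }
-
-  @Override
-  public void merge(Accumulator<Long, Long> other) {
-    // called by Flink when combining partial results from parallel instances
-    max = Math.max(max, other.getLocalValue());
-  }
-
-  @Override
-  public MaxCounter clone() {
-    MaxCounter copy = new MaxCounter();
-    copy.max = max;
-    return copy;
-  }
-}
-{% endhighlight %}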
-
-{% top %}
-
-Parallel Execution
-------------------
-
-This section describes how the parallel execution of programs can be configured in Flink. A Flink
-program consists of multiple tasks (transformations/operators, data sources, and sinks). A task is split into
-several parallel instances for execution and each parallel instance processes a subset of the task's
-input data. The number of parallel instances of a task is called its *parallelism*.
-
-
-The parallelism of a task can be specified in Flink on different levels.
-
-### Operator Level
-
-The parallelism of an individual operator, data source, or data sink can be defined by calling its
-`setParallelism()` method.  For example, like this:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-DataStream<String> text = [...]
-DataStream<Tuple2<String, Integer>> wordCounts = text
-    .flatMap(new LineSplitter())
-    .keyBy(0)
-    .timeWindow(Time.seconds(5))
-    .sum(1).setParallelism(5);
-
-wordCounts.print();
-
-env.execute("Word Count Example");
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-
-val text = [...]
-val wordCounts = text
-    .flatMap{ _.split(" ") map { (_, 1) } }
-    .keyBy(0)
-    .timeWindow(Time.seconds(5))
-    .sum(1).setParallelism(5)
-wordCounts.print()
-
-env.execute("Word Count Example")
-{% endhighlight %}
-</div>
-</div>
-
-### Execution Environment Level
-
-As mentioned [here](#anatomy-of-a-flink-program) Flink programs are executed in the context
-of an execution environment. An
-execution environment defines a default parallelism for all operators, data sources, and data sinks
-it executes. Execution environment parallelism can be overwritten by explicitly configuring the
-parallelism of an operator.
-
-The default parallelism of an execution environment can be specified by calling the
-`setParallelism()` method. To execute all operators, data sources, and data sinks with a parallelism
-of `3`, set the default parallelism of the execution environment as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.setParallelism(3);
-
-DataStream<String> text = [...]
-DataStream<Tuple2<String, Integer>> wordCounts = [...]
-wordCounts.print();
-
-env.execute("Word Count Example");
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-env.setParallelism(3)
-
-val text = [...]
-val wordCounts = text
-    .flatMap{ _.split(" ") map { (_, 1) } }
-    .keyBy(0)
-    .timeWindow(Time.seconds(5))
-    .sum(1)
-wordCounts.print()
-
-env.execute("Word Count Example")
-{% endhighlight %}
-</div>
-</div>
-
-### Client Level
-
-The parallelism can be set at the Client when submitting jobs to Flink. The
-Client can either be a Java or a Scala program. One example of such a Client is
-Flink's Command-line Interface (CLI).
-
-For the CLI client, the parallelism parameter can be specified with `-p`. For
-example:
-
-    ./bin/flink run -p 10 ../examples/*WordCount-java*.jar
-
-
-In a Java/Scala program, the parallelism is set as follows:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-
-try {
-    PackagedProgram program = new PackagedProgram(file, args);
-    InetSocketAddress jobManagerAddress = RemoteExecutor.getInetFromHostport("localhost:6123");
-    Configuration config = new Configuration();
-
-    Client client = new Client(jobManagerAddress, config, program.getUserCodeClassLoader());
-
-    // set the parallelism to 10 here
-    client.run(program, 10, true);
-
-} catch (ProgramInvocationException e) {
-    e.printStackTrace();
-}
-
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-try {
-    val program = new PackagedProgram(file, args)
-    val jobManagerAddress = RemoteExecutor.getInetFromHostport("localhost:6123")
-    val config = new Configuration()
-
-    val client = new Client(jobManagerAddress, config, program.getUserCodeClassLoader())
-
-    // set the parallelism to 10 here
-    client.run(program, 10, true)
-
-} catch {
-    case e: ProgramInvocationException => e.printStackTrace()
-}
-{% endhighlight %}
-</div>
-</div>
-
-
-### System Level
-
-A system-wide default parallelism for all execution environments can be defined by setting the
-`parallelism.default` property in `./conf/flink-conf.yaml`. See the
-[Configuration]({{ site.baseurl }}/setup/config.html) documentation for details.
-
-{% top %}
-
-Execution Plans
----------------
-
-Depending on various parameters such as data size or number of machines in the cluster, Flink's
-optimizer automatically chooses an execution strategy for your program. In many cases, it can be
-useful to know how exactly Flink will execute your program.
-
-__Plan Visualization Tool__
-
-Flink comes packaged with a visualization tool for execution plans. The HTML document containing
-the visualizer is located under ```tools/planVisualizer.html```. It takes a JSON representation of
-the job execution plan and visualizes it as a graph with complete annotations of execution
-strategies.
-
-The following code shows how to print the execution plan JSON from your program:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-...
-
-System.out.println(env.getExecutionPlan());
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = ExecutionEnvironment.getExecutionEnvironment
-
-...
-
-println(env.getExecutionPlan())
-{% endhighlight %}
-</div>
-</div>
-
-
-To visualize the execution plan, do the following:
-
-1. **Open** ```planVisualizer.html``` with your web browser,
-2. **Paste** the JSON string into the text field, and
-3. **Press** the draw button.
-
-After these steps, a detailed execution plan will be visualized.
-
-<img alt="A flink job execution graph." src="fig/plan_visualizer.png" width="80%">
-
-
-__Web Interface__
-
-Flink offers a web interface for submitting and executing jobs. The interface is part of the JobManager's
-web interface for monitoring, which by default runs on port 8081. Job submission via this interface requires
-that you have set `jobmanager.web.submit.enable: true` in `flink-conf.yaml`.
-
-You may specify program arguments before the job is executed. The plan visualization lets you
-inspect the execution plan before executing the Flink job.
-
-{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/connectors.md
----------------------------------------------------------------------
diff --git a/docs/apis/connectors.md b/docs/apis/connectors.md
deleted file mode 100644
index 6568073..0000000
--- a/docs/apis/connectors.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title:  "Connectors"
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-The *Connectors* have been moved. Redirecting to [{{ site.baseurl }}/apis/batch/connectors.html]({{ site.baseurl }}/apis/batch/connectors.html) in 1 second.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/index.md
----------------------------------------------------------------------
diff --git a/docs/apis/index.md b/docs/apis/index.md
deleted file mode 100644
index ab12b79..0000000
--- a/docs/apis/index.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: "Programming Guides"
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/java8.md
----------------------------------------------------------------------
diff --git a/docs/apis/java8.md b/docs/apis/java8.md
deleted file mode 100644
index c820355..0000000
--- a/docs/apis/java8.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-title: "Java 8 Programming Guide"
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 12
-top-nav-title: Java 8
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Java 8 introduces several new language features designed for faster and clearer coding. With the most important feature,
-the so-called "Lambda Expressions", Java 8 opens the door to functional programming. Lambda Expressions allow for implementing and
-passing functions in a straightforward way without having to declare additional (anonymous) classes.
-
-The newest version of Flink supports the usage of Lambda Expressions for all operators of the Java API.
-This document shows how to use Lambda Expressions and describes current limitations. For a general introduction to the
-Flink API, please refer to the [Programming Guide](programming_guide.html).
-
-* TOC
-{:toc}
-
-### Examples
-
-The following example illustrates how to implement a simple, inline `map()` function that squares its input using a Lambda Expression.
-The types of input `i` and output parameters of the `map()` function need not be declared as they are inferred by the Java 8 compiler.
-
-~~~java
-env.fromElements(1, 2, 3)
-// returns the squared i
-.map(i -> i*i)
-.print();
-~~~
-
-The next two examples show different implementations of a function that uses a `Collector` for output.
-Functions, such as `flatMap()`, require an output type (in this case `String`) to be defined for the `Collector` in order to be type-safe.
-If the `Collector` type cannot be inferred from the surrounding context, it needs to be declared in the Lambda Expression's parameter list manually.
-Otherwise the output will be treated as type `Object`, which can lead to undesired behavior.
-
-~~~java
-DataSet<Integer> input = env.fromElements(1, 2, 3);
-
-// collector type must be declared
-input.flatMap((Integer number, Collector<String> out) -> {
-    StringBuilder builder = new StringBuilder();
-    for(int i = 0; i < number; i++) {
-        builder.append("a");
-        out.collect(builder.toString());
-    }
-})
-// returns (on separate lines) "a", "a", "aa", "a", "aa", "aaa"
-.print();
-~~~
-
-~~~java
-DataSet<Integer> input = env.fromElements(1, 2, 3);
-
-// collector type must not be declared, it is inferred from the type of the dataset
-DataSet<String> manyALetters = input.flatMap((number, out) -> {
-    StringBuilder builder = new StringBuilder();
-    for(int i = 0; i < number; i++) {
-       builder.append("a");
-       out.collect(builder.toString());
-    }
-});
-
-// returns (on separate lines) "a", "a", "aa", "a", "aa", "aaa"
-manyALetters.print();
-~~~
-
-The following code demonstrates a word count which makes extensive use of Lambda Expressions.
-
-~~~java
-DataSet<String> input = env.fromElements("Please count", "the words", "but not this");
-
-// filter out strings that contain "not"
-input.filter(line -> !line.contains("not"))
-// split each line by space
-.map(line -> line.split(" "))
-// emit a pair <word,1> for each array element
-.flatMap((String[] wordArray, Collector<Tuple2<String, Integer>> out)
-    -> Arrays.stream(wordArray).forEach(t -> out.collect(new Tuple2<>(t, 1)))
-    )
-// group and sum up
-.groupBy(0).sum(1)
-// print
-.print();
-~~~
-
-### Compiler Limitations
-Currently, Flink only supports jobs containing Lambda Expressions completely if they are **compiled with the Eclipse JDT compiler contained in Eclipse Luna 4.4.2 (and above)**.
-
-Only the Eclipse JDT compiler preserves the generic type information necessary to use the entire Lambda Expressions feature type-safely.
-Other compilers such as the OpenJDK's and Oracle JDK's `javac` throw away all generic parameters related to Lambda Expressions. This means that types such as `Tuple2<String, Integer>` or `Collector<String>` declared as a Lambda function input or output parameter will be pruned to `Tuple2` or `Collector` in the compiled `.class` files, which is too little information for the Flink Compiler.
-
-How to compile a Flink job that contains Lambda Expressions with the JDT compiler will be covered in the next section.
-
-However, it is possible to implement functions such as `map()` or `filter()` with Lambda Expressions in Java 8 compilers other than the Eclipse JDT compiler as long as the function has no `Collector`s or `Iterable`s *and* only if the function handles unparameterized types such as `Integer`, `Long`, `String`, `MyOwnClass` (types without Generics!).
-
-#### Compile Flink jobs with the Eclipse JDT compiler and Maven
-
-If you are using the Eclipse IDE, you can run and debug your Flink code within the IDE without any problems after some configuration steps. The Eclipse IDE by default compiles its Java sources with the Eclipse JDT compiler. The next section describes how to configure the Eclipse IDE.
-
-If you are using a different IDE such as IntelliJ IDEA or you want to package your Jar-File with Maven to run your job on a cluster, you need to modify your project's `pom.xml` file and build your program with Maven. The [quickstart]({{site.baseurl}}/quickstart/setup_quickstart.html) contains preconfigured Maven projects which can be used for new projects or as a reference. Uncomment the mentioned lines in your generated quickstart `pom.xml` file if you want to use Java 8 with Lambda Expressions.
-
-Alternatively, you can manually insert the following lines to your Maven `pom.xml` file. Maven will then use the Eclipse JDT compiler for compilation.
-
-~~~xml
-<!-- put these lines under "project/build/pluginManagement/plugins" of your pom.xml -->
-
-<plugin>
-    <!-- Use compiler plugin with tycho as the adapter to the JDT compiler. -->
-    <artifactId>maven-compiler-plugin</artifactId>
-    <configuration>
-        <source>1.8</source>
-        <target>1.8</target>
-        <compilerId>jdt</compilerId>
-    </configuration>
-    <dependencies>
-        <!-- This dependency provides the implementation of compiler "jdt": -->
-        <dependency>
-            <groupId>org.eclipse.tycho</groupId>
-            <artifactId>tycho-compiler-jdt</artifactId>
-            <version>0.21.0</version>
-        </dependency>
-    </dependencies>
-</plugin>
-~~~
-
-If you are using Eclipse for development, the m2e plugin might complain about the inserted lines above and marks your `pom.xml` as invalid. If so, insert the following lines to your `pom.xml`.
-
-~~~xml
-<!-- put these lines under "project/build/pluginManagement/plugins/plugin[groupId="org.eclipse.m2e", artifactId="lifecycle-mapping"]/configuration/lifecycleMappingMetadata/pluginExecutions" of your pom.xml -->
-
-<pluginExecution>
-    <pluginExecutionFilter>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-compiler-plugin</artifactId>
-        <versionRange>[3.1,)</versionRange>
-        <goals>
-            <goal>testCompile</goal>
-            <goal>compile</goal>
-        </goals>
-    </pluginExecutionFilter>
-    <action>
-        <ignore></ignore>
-    </action>
-</pluginExecution>
-~~~
-
-#### Run and debug Flink jobs within the Eclipse IDE
-
-First of all, make sure you are running a current version of Eclipse IDE (4.4.2 or later). Also make sure that you have a Java 8 Runtime Environment (JRE) installed in Eclipse IDE (`Window` -> `Preferences` -> `Java` -> `Installed JREs`).
-
-Create/Import your Eclipse project.
-
-If you are using Maven, you also need to change the Java version in your `pom.xml` for the `maven-compiler-plugin`. Otherwise right click the `JRE System Library` section of your project and open the `Properties` window in order to switch to a Java 8 JRE (or above) that supports Lambda Expressions.
-
-The Eclipse JDT compiler needs a special compiler flag in order to store type information in `.class` files. Open the JDT configuration file at `{project directory}/.settings/org.eclipse.jdt.core.prefs` with your favorite text editor and add the following line:
-
-~~~
-org.eclipse.jdt.core.compiler.codegen.lambda.genericSignature=generate
-~~~
-
-If not already done, also modify the Java versions of the following properties to `1.8` (or above):
-
-~~~
-org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.8
-org.eclipse.jdt.core.compiler.compliance=1.8
-org.eclipse.jdt.core.compiler.source=1.8
-~~~
-
-After you have saved the file, perform a complete project refresh in Eclipse IDE.
-
-If you are using Maven, right click your Eclipse project and select `Maven` -> `Update Project...`.
-
-You have configured everything correctly if the following Flink program runs without exceptions:
-
-~~~java
-final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-// print() triggers execution, so no separate call to env.execute() is needed
-env.fromElements(1, 2, 3).map((in) -> new Tuple1<String>(" " + in)).print();
-~~~

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/local_execution.md
----------------------------------------------------------------------
diff --git a/docs/apis/local_execution.md b/docs/apis/local_execution.md
deleted file mode 100644
index 07e15fa..0000000
--- a/docs/apis/local_execution.md
+++ /dev/null
@@ -1,126 +0,0 @@
----
-title:  "Local Execution"
-# Top-level navigation
-top-nav-group: apis
-top-nav-pos: 7
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Flink can run on a single machine, even in a single Java Virtual Machine. This allows users to test and debug Flink programs locally. This section gives an overview of the local execution mechanisms.
-
-The local environments and executors allow you to run Flink programs in a local Java Virtual Machine, or within any JVM as part of existing programs. Most examples can be launched locally by simply hitting the "Run" button of your IDE.
-
-There are two different kinds of local execution supported in Flink. The `LocalExecutionEnvironment` starts the full Flink runtime, including a JobManager and a TaskManager. This includes memory management and all the internal algorithms that are executed in cluster mode.
-
-The `CollectionEnvironment` executes the Flink program on Java collections. This mode does not start the full Flink runtime, so the execution is very low-overhead and lightweight. For example, a `DataSet.map()` transformation is executed by applying the `map()` function to all elements in a Java list.
-
-* TOC
-{:toc}
-
-
-## Debugging
-
-If you are running Flink programs locally, you can also debug your program like any other Java program. You can either use `System.out.println()` to write out some internal variables or you can use the debugger. It is possible to set breakpoints within `map()`, `reduce()` and all the other methods.
-Please also refer to the [debugging section](programming_guide.html#debugging) in the Java API documentation for a guide to testing and local debugging utilities in the Java API.
-
-## Maven Dependency
-
-If you are developing your program in a Maven project, you have to add the `flink-clients` module using this dependency:
-
-~~~xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version}}</version>
-</dependency>
-~~~
-
-## Local Environment
-
-The `LocalEnvironment` is a handle to local execution for Flink programs. Use it to run a program within a local JVM - standalone or embedded in other programs.
-
-The local environment is instantiated via the method `ExecutionEnvironment.createLocalEnvironment()`. By default, it will use as many local threads for execution as your machine has CPU cores (hardware contexts). You can alternatively specify the desired parallelism. The local environment can be configured to log to the console using `enableLogging()`/`disableLogging()`.
-
-In most cases, calling `ExecutionEnvironment.getExecutionEnvironment()` is an even better way to go. That method returns a `LocalEnvironment` when the program is started locally (outside the command line interface), and it returns a pre-configured environment for cluster execution when the program is invoked by the [command line interface](cli.html).
-
-~~~java
-public static void main(String[] args) throws Exception {
-    ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
-
-    DataSet<String> data = env.readTextFile("file:///path/to/file");
-
-    data
-        .filter(new FilterFunction<String>() {
-            public boolean filter(String value) {
-                return value.startsWith("http://");
-            }
-        })
-        .writeAsText("file:///path/to/result");
-
-    JobExecutionResult res = env.execute();
-}
-~~~
-
-The `JobExecutionResult` object, which is returned after the execution finished, contains the program runtime and the accumulator results.
-
-The `LocalEnvironment` also allows passing custom configuration values to Flink.
-
-~~~java
-Configuration conf = new Configuration();
-conf.setFloat(ConfigConstants.TASK_MANAGER_MEMORY_FRACTION_KEY, 0.5f);
-final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf);
-~~~
-
-*Note:* The local execution environments do not start any web frontend to monitor the execution.
-
-## Collection Environment
-
-The execution on Java Collections using the `CollectionEnvironment` is a low-overhead approach for executing Flink programs. Typical use-cases for this mode are automated tests, debugging and code re-use.
-
-Users can also apply algorithms implemented for batch processing to more interactive use cases. For example, a slightly changed variant of a Flink program could be used in a Java Application Server to process incoming requests.
-
-**Skeleton for Collection-based execution**
-
-~~~java
-public static void main(String[] args) throws Exception {
-    // initialize a new Collection-based execution environment
-    final ExecutionEnvironment env = new CollectionEnvironment();
-
-    DataSet<User> users = env.fromCollection( /* get elements from a Java Collection */);
-
-    /* Data Set transformations ... */
-
-    // retrieve the resulting Tuple2 elements into an ArrayList.
-    Collection<...> result = new ArrayList<...>();
-    resultDataSet.output(new LocalCollectionOutputFormat<...>(result));
-
-    // kick off execution.
-    env.execute();
-
-    // Do some work with the resulting ArrayList (=Collection).
-    for(... t : result) {
-        System.err.println("Result = "+t);
-    }
-}
-~~~
-
-The `flink-examples-batch` module contains a full example, called `CollectionExecutionExample`.
-
-Please note that the execution of collection-based Flink programs is only possible on small data that fits into the JVM heap. The execution on collections is not multi-threaded; only one thread is used.


[51/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
[FLINK-4317, FLIP-3] [docs] Restructure docs

- Add redirect layout
- Remove Maven artifact name warning
- Add info box if stable, but not latest
- Add font-awesome 4.6.3
- Add sidenav layout

This closes #2387.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/844c874b
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/844c874b
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/844c874b

Branch: refs/heads/flip-6
Commit: 844c874b52546eaca78af8cbfc6f08cb2b4d873c
Parents: c09ff03
Author: Ufuk Celebi <uc...@apache.org>
Authored: Wed Aug 17 15:06:04 2016 +0200
Committer: Ufuk Celebi <uc...@apache.org>
Committed: Wed Aug 24 11:25:56 2016 +0200

----------------------------------------------------------------------
 LICENSE                                         |    5 +-
 docs/README.md                                  |   42 +-
 docs/_config.yml                                |   17 +-
 docs/_includes/navbar.html                      |  117 -
 docs/_includes/sidenav.html                     |  149 ++
 docs/_layouts/base.html                         |   33 +-
 docs/_layouts/plain.html                        |  130 +-
 docs/_layouts/redirect.html                     |   27 +
 docs/apis/batch/connectors.md                   |  242 --
 docs/apis/batch/dataset_transformations.md      | 2338 ------------------
 docs/apis/batch/examples.md                     |  521 ----
 docs/apis/batch/fault_tolerance.md              |  100 -
 docs/apis/batch/fig/LICENSE.txt                 |   17 -
 .../fig/iterations_delta_iterate_operator.png   |  Bin 113607 -> 0 bytes
 ...terations_delta_iterate_operator_example.png |  Bin 335057 -> 0 bytes
 .../batch/fig/iterations_iterate_operator.png   |  Bin 63465 -> 0 bytes
 .../fig/iterations_iterate_operator_example.png |  Bin 102925 -> 0 bytes
 docs/apis/batch/fig/iterations_supersteps.png   |  Bin 54098 -> 0 bytes
 docs/apis/batch/hadoop_compatibility.md         |  249 --
 docs/apis/batch/index.md                        | 2274 -----------------
 docs/apis/batch/iterations.md                   |  213 --
 docs/apis/batch/libs/fig/LICENSE.txt            |   17 -
 .../apis/batch/libs/fig/gelly-example-graph.png |  Bin 18813 -> 0 bytes
 docs/apis/batch/libs/fig/gelly-filter.png       |  Bin 57192 -> 0 bytes
 docs/apis/batch/libs/fig/gelly-gsa-sssp1.png    |  Bin 31201 -> 0 bytes
 .../apis/batch/libs/fig/gelly-reduceOnEdges.png |  Bin 23843 -> 0 bytes
 .../batch/libs/fig/gelly-reduceOnNeighbors.png  |  Bin 34903 -> 0 bytes
 docs/apis/batch/libs/fig/gelly-union.png        |  Bin 50498 -> 0 bytes
 docs/apis/batch/libs/fig/gelly-vc-sssp1.png     |  Bin 28537 -> 0 bytes
 .../libs/fig/vertex-centric supersteps.png      |  Bin 80198 -> 0 bytes
 docs/apis/batch/libs/gelly.md                   |   26 -
 docs/apis/batch/libs/gelly/graph_algorithms.md  |  311 ---
 docs/apis/batch/libs/gelly/graph_api.md         |  836 -------
 docs/apis/batch/libs/gelly/graph_generators.md  |  657 -----
 docs/apis/batch/libs/gelly/index.md             |   74 -
 .../libs/gelly/iterative_graph_processing.md    |  971 --------
 docs/apis/batch/libs/gelly/library_methods.md   |  350 ---
 docs/apis/batch/libs/index.md                   |   29 -
 docs/apis/batch/libs/ml/als.md                  |  178 --
 docs/apis/batch/libs/ml/contribution_guide.md   |  110 -
 docs/apis/batch/libs/ml/cross_validation.md     |  175 --
 docs/apis/batch/libs/ml/distance_metrics.md     |  111 -
 docs/apis/batch/libs/ml/index.md                |  151 --
 docs/apis/batch/libs/ml/knn.md                  |  149 --
 docs/apis/batch/libs/ml/min_max_scaler.md       |  116 -
 .../batch/libs/ml/multiple_linear_regression.md |  164 --
 docs/apis/batch/libs/ml/optimization.md         |  385 ---
 docs/apis/batch/libs/ml/pipelines.md            |  445 ----
 docs/apis/batch/libs/ml/polynomial_features.md  |  111 -
 docs/apis/batch/libs/ml/quickstart.md           |  244 --
 docs/apis/batch/libs/ml/standard_scaler.md      |  116 -
 docs/apis/batch/libs/ml/svm.md                  |  223 --
 docs/apis/batch/libs/table.md                   |   26 -
 docs/apis/batch/python.md                       |  638 -----
 docs/apis/batch/zip_elements_guide.md           |  128 -
 docs/apis/best_practices.md                     |  403 ---
 docs/apis/cli.md                                |  322 ---
 docs/apis/cluster_execution.md                  |  156 --
 docs/apis/common/fig/plan_visualizer.png        |  Bin 145778 -> 0 bytes
 docs/apis/common/index.md                       | 1352 ----------
 docs/apis/connectors.md                         |   23 -
 docs/apis/index.md                              |   21 -
 docs/apis/java8.md                              |  198 --
 docs/apis/local_execution.md                    |  126 -
 docs/apis/metrics.md                            |  470 ----
 docs/apis/programming_guide.md                  |   26 -
 docs/apis/scala_api_extensions.md               |  409 ---
 docs/apis/scala_shell.md                        |  197 --
 docs/apis/streaming/connectors/cassandra.md     |  158 --
 docs/apis/streaming/connectors/elasticsearch.md |  183 --
 .../apis/streaming/connectors/elasticsearch2.md |  144 --
 .../streaming/connectors/filesystem_sink.md     |  133 -
 docs/apis/streaming/connectors/index.md         |   47 -
 docs/apis/streaming/connectors/kafka.md         |  293 ---
 docs/apis/streaming/connectors/kinesis.md       |  322 ---
 docs/apis/streaming/connectors/nifi.md          |  141 --
 docs/apis/streaming/connectors/rabbitmq.md      |  132 -
 docs/apis/streaming/connectors/redis.md         |  177 --
 docs/apis/streaming/connectors/twitter.md       |   89 -
 docs/apis/streaming/event_time.md               |  208 --
 .../streaming/event_timestamp_extractors.md     |  108 -
 .../streaming/event_timestamps_watermarks.md    |  332 ---
 docs/apis/streaming/fault_tolerance.md          |  462 ----
 docs/apis/streaming/fig/LICENSE.txt             |   17 -
 .../fig/parallel_streams_watermarks.svg         |  516 ----
 docs/apis/streaming/fig/rescale.svg             |  472 ----
 docs/apis/streaming/fig/savepoints-overview.png |  Bin 62824 -> 0 bytes
 .../streaming/fig/savepoints-program_ids.png    |  Bin 55492 -> 0 bytes
 .../streaming/fig/stream_watermark_in_order.svg |  314 ---
 .../fig/stream_watermark_out_of_order.svg       |  314 ---
 docs/apis/streaming/fig/times_clocks.svg        |  368 ---
 docs/apis/streaming/index.md                    | 1787 -------------
 docs/apis/streaming/libs/cep.md                 |  659 -----
 docs/apis/streaming/libs/index.md               |   27 -
 docs/apis/streaming/non-windowed.svg            |   22 -
 docs/apis/streaming/savepoints.md               |  110 -
 docs/apis/streaming/session-windows.svg         |   22 -
 docs/apis/streaming/sliding-windows.svg         |   22 -
 docs/apis/streaming/state.md                    |  295 ---
 docs/apis/streaming/state_backends.md           |  163 --
 docs/apis/streaming/storm_compatibility.md      |  287 ---
 docs/apis/streaming/tumbling-windows.svg        |   22 -
 docs/apis/streaming/windows.md                  |  678 -----
 docs/apis/streaming_guide.md                    |   26 -
 docs/apis/table.md                              | 2082 ----------------
 docs/concepts/concepts.md                       |  246 --
 docs/concepts/fig/checkpoints.svg               |  249 --
 .../fig/event_ingestion_processing_time.svg     |  375 ---
 docs/concepts/fig/parallel_dataflow.svg         |  487 ----
 docs/concepts/fig/processes.svg                 |  749 ------
 docs/concepts/fig/program_dataflow.svg          |  546 ----
 docs/concepts/fig/slot_sharing.svg              |  721 ------
 docs/concepts/fig/state_partitioning.svg        |  291 ---
 docs/concepts/fig/tasks_chains.svg              |  463 ----
 docs/concepts/fig/tasks_slots.svg               |  395 ---
 docs/concepts/fig/windows.svg                   |  193 --
 docs/concepts/index.md                          |  249 ++
 docs/dev/api_concepts.md                        | 1349 ++++++++++
 docs/dev/apis.md                                |   24 +
 docs/dev/batch/connectors.md                    |  238 ++
 docs/dev/batch/dataset_transformations.md       | 2335 +++++++++++++++++
 docs/dev/batch/examples.md                      |  519 ++++
 docs/dev/batch/fault_tolerance.md               |   98 +
 docs/dev/batch/hadoop_compatibility.md          |  248 ++
 docs/dev/batch/index.md                         | 2267 +++++++++++++++++
 docs/dev/batch/iterations.md                    |  212 ++
 docs/dev/batch/python.md                        |  635 +++++
 docs/dev/batch/zip_elements_guide.md            |  126 +
 docs/dev/cluster_execution.md                   |  155 ++
 docs/dev/connectors/cassandra.md                |  155 ++
 docs/dev/connectors/elasticsearch.md            |  180 ++
 docs/dev/connectors/elasticsearch2.md           |  141 ++
 docs/dev/connectors/filesystem_sink.md          |  130 +
 docs/dev/connectors/index.md                    |   46 +
 docs/dev/connectors/kafka.md                    |  289 +++
 docs/dev/connectors/kinesis.md                  |  319 +++
 docs/dev/connectors/nifi.md                     |  138 ++
 docs/dev/connectors/rabbitmq.md                 |  129 +
 docs/dev/connectors/redis.md                    |  174 ++
 docs/dev/connectors/twitter.md                  |   85 +
 docs/dev/datastream_api.md                      | 1779 +++++++++++++
 docs/dev/event_time.md                          |  206 ++
 docs/dev/event_timestamp_extractors.md          |  106 +
 docs/dev/event_timestamps_watermarks.md         |  329 +++
 docs/dev/index.md                               |   25 +
 docs/dev/java8.md                               |  196 ++
 docs/dev/libraries.md                           |   24 +
 docs/dev/libs/cep.md                            |  652 +++++
 docs/dev/libs/gelly/graph_algorithms.md         |  308 +++
 docs/dev/libs/gelly/graph_api.md                |  833 +++++++
 docs/dev/libs/gelly/graph_generators.md         |  654 +++++
 docs/dev/libs/gelly/index.md                    |   69 +
 .../libs/gelly/iterative_graph_processing.md    |  968 ++++++++
 docs/dev/libs/gelly/library_methods.md          |  347 +++
 docs/dev/libs/ml/als.md                         |  175 ++
 docs/dev/libs/ml/contribution_guide.md          |  106 +
 docs/dev/libs/ml/cross_validation.md            |  171 ++
 docs/dev/libs/ml/distance_metrics.md            |  107 +
 docs/dev/libs/ml/index.md                       |  144 ++
 docs/dev/libs/ml/knn.md                         |  144 ++
 docs/dev/libs/ml/min_max_scaler.md              |  112 +
 docs/dev/libs/ml/multiple_linear_regression.md  |  160 ++
 docs/dev/libs/ml/optimization.md                |  382 +++
 docs/dev/libs/ml/pipelines.md                   |  441 ++++
 docs/dev/libs/ml/polynomial_features.md         |  108 +
 docs/dev/libs/ml/quickstart.md                  |  243 ++
 docs/dev/libs/ml/standard_scaler.md             |  113 +
 docs/dev/libs/ml/svm.md                         |  220 ++
 docs/dev/libs/storm_compatibility.md            |  287 +++
 docs/dev/local_execution.md                     |  125 +
 docs/dev/quickstarts.md                         |   24 +
 docs/dev/scala_api_extensions.md                |  408 +++
 docs/dev/scala_shell.md                         |  193 ++
 docs/dev/state.md                               |  293 +++
 docs/dev/state_backends.md                      |  162 ++
 docs/dev/table_api.md                           | 2079 ++++++++++++++++
 docs/dev/types_serialization.md                 |  253 ++
 docs/dev/windows.md                             |  677 +++++
 docs/fig/ClientJmTm.svg                         |  348 +++
 docs/fig/FlinkOnYarn.svg                        |  151 ++
 docs/fig/back_pressure_sampling.png             |  Bin 0 -> 17635 bytes
 docs/fig/back_pressure_sampling_high.png        |  Bin 0 -> 77546 bytes
 docs/fig/back_pressure_sampling_in_progress.png |  Bin 0 -> 79112 bytes
 docs/fig/back_pressure_sampling_ok.png          |  Bin 0 -> 79668 bytes
 docs/fig/checkpointing.svg                      | 1731 +++++++++++++
 docs/fig/checkpoints.svg                        |  249 ++
 docs/fig/event_ingestion_processing_time.svg    |  375 +++
 docs/fig/flink-on-emr.png                       |  Bin 0 -> 103880 bytes
 docs/fig/gelly-example-graph.png                |  Bin 0 -> 18813 bytes
 docs/fig/gelly-filter.png                       |  Bin 0 -> 57192 bytes
 docs/fig/gelly-gsa-sssp1.png                    |  Bin 0 -> 31201 bytes
 docs/fig/gelly-reduceOnEdges.png                |  Bin 0 -> 23843 bytes
 docs/fig/gelly-reduceOnNeighbors.png            |  Bin 0 -> 34903 bytes
 docs/fig/gelly-union.png                        |  Bin 0 -> 50498 bytes
 docs/fig/gelly-vc-sssp1.png                     |  Bin 0 -> 28537 bytes
 docs/fig/iterations_delta_iterate_operator.png  |  Bin 0 -> 113607 bytes
 ...terations_delta_iterate_operator_example.png |  Bin 0 -> 335057 bytes
 docs/fig/iterations_iterate_operator.png        |  Bin 0 -> 63465 bytes
 .../fig/iterations_iterate_operator_example.png |  Bin 0 -> 102925 bytes
 docs/fig/iterations_supersteps.png              |  Bin 0 -> 54098 bytes
 docs/fig/job_and_execution_graph.svg            |  851 +++++++
 docs/fig/job_status.svg                         | 1049 ++++++++
 docs/fig/jobmanager_ha_overview.png             |  Bin 0 -> 57875 bytes
 docs/fig/non-windowed.svg                       |   22 +
 docs/fig/parallel_dataflow.svg                  |  487 ++++
 docs/fig/parallel_streams_watermarks.svg        |  516 ++++
 docs/fig/plan_visualizer.png                    |  Bin 0 -> 145778 bytes
 docs/fig/processes.svg                          |  749 ++++++
 docs/fig/program_dataflow.svg                   |  546 ++++
 docs/fig/projects_dependencies.svg              |  580 +++++
 docs/fig/rescale.svg                            |  472 ++++
 docs/fig/savepoints-overview.png                |  Bin 0 -> 62824 bytes
 docs/fig/savepoints-program_ids.png             |  Bin 0 -> 55492 bytes
 docs/fig/session-windows.svg                    |   22 +
 docs/fig/sliding-windows.svg                    |   22 +
 docs/fig/slot_sharing.svg                       |  721 ++++++
 docs/fig/slots.svg                              |  505 ++++
 docs/fig/slots_parallelism.svg                  |  695 ++++++
 docs/fig/stack.svg                              |  606 +++++
 docs/fig/state_machine.svg                      |  318 +++
 docs/fig/state_partitioning.svg                 |  291 +++
 docs/fig/stream_aligning.svg                    |  877 +++++++
 docs/fig/stream_barriers.svg                    |  309 +++
 docs/fig/stream_watermark_in_order.svg          |  314 +++
 docs/fig/stream_watermark_out_of_order.svg      |  314 +++
 docs/fig/tasks_chains.svg                       |  463 ++++
 docs/fig/tasks_slots.svg                        |  395 +++
 docs/fig/times_clocks.svg                       |  368 +++
 docs/fig/tumbling-windows.svg                   |   22 +
 docs/fig/vertex-centric supersteps.png          |  Bin 0 -> 80198 bytes
 docs/fig/windows.svg                            |  193 ++
 docs/index.md                                   |   27 +-
 docs/internals/_draft_distributed_akka.md       |   47 -
 docs/internals/add_operator.md                  |   24 +-
 docs/internals/back_pressure_monitoring.md      |   83 -
 docs/internals/coding_guidelines.md             |   25 -
 docs/internals/fig/ClientJmTm.svg               |  348 ---
 docs/internals/fig/LICENSE.txt                  |   17 -
 docs/internals/fig/back_pressure_sampling.png   |  Bin 17635 -> 0 bytes
 .../fig/back_pressure_sampling_high.png         |  Bin 77546 -> 0 bytes
 .../fig/back_pressure_sampling_in_progress.png  |  Bin 79112 -> 0 bytes
 .../internals/fig/back_pressure_sampling_ok.png |  Bin 79668 -> 0 bytes
 docs/internals/fig/checkpointing.svg            | 1731 -------------
 docs/internals/fig/job_and_execution_graph.svg  |  851 -------
 docs/internals/fig/job_status.svg               | 1049 --------
 docs/internals/fig/projects_dependencies.svg    |  580 -----
 docs/internals/fig/slots.svg                    |  505 ----
 docs/internals/fig/stack.svg                    |  606 -----
 docs/internals/fig/state_machine.svg            |  318 ---
 docs/internals/fig/stream_aligning.svg          |  877 -------
 docs/internals/fig/stream_barriers.svg          |  309 ---
 docs/internals/general_arch.md                  |   32 +-
 docs/internals/how_to_contribute.md             |   25 -
 docs/internals/ide_setup.md                     |   19 +-
 docs/internals/index.md                         |    4 +
 docs/internals/job_scheduling.md                |   23 +-
 docs/internals/logging.md                       |  106 -
 docs/internals/monitoring_rest_api.md           |  589 -----
 docs/internals/stream_checkpointing.md          |   25 +-
 docs/internals/types_serialization.md           |  258 --
 docs/libs/cep/index.md                          |   25 -
 docs/libs/gelly_guide.md                        |   25 -
 docs/libs/index.md                              |   25 -
 docs/libs/ml/als.md                             |   25 -
 docs/libs/ml/contribution_guide.md              |   25 -
 docs/libs/ml/distance_metrics.md                |   25 -
 docs/libs/ml/index.md                           |   25 -
 docs/libs/ml/min_max_scaler.md                  |   25 -
 docs/libs/ml/multiple_linear_regression.md      |   25 -
 docs/libs/ml/optimization.md                    |   25 -
 docs/libs/ml/pipelines.md                       |   25 -
 docs/libs/ml/polynomial_features.md             |   25 -
 docs/libs/ml/quickstart.md                      |   25 -
 docs/libs/ml/standard_scaler.md                 |   25 -
 docs/libs/ml/svm.md                             |   25 -
 docs/libs/table.md                              |   29 -
 docs/monitoring/back_pressure.md                |   81 +
 docs/monitoring/best_practices.md               |  402 +++
 docs/monitoring/index.md                        |   25 +
 docs/monitoring/logging.md                      |   98 +
 docs/monitoring/metrics.md                      |  468 ++++
 docs/monitoring/rest_api.md                     |  586 +++++
 docs/page/css/flink.css                         |  174 +-
 docs/page/font-awesome/css/font-awesome.css     | 2199 ++++++++++++++++
 docs/page/font-awesome/css/font-awesome.min.css |    4 +
 docs/page/font-awesome/fonts/FontAwesome.otf    |  Bin 0 -> 124988 bytes
 .../font-awesome/fonts/fontawesome-webfont.eot  |  Bin 0 -> 76518 bytes
 .../font-awesome/fonts/fontawesome-webfont.svg  |  685 +++++
 .../font-awesome/fonts/fontawesome-webfont.ttf  |  Bin 0 -> 152796 bytes
 .../font-awesome/fonts/fontawesome-webfont.woff |  Bin 0 -> 90412 bytes
 .../fonts/fontawesome-webfont.woff2             |  Bin 0 -> 71896 bytes
 docs/quickstart/java_api_quickstart.md          |   22 +-
 docs/quickstart/run_example_quickstart.md       |    7 +-
 docs/quickstart/scala_api_quickstart.md         |   21 +-
 docs/quickstart/setup_quickstart.md             |   11 +-
 docs/redirects/back_pressure.md                 |   24 +
 docs/redirects/basic_api_concepts.md            |   24 +
 docs/redirects/best_practices.md                |   24 +
 docs/redirects/cassandra.md                     |   24 +
 docs/redirects/cep.md                           |   24 +
 docs/redirects/cli.md                           |   24 +
 docs/redirects/cluster_execution.md             |   24 +
 docs/redirects/concepts.md                      |   24 +
 docs/redirects/connectors.md                    |   24 +
 docs/redirects/datastream_api.md                |   24 +
 docs/redirects/elasticsearch.md                 |   24 +
 docs/redirects/elasticsearch2.md                |   24 +
 docs/redirects/event_time.md                    |   24 +
 docs/redirects/event_timestamp_extractors.md    |   24 +
 docs/redirects/event_timestamps_watermarks.md   |   24 +
 docs/redirects/fault_tolerance.md               |   24 +
 docs/redirects/filesystem_sink.md               |   24 +
 docs/redirects/gelly.md                         |   24 +
 docs/redirects/java8.md                         |   24 +
 docs/redirects/kafka.md                         |   24 +
 docs/redirects/kinesis.md                       |   24 +
 docs/redirects/local_execution.md               |   24 +
 docs/redirects/metrics.md                       |   24 +
 docs/redirects/ml.md                            |   24 +
 docs/redirects/programming_guide.md             |   24 +
 docs/redirects/rest_api.md                      |   24 +
 docs/redirects/savepoints.md                    |   24 +
 docs/redirects/scala_api_extensions.md          |   24 +
 docs/redirects/scala_shell.md                   |   24 +
 docs/redirects/state.md                         |   24 +
 docs/redirects/state_backends.md                |   24 +
 docs/redirects/storm_compat.md                  |   24 +
 docs/redirects/streaming_guide.md               |   24 +
 docs/redirects/table.md                         |   24 +
 docs/redirects/types_serialization.md           |   24 +
 docs/redirects/windows.md                       |   24 +
 docs/setup/aws.md                               |   12 +-
 docs/setup/building.md                          |    5 +-
 docs/setup/cli.md                               |  322 +++
 docs/setup/cluster_setup.md                     |    7 +-
 docs/setup/config.md                            |   12 +-
 docs/setup/deployment.md                        |   24 +
 docs/setup/fault_tolerance.md                   |  460 ++++
 docs/setup/fig/FlinkOnYarn.svg                  |  151 --
 docs/setup/fig/LICENSE.txt                      |   17 -
 docs/setup/fig/flink-on-emr.png                 |  Bin 103880 -> 0 bytes
 docs/setup/fig/jobmanager_ha_overview.png       |  Bin 57875 -> 0 bytes
 docs/setup/fig/slots_parallelism.svg            |  695 ------
 docs/setup/gce_setup.md                         |    6 +-
 docs/setup/index.md                             |    6 +-
 docs/setup/jobmanager_high_availability.md      |   12 +-
 docs/setup/local_setup.md                       |    7 +-
 docs/setup/savepoints.md                        |  109 +
 docs/setup/yarn_setup.md                        |   12 +-
 pom.xml                                         |    1 +
 350 files changed, 45243 insertions(+), 42334 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/LICENSE
----------------------------------------------------------------------
diff --git a/LICENSE b/LICENSE
index 07df05f..f7699da 100644
--- a/LICENSE
+++ b/LICENSE
@@ -225,7 +225,7 @@ The Apache Flink project bundles the following files under the MIT License:
  - dagre v0.7.4 (https://github.com/cpettitt/dagre) - Copyright (c) 2012-2014 Chris Pettitt
  - dagre-d3 v0.4.17 (https://github.com/cpettitt/dagre-d3) - Copyright (c) 2013 Chris Pettitt
  - EvEmitter v1.0.2 (https://github.com/metafizzy/ev-emitter) - Copyright (C) 2016 David DeSandro
- - Font Awesome (code) v4.5.0 (http://fontawesome.io) - Copyright (c) 2014 Dave Gandy
+ - Font Awesome (code) v4.5.0, v4.6.3 (http://fontawesome.io) - Copyright (c) 2014 Dave Gandy
  - graphlib v1.0.7 (https://github.com/cpettitt/graphlib) - Copyright (c) 2012-2014 Chris Pettitt
  - imagesloaded v4.1.0 (https://github.com/desandro/imagesloaded) - Copyright (C) 2016 David DeSandro
  - JQuery v2.2.0 (http://jquery.com/) - Copyright 2014 jQuery Foundation and other contributors
@@ -300,7 +300,8 @@ The Apache Flink project bundles the following fonts under the
 Open Font License (OFT) - http://scripts.sil.org/OFL
 
  - Font Awesome (http://fortawesome.github.io/Font-Awesome/) - Created by Dave Gandy
-   -> fonts in "flink-runtime-web/web-dashboard/assets/fonts"
+   -> fonts in "flink-runtime-web/web-dashboard/web/fonts"
+   -> fonts in "docs/page/font-awesome/fonts"
 
 -----------------------------------------------------------------------
  The ISC License

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/README.md
----------------------------------------------------------------------
diff --git a/docs/README.md b/docs/README.md
index 52dfad3..879c33b 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -109,43 +109,19 @@ These will be replaced by a info or warning label. You can change the text of th
 
 ### Documentation
 
-#### Top Navigation
+#### Navigation
 
-You can modify the top-level navigation in two places. You can either edit the `_includes/navbar.html` file or add tags to your page frontmatter (recommended).
+The navigation on the left side of the docs is automatically generated when building the docs. You can modify the markup in `_includes/sidenav.html`.
 
-    # Top-level navigation
-    top-nav-group: apis
-    top-nav-pos: 2
-    top-nav-title: <strong>Batch Guide</strong> (DataSet API)
+The structure of the navigation is determined by the front matter of all pages. The fields used to determine the structure are:
 
-This adds the page to the group `apis` (via `top-nav-group`) at position `2` (via `top-nav-pos`). Furthermore, it specifies a custom title for the navigation via `top-nav-title`. If this field is missing, the regular page title (via `title`) will be used. If no position is specified, the element will be added to the end of the group. If no group is specified, the page will not show up.
+- `nav-id` => ID of this page. Other pages can use this ID as their parent ID.
+- `nav-parent_id` => ID of the parent. This page will be listed under the page with id `nav-parent_id`.
 
-Currently, there are groups `quickstart`, `setup`, `deployment`, `apis`, `libs`, and `internals`.
+Level 0 is made up of all pages that have `nav-parent_id` set to `root`. There is no limit on how many levels you can nest.
 
-#### Sub Navigation
+The `title` of the page is used as the default link text. You can override this via `nav-title`. The relative position per navigational level is determined by `nav-pos`.
 
-A sub navigation is shown if the field `sub-nav-group` is specified. A sub navigation groups all pages with the same `sub-nav-group`. Check out the streaming or batch guide as an example.
+If you have a page with sub pages, its link target is used to expand the sub-level navigation. If you also want to link to the page itself, add the `nav-show_overview: true` field to its front matter. This will then add an `Overview` sub page to the expanded list.
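+
+For example, the front matter of a nested page might look like this (the values are illustrative only):
+
+    # Navigation
+    nav-id: dataset_api
+    nav-parent_id: batch
+    nav-pos: 1
+    nav-title: DataSet API
+    nav-show_overview: true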
 
-    # Sub-level navigation
-    sub-nav-group: batch
-    sub-nav-id: dataset_api
-    sub-nav-pos: 1
-    sub-nav-title: DataSet API
-
-The fields work similar to their `top-nav-*` counterparts.
-
-In addition, you can specify a hierarchy via `sub-nav-id` and `sub-nav-parent`:
-
-    # Sub-level navigation
-    sub-nav-group: batch
-    sub-nav-parent: dataset_api
-    sub-nav-pos: 1
-    sub-nav-title: Transformations
-
-This will show the `Transformations` page under the `DataSet API` page. The `sub-nav-parent` field has to have a matching `sub-nav-id`.
-
-#### Breadcrumbs
-
-Pages with sub navigations can use breadcrumbs like `Batch Guide > Libraries > Machine Learning > Optimization`.
-
-The breadcrumbs for the last page are generated from the front matter. For the a sub navigation root to appear (like `Batch Guide` in the example above), you have to specify the `sub-nav-group-title`. This field designates a group page as the root.
+The nesting is also used for the breadcrumbs like `Application Development > Libraries > Machine Learning > Optimization`.

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/_config.yml
----------------------------------------------------------------------
diff --git a/docs/_config.yml b/docs/_config.yml
index d9bb57e..700d289 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -29,6 +29,7 @@
 version: "1.2-SNAPSHOT"
 version_hadoop1: "1.2-hadoop1-SNAPSHOT"
 version_short: "1.2" # Used for the top navbar w/o snapshot suffix
+is_snapshot_version: true
 
 # This suffix is appended to the Scala-dependent Maven artifact names
 scala_version_suffix: "_2.10"
@@ -40,6 +41,16 @@ jira_url: "https://issues.apache.org/jira/browse/FLINK"
 github_url: "https://github.com/apache/flink"
 download_url: "http://flink.apache.org/downloads.html"
 
+# Flag whether this is the latest stable version or not. If not, a warning
+# will be printed pointing to the docs of the latest stable version.
+is_latest: true
+is_stable: false
+latest_stable_url: http://ci.apache.org/projects/flink/flink-docs-release-1.1
+
+previous_docs:
+  1.1: http://ci.apache.org/projects/flink/flink-docs-release-1.1
+  1.0: http://ci.apache.org/projects/flink/flink-docs-release-1.0
+
 #------------------------------------------------------------------------------
 # BUILD CONFIG
 #------------------------------------------------------------------------------
@@ -47,14 +58,16 @@ download_url: "http://flink.apache.org/downloads.html"
 # to change anything here.
 #------------------------------------------------------------------------------
 
+# Used in some documents to initialize arrays. Don't delete.
+array: []
+
 defaults:
   -
     scope:
       path: ""
     values:
       layout: plain
-      top-nav-pos: 99999 # Move to end
-      sub-nav-pos: 99999 # Move to end
+      nav-pos: 99999 # Move to end if no pos specified
 
 markdown: KramdownPygments
 highlighter: pygments

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/_includes/navbar.html
----------------------------------------------------------------------
diff --git a/docs/_includes/navbar.html b/docs/_includes/navbar.html
deleted file mode 100644
index 5821a46..0000000
--- a/docs/_includes/navbar.html
+++ /dev/null
@@ -1,117 +0,0 @@
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-{% capture quickstart %}{{site.baseurl}}/quickstart{% endcapture %}
-{% capture setup %}{{site.baseurl}}/setup{% endcapture %}
-{% capture apis %}{{site.baseurl}}/apis{% endcapture %}
-{% capture libs %}{{site.baseurl}}/libs{% endcapture %}
-{% capture internals %}{{site.baseurl}}/internals{% endcapture %}
-    <!-- Top navbar. -->
-    <nav class="navbar navbar-default navbar-fixed-top">
-      <div class="container">
-        <!-- The logo. -->
-        <div class="navbar-header">
-          <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
-            <span class="icon-bar"></span>
-            <span class="icon-bar"></span>
-            <span class="icon-bar"></span>
-          </button>
-          <div class="navbar-logo">
-            <a href="http://flink.apache.org"><img alt="Apache Flink" src="{{ site.baseurl }}/page/img/navbar-brand-logo.jpg"></a>
-          </div>
-        </div><!-- /.navbar-header -->
-
-        <!-- The navigation links. -->
-        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
-          <ul class="nav navbar-nav">
-            <li class="hidden-sm {% if page.url == '/' %}active{% endif %}"><a href="{{ site.baseurl}}/">Docs v{{ site.version_short }}</a></li>
-
-            <li class="{% if page.url == '/concepts/concepts.html' %}active{% endif %}"><a href="{{ site.baseurl}}/concepts/concepts.html">Concepts</a></li>
-
-            <!-- Setup -->
-            <li class="dropdown{% if page.url contains '/setup/' %} active{% endif %}">
-              <a href="{{ setup }}" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Setup <span class="caret"></span></a>
-              <ul class="dropdown-menu" role="menu">
-                {% assign setup_group = (site.pages | where: "top-nav-group" , "setup" | sort: "top-nav-pos") %}
-                {% for setup_group_page in setup_group %}
-                <li class="{% if page.url contains setup_group_page.url %}active{% endif %}"><a href="{{ site.baseurl }}{{ setup_group_page.url }}">{% if setup_group_page.top-nav-title %}{{ setup_group_page.top-nav-title }}{% else %}{{ setup_group_page.title }}{% endif %}</a></li>
-                {% endfor %}
-
-                <li class="divider"></li>
-                <li role="presentation" class="dropdown-header"><strong>Quickstart</strong></li>
-
-                <!-- Quickstart -->
-                {% assign quickstart_group = (site.pages | where: "top-nav-group" , "quickstart" | sort: "top-nav-pos") %}
-                {% for quickstart_page in quickstart_group %}
-                <li class="{% if page.url contains quickstart_page.url %}active{% endif %}"><a href="{{ site.baseurl }}{{ quickstart_page.url }}">{% if quickstart_page.top-nav-title %}{{ quickstart_page.top-nav-title }}{% else %}{{ quickstart_page.title }}{% endif %}</a></li>
-                {% endfor %}
-
-                <li class="divider"></li>
-                <li role="presentation" class="dropdown-header"><strong>Deployment</strong></li>
-                {% assign deployment_group = (site.pages | where: "top-nav-group" , "deployment" | sort: "top-nav-pos") %}
-                {% for deployment_group_page in deployment_group %}
-                <li class="{% if page.url contains deployment_group_page.url %}active{% endif %}"><a href="{{ site.baseurl }}{{ deployment_group_page.url }}">{% if deployment_group_page.top-nav-title %}{{ deployment_group_page.top-nav-title }}{% else %}{{ deployment_group_page.title }}{% endif %}</a></li>
-                {% endfor %}
-              </ul>
-            </li>
-
-            <!-- Programming Guides -->
-            <li class="dropdown{% unless page.url contains '/libs/' %}{% if page.url contains '/apis/' %} active{% endif %}{% endunless %}">
-              <a href="{{ apis }}" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Programming Guides <span class="caret"></span></a>
-              <ul class="dropdown-menu" role="menu">
-                {% assign apis_group = (site.pages | where: "top-nav-group" , "apis" | sort: "top-nav-pos") %}
-                {% for apis_group_page in apis_group %}
-                <li class="{% if page.url contains apis_group_page.url %}active{% endif %}"><a href="{{ site.baseurl }}{{ apis_group_page.url }}">{% if apis_group_page.top-nav-title %}{{ apis_group_page.top-nav-title }}{% else %}{{ apis_group_page.title }}{% endif %}</a></li>
-                {% endfor %}
-              </ul>
-            </li>
-
-            <!-- Libraries -->
-            <li class="dropdown{% if page.url contains '/libs/' %} active{% endif %}">
-              <a href="{{ libs }}" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Libraries <span class="caret"></span></a>
-                <ul class="dropdown-menu" role="menu">
-                  {% assign libs_group = (site.pages | where: "top-nav-group" , "libs" | sort: "top-nav-pos") %}
-                  {% for libs_page in libs_group %}
-                  <li class="{% if page.url contains libs_page.url %}active{% endif %}"><a href="{{ site.baseurl }}{{ libs_page.url }}">{% if libs_page.top-nav-title %}{{ libs_page.top-nav-title }}{% else %}{{ libs_page.title }}{% endif %}</a></li>
-                  {% endfor %}
-              </ul>
-            </li>
-
-            <!-- Internals -->
-            <li class="dropdown{% if page.url contains '/internals/' %} active{% endif %}">
-              <a href="{{ internals }}" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Internals <span class="caret"></span></a>
-              <ul class="dropdown-menu" role="menu">
-                <li role="presentation" class="dropdown-header"><strong>Contribute</strong></li>
-                <li><a href="http://flink.apache.org/how-to-contribute.html"><small><span class="glyphicon glyphicon-new-window"></span></small> How to Contribute</a></li>
-                <li><a href="http://flink.apache.org/contribute-code.html#coding-guidelines"><small><span class="glyphicon glyphicon-new-window"></span></small> Coding Guidelines</a></li>
-                {% assign internals_group = (site.pages | where: "top-nav-group" , "internals" | sort: "top-nav-pos") %}
-                {% for internals_page in internals_group %}
-                <li class="{% if page.url contains internals_page.url %}active{% endif %}"><a href="{{ site.baseurl }}{{ internals_page.url }}">{% if internals_page.top-nav-title %}{{ internals_page.top-nav-title }}{% else %}{{ internals_page.title }}{% endif %}</a></li>
-                {% endfor %}
-              </ul>
-            </li>
-          </ul>
-          <form class="navbar-form navbar-right hidden-sm hidden-md" role="search" action="{{site.baseurl}}/search-results.html">
-            <div class="form-group">
-              <input type="text" class="form-control" size="16px" name="q" placeholder="Search all pages">
-            </div>
-            <button type="submit" class="btn btn-default">Search</button>
-          </form>
-        </div><!-- /.navbar-collapse -->
-      </div><!-- /.container -->
-    </nav>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/_includes/sidenav.html
----------------------------------------------------------------------
diff --git a/docs/_includes/sidenav.html b/docs/_includes/sidenav.html
new file mode 100644
index 0000000..b56bcf2
--- /dev/null
+++ b/docs/_includes/sidenav.html
@@ -0,0 +1,149 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+{% comment %}
+==============================================================================
+Extract the active nav IDs.
+==============================================================================
+{% endcomment %}
+
+{% assign active_nav_ids = site.array %}
+{% assign parent_id = page.nav-parent_id %}
+
+{% for i in (1..10) %}
+  {% if parent_id %}
+    {% assign active_nav_ids = active_nav_ids | push: parent_id %}
+    {% assign current = (site.pages | where: "nav-id" , parent_id | sort: "nav-pos") %}
+    {% if current.size > 0 %}
+      {% assign parent_id = current[0].nav-parent_id %}
+    {% else %}
+      {% break %}
+    {% endif %}
+  {% else %}
+    {% break %}
+  {% endif %}
+{% endfor %}
+
+{% comment %}
+==============================================================================
+Build the nested list from nav-id and nav-parent_id relations.
+==============================================================================
+This builds a nested list from all pages. The fields used to determine the
+structure are:
+
+- 'nav-id' => ID of this page. Other pages can use this ID as their
+  parent ID.
+- 'nav-parent_id' => ID of the parent. This page will be listed under
+  the page with id 'nav-parent_id'.
+
+Level 0 is made up of all pages that have nav-parent_id set to 'root'.
+
+The 'title' of the page is used as the default link text. You can
+override this via 'nav-title'. The relative position per navigational
+level is determined by 'nav-pos'.
+{% endcomment %}
+
+{% assign elementsPosStack = site.array %}
+{% assign posStack = site.array %}
+
+{% assign elements = site.array %}
+{% assign children = (site.pages | where: "nav-parent_id" , "root" | sort: "nav-pos") %}
+{% if children.size > 0 %}
+  {% assign elements = elements | push: children %}
+{% endif %}
+
+{% assign elementsPos = 0 %}
+{% assign pos = 0 %}
+
+<div class="sidenav-logo">
+  <p><a href="{{ site.baseurl }}"><img class="bottom" alt="Apache Flink" src="{{ site.baseurl }}/page/img/navbar-brand-logo.jpg"></a> v{{ site.version }}</p>
+</div>
+<ul id="sidenav">
+{% for i in (1..10000) %}
+  {% if pos >= elements[elementsPos].size %}
+    {% if elementsPos == 0 %}
+      {% break %}
+    {% else %}
+      {% assign elementsPos = elementsPosStack | last %}
+      {% assign pos = posStack | last %}
+</li></ul></div>
+      {% assign elementsPosStack = elementsPosStack | pop %}
+      {% assign posStack = posStack | pop %}
+    {% endif %}
+  {% else %}
+    {% assign this = elements[elementsPos][pos] %}
+
+    {% if this.url == page.url %}
+      {% assign active = true %}
+    {% elsif this.nav-id and active_nav_ids contains this.nav-id %}
+      {% assign active = true %}
+    {% else %}
+      {% assign active = false %}
+    {% endif %}
+
+    {% capture title %}{% if this.nav-title %}{{ this.nav-title }}{% else %}{{ this.title }}{% endif %}{% endcapture %}
+    {% capture target %}"{{ site.baseurl }}{{ this.url }}"{% if active %} class="active"{% endif %}{% endcapture %}
+    {% capture overview_target %}"{{ site.baseurl }}{{ this.url }}"{% if this.url == page.url %} class="active"{% endif %}{% endcapture %}
+
+    {% assign pos = pos | plus: 1 %}
+    {% if this.nav-id %}
+      {% assign children = (site.pages | where: "nav-parent_id" , this.nav-id | sort: "nav-pos") %}
+      {% if children.size > 0 %}
+        {% capture collapse_target %}"#collapse-{{ i }}" data-toggle="collapse"{% if active %} class="active"{% endif %}{% endcapture %}
+        {% capture expand %}{% unless active %} <i class="fa fa-caret-down pull-right" aria-hidden="true" style="padding-top: 4px"></i>{% endunless %}{% endcapture %}
+<li><a href={{ collapse_target }}>{{ title }}{{ expand }}</a><div class="collapse{% if active %} in{% endif %}" id="collapse-{{ i }}"><ul>
+  {% if this.nav-show_overview %}<li><a href={{ overview_target }}>Overview</a></li>{% endif %}
+        {% assign elements = elements | push: children %}
+        {% assign elementsPosStack = elementsPosStack | push: elementsPos %}
+        {% assign posStack = posStack | push: pos %}
+
+        {% assign elementsPos = elements.size | minus: 1 %}
+        {% assign pos = 0 %}
+      {% else %}
+<li><a href={{ target }}>{{ title }}</a></li>
+      {% endif %}
+    {% else %}
+<li><a href={{ target }}>{{ title }}</a></li>
+    {% endif %}
+  {% endif %}
+{% endfor %}
+  <li class="divider"></li>
+  <li><a href="http://flink.apache.org"><i class="fa fa-external-link" aria-hidden="true"></i> Project Page</a></li>
+</ul>
+
+<div class="sidenav-search-box">
+  <form class="navbar-form" role="search" action="{{site.baseurl}}/search-results.html">
+    <div class="form-group">
+      <input type="text" class="form-control" size="16px" name="q" placeholder="Search Docs">
+    </div>
+    <button type="submit" class="btn btn-default">Go</button>
+  </form>
+</div>
+
+<div class="sidenav-versions">
+  <div class="dropdown">
+    <button class="btn btn-default dropdown-toggle" type="button" data-toggle="dropdown">Pick Docs Version
+    <span class="caret"></span></button>
+    <ul class="dropdown-menu">
+      {% for d in site.previous_docs %}
+      <li><a href="{{ d[1] }}">v{{ d[0] }}</a></li>
+      {% endfor %}
+    </ul>
+  </div>
+</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/_layouts/base.html
----------------------------------------------------------------------
diff --git a/docs/_layouts/base.html b/docs/_layouts/base.html
index 690d2af..21065ec 100644
--- a/docs/_layouts/base.html
+++ b/docs/_layouts/base.html
@@ -32,6 +32,7 @@ under the License.
     <link rel="stylesheet" href="{{ site.baseurl }}/page/css/flink.css">
     <link rel="stylesheet" href="{{ site.baseurl }}/page/css/syntax.css">
     <link rel="stylesheet" href="{{ site.baseurl }}/page/css/codetabs.css">
+    <link rel="stylesheet" href="{{ site.baseurl }}/page/font-awesome/css/font-awesome.min.css">
     {% if page.mathjax %}
     <script type="text/x-mathjax-config">
         MathJax.Hub.Config({
@@ -53,20 +54,36 @@ under the License.
     <![endif]-->
   </head>
   <body>
-    {% comment %} Includes are found in the _includes directory. {% endcomment %}
-    {% include navbar.html %}
-
-    {% if page.mathjax %}
-    {% include latex_commands.html %}
-    {% endif %}
-
     <!-- Main content. -->
     <div class="container">
+      {% if site.is_stable %}
+      {% unless site.is_latest %}
+      <div class="row">
+        <div class="col-sm-12">
+          <div class="alert alert-info">
+              <strong>Note</strong>: This documentation is for Flink version <strong>{{ site.version }}</strong>. There is a more recent stable version available. Please consider updating and <a href="{{ site.latest_stable_url }}">checking out the documentation for that version</a>.
+          </div>
+        </div>
+      </div>
+      {% endunless %}
+      {% endif %}
+
       {% comment %}
       This is the base for all content. The content from the layouts found in
       the _layouts directory goes here.
       {% endcomment %}
-      {{ content }}
+      <div class="row">
+        <div class="col-lg-3">
+          {% include sidenav.html %}
+        </div>
+        <div class="col-lg-9 content">
+          {% if page.mathjax %}
+          {% include latex_commands.html %}
+          {% endif %}
+
+          {{ content }}
+        </div>
+      </div>
     </div><!-- /.container -->
 
     <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/_layouts/plain.html
----------------------------------------------------------------------
diff --git a/docs/_layouts/plain.html b/docs/_layouts/plain.html
index 7a076a2..63a6681 100644
--- a/docs/_layouts/plain.html
+++ b/docs/_layouts/plain.html
@@ -19,103 +19,39 @@ KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
 -->
-<div class="row">
-{% if page.sub-nav-group %}
-{% comment %}
-The plain layout with a sub navigation.
 
-- This is activated via the 'sub-nav-group' field in the preamble.
-- All pages of this sub nav group will be displayed in the sub navigation:
-  * Each element without a 'sub-nav-parent' field will be displayed on the 1st level, where the position is defined via 'sub-nav-pos'.
-  * If the page should be displayed as a child element, it needs to specify a 'sub-nav-parent' field, which matches the 'sub-nav-id' of its parent. The parent only needs to specify this if it expects child nodes.
-{% endcomment %}
-  <!-- Sub Navigation -->
-  <div class="col-sm-3">
-    <ul id="sub-nav">
-      {% comment %} Get all pages belonging to this group sorted by their position {% endcomment %}
-      {% assign group = (site.pages | where: "sub-nav-group" , page.sub-nav-group | where: "sub-nav-parent" , nil | sort: "sub-nav-pos") %}
-      {% for group_page in group %}
-        {% if group_page.sub-nav-id  %}
-        {% assign sub_group = (site.pages | where: "sub-nav-group" , page.sub-nav-group | where: "sub-nav-parent" , group_page.sub-nav-id | sort: "sub-nav-pos") %}
-        {% else %}
-        {% assign sub_group = nil %}
-        {% endif %}
-        <li><a href="{{ site.baseurl }}{{ group_page.url }}" class="{% if page.url contains group_page.url %}active{% endif %}">{% if group_page.sub-nav-title %}{{ group_page.sub-nav-title }}{% else %}{{ group_page.title }}{% endif %}</a>
-          {% if sub_group and sub_group.size() > 0 %}
-          <ul>
-            {% for sub_group_page in sub_group %}
-              <li><a href="{{ site.baseurl }}{{ sub_group_page.url }}" class="{% if page.url contains sub_group_page.url or (sub_group_page.sub-nav-id and page.sub-nav-parent and sub_group_page.sub-nav-id == page.sub-nav-parent) %}active{% endif %}">{% if sub_group_page.sub-nav-title %}{{ sub_group_page.sub-nav-title }}{% else %}{{ sub_group_page.title }}{% endif %}</a></li>
-            {% endfor %}
-          </ul>
-          {% endif %}
-        </li>
-      {% endfor %}
-    </ul>
-  </div>
-  <!-- Main -->
-  <div class="col-sm-9">
-    <!-- Top anchor -->
-    <a href="#top"></a>
+{% assign active_pages = site.array %}
+{% assign active = page %}
+
+{% for i in (1..10) %}
+  {% assign active_pages = active_pages | push: active %}
+  {% if active.nav-parent_id %}
+    {% assign next = (site.pages | where: "nav-id" , active.nav-parent_id ) %}
+    {% if next.size > 0 %}
+      {% assign active = next[0] %}
+    {% else %}
+      {% break %}
+    {% endif %}
+  {% else %}
+    {% break %}
+  {% endif %}
+{% endfor %}
+
+{% assign active_pages = active_pages | reverse %}
+
+<ol class="breadcrumb">
+{% for p in active_pages %}
+  {% capture title %}{% if p.nav-title %}{{ p.nav-title }}{% else %}{{ p.title }}{% endif %}{% endcapture %}
+  {% if forloop.last == true %}
+    <li class="active">{{ title }}</li>
+  {% elsif p.nav-show_overview %}
+    <li><a href="{{ site.baseurl }}{{ p.url }}">{{ title }}</a></li>
+  {% else %}
+    <li>{{ title }}</li>
+  {% endif %}
+{% endfor %}
+</ol>
+
+<h1>{{ page.title }}{% if page.is_beta %} <span class="beta">Beta</span>{% endif %}</h1>
 
-    <!-- Artifact name change warning. Remove for the 1.0 release. -->
-    <div class="panel panel-default">
-      <div class="panel-body"><strong>Important</strong>: Maven artifacts which depend on Scala are now suffixed with the Scala major version, e.g. "2.10" or "2.11". Please consult the <a href="https://cwiki.apache.org/confluence/display/FLINK/Maven+artifact+names+suffixed+with+Scala+version">migration guide on the project Wiki</a>.</div>
-    </div>
-
-    <!-- Breadcrumbs above the main heading -->
-    <ol class="breadcrumb">
-      {% for group_page in group %}
-      {% if group_page.sub-nav-group-title %}
-      <li><a href="{{ site.baseurl }}{{ group_page.url }}">{{ group_page.sub-nav-group-title }}</a></li>
-      {% endif %}
-      {% endfor %}
-
-      {% if page.sub-nav-parent %}
-      {% assign parent = (site.pages | where: "sub-nav-group" , page.sub-nav-group | where: "sub-nav-id" , page.sub-nav-parent | first) %}
-      {% if parent %}
-
-      {% if parent.sub-nav-parent %}
-      {% assign grandparent = (site.pages | where: "sub-nav-group" , page.sub-nav-group | where: "sub-nav-id" , parent.sub-nav-parent | first) %}
-
-      {% if grandparent %}
-      <li><a href="{{ site.baseurl }}{{ grandparent.url }}">{% if grandparent.sub-nav-title %}{{ grandparent.sub-nav-title }}{% else %}{{ grandparent.title }}{% endif %}</a></li>
-      {% endif %}
-
-      {% endif %}
-
-      <li><a href="{{ site.baseurl }}{{ parent.url }}">{% if parent.sub-nav-title %}{{ parent.sub-nav-title }}{% else %}{{ parent.title }}{% endif %}</a></li>
-      {% endif %}
-      {% endif %}
-      <li class="active">{% if page.sub-nav-title %}{{ page.sub-nav-title }}{% else %}{{ page.title }}{% endif %}</li>
-    </ol>
-
-    <div class="text">
-      <!-- Main heading -->
-      <h1>{{ page.title }}{% if page.is_beta %} <span class="beta">(Beta)</span>{% endif %}</h1>
-
-      <!-- Content -->
-      {{ content }}
-    </div>
-  </div>
-{% else %}
-{% comment %}
-The plain layout without a sub navigation (only text).
-{% endcomment %}  
-  <div class="col-md-8 col-md-offset-2 text">
-    <!-- Artifact name change warning. Remove for the 1.0 release. -->
-    <div class="panel panel-default">
-      <div class="panel-body"><strong>Important</strong>: Maven artifacts which depend on Scala are now suffixed with the Scala major version, e.g. "2.10" or "2.11". Please consult the <a href="https://cwiki.apache.org/confluence/display/FLINK/Maven+artifact+names+suffixed+with+Scala+version">migration guide on the project Wiki</a>.</div>
-    </div>
-
-    <h1>{{ page.title }}{% if page.is_beta %} <span class="beta">Beta</span>{% endif %}</h1>
 {{ content }}
-  </div>
-{% endif %}
-  {% comment %}
-  Removed until Robert complains... ;)
-  <div class="col-sm-8 col-sm-offset-2">
-    <!-- Disqus thread and some vertical offset -->
-    <div style="margin-top: 75px; margin-bottom: 50px" id="disqus_thread"></div>
-  </div>
-  {% endcomment %}
-</div>

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/_layouts/redirect.html
----------------------------------------------------------------------
diff --git a/docs/_layouts/redirect.html b/docs/_layouts/redirect.html
new file mode 100644
index 0000000..ff3d70c
--- /dev/null
+++ b/docs/_layouts/redirect.html
@@ -0,0 +1,27 @@
+---
+layout: base
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<meta http-equiv="refresh" content="1; url={{ site.baseurl }}{{ page.redirect }}" />
+
+<h1>Page '{{ page.title }}' Has Moved</h1>
+
+The page <strong>{{ page.title }}</strong> has been moved. Redirecting to <a href="{{ site.baseurl }}{{ page.redirect }}">{{ site.baseurl }}{{ page.redirect }}</a> in 1 second.

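A page using this layout needs only front matter; a minimal sketch (the `title` and `redirect` values below are illustrative, not taken from an actual redirect page):

~~~
---
title: "Savepoints"
layout: redirect
redirect: /setup/savepoints.html
---
~~~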
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/batch/connectors.md
----------------------------------------------------------------------
diff --git a/docs/apis/batch/connectors.md b/docs/apis/batch/connectors.md
deleted file mode 100644
index 21ed260..0000000
--- a/docs/apis/batch/connectors.md
+++ /dev/null
@@ -1,242 +0,0 @@
----
-title:  "Connectors"
-
-# Sub-level navigation
-sub-nav-group: batch
-sub-nav-pos: 4
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-* TOC
-{:toc}
-
-## Reading from file systems
-
-Flink has built-in support for the following file systems:
-
-| Filesystem                            | Scheme       | Notes  |
-| ------------------------------------- |--------------| ------ |
-| Hadoop Distributed File System (HDFS) &nbsp; | `hdfs://`    | All HDFS versions are supported |
-| Amazon S3                             | `s3://`      | Support through Hadoop file system implementation (see below) | 
-| MapR file system                      | `maprfs://`  | The user has to manually place the required jar files in the `lib/` dir |
-| Alluxio                               | `alluxio://` &nbsp; | Support through Hadoop file system implementation (see below) |
-
-
-
-### Using Hadoop file system implementations
-
-Apache Flink allows users to use any file system implementing the `org.apache.hadoop.fs.FileSystem`
-interface. There are Hadoop `FileSystem` implementations for
-
-- [S3](https://aws.amazon.com/s3/) (tested)
-- [Google Cloud Storage Connector for Hadoop](https://cloud.google.com/hadoop/google-cloud-storage-connector) (tested)
-- [Alluxio](http://alluxio.org/) (tested)
-- [XtreemFS](http://www.xtreemfs.org/) (tested)
-- FTP via [Hftp](http://hadoop.apache.org/docs/r1.2.1/hftp.html) (not tested)
-- and many more.
-
-In order to use a Hadoop file system with Flink, make sure that
-
-- the `flink-conf.yaml` has the `fs.hdfs.hadoopconf` property set to the Hadoop configuration directory.
-- the Hadoop configuration (in that directory) has an entry for the required file system. Examples for S3 and Alluxio are shown below.
-- the required classes for using the file system are available in the `lib/` folder of the Flink installation (on all machines running Flink). If putting the files into the directory is not possible, Flink also respects the `HADOOP_CLASSPATH` environment variable to add Hadoop jar files to the classpath.
-
-#### Amazon S3
-
-For Amazon S3 support add the following entries into the `core-site.xml` file:
-
-~~~xml
-<!-- configure the file system implementation -->
-<property>
-  <name>fs.s3.impl</name>
-  <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
-</property>
-
-<!-- set your AWS ID -->
-<property>
-  <name>fs.s3.awsAccessKeyId</name>
-  <value>putKeyHere</value>
-</property>
-
-<!-- set your AWS access key -->
-<property>
-  <name>fs.s3.awsSecretAccessKey</name>
-  <value>putSecretHere</value>
-</property>
-~~~
-
-#### Alluxio
-
-For Alluxio support add the following entry into the `core-site.xml` file:
-
-~~~xml
-<property>
-  <name>fs.alluxio.impl</name>
-  <value>alluxio.hadoop.FileSystem</value>
-</property>
-~~~
-
-
-## Connecting to other systems using Input/OutputFormat wrappers for Hadoop
-
-Apache Flink allows users to access many different systems as data sources or sinks.
-The system is designed for very easy extensibility. Similar to Apache Hadoop, Flink has the concept
-of so-called `InputFormat`s and `OutputFormat`s.
-
-One implementation of these `InputFormat`s is the `HadoopInputFormat`. This is a wrapper that allows
-users to use all existing Hadoop input formats with Flink.
-
-This section shows some examples for connecting Flink to other systems.
-[Read more about Hadoop compatibility in Flink]({{ site.baseurl }}/apis/batch/hadoop_compatibility.html).
-
-## Avro support in Flink
-
-Flink has extensive built-in support for [Apache Avro](http://avro.apache.org/). This allows you to easily read from Avro files with Flink.
-Also, the serialization framework of Flink is able to handle classes generated from Avro schemas.
-
-In order to read data from an Avro file, you have to specify an `AvroInputFormat`.
-
-**Example**:
-
-~~~java
-AvroInputFormat<User> users = new AvroInputFormat<User>(in, User.class);
-DataSet<User> usersDS = env.createInput(users);
-~~~
-
-Note that `User` is a POJO generated by Avro. Flink also allows you to perform string-based key selection on these POJOs. For example:
-
-~~~java
-usersDS.groupBy("name")
-~~~
-
-
-Note that using the `GenericData.Record` type is possible with Flink, but not recommended. Since the record contains the full schema, it's very data-intensive and thus probably slow to use.
-
-Flink's POJO field selection also works with POJOs generated from Avro. However, this is only possible if the field types are written correctly to the generated class. If a field is of type `Object`, you cannot use the field as a join or grouping key.
-Specifying a field in Avro like this `{"name": "type_double_test", "type": "double"},` works fine; however, specifying it as a UNION type with only one field (`{"name": "type_double_test", "type": ["double"]},`) will generate a field of type `Object`. Note that specifying nullable types (`{"name": "type_double_test", "type": ["null", "double"]},`) is possible!
-
-
-
-### Access Microsoft Azure Table Storage
-
-_Note: This example works starting from Flink 0.6-incubating_
-
-This example is using the `HadoopInputFormat` wrapper to use an existing Hadoop input format implementation for accessing [Azure's Table Storage](https://azure.microsoft.com/en-us/documentation/articles/storage-introduction/).
-
-1. Download and compile the `azure-tables-hadoop` project. The input format developed by the project is not yet available in Maven Central; therefore, we have to build the project ourselves.
-Execute the following commands:
-
-   ~~~bash
-   git clone https://github.com/mooso/azure-tables-hadoop.git
-   cd azure-tables-hadoop
-   mvn clean install
-   ~~~
-
-2. Set up a new Flink project using the quickstarts:
-
-   ~~~bash
-   curl https://flink.apache.org/q/quickstart.sh | bash
-   ~~~
-
-3. Add the following dependencies (in the `<dependencies>` section) to your `pom.xml` file:
-
-   ~~~xml
-   <dependency>
-       <groupId>org.apache.flink</groupId>
-       <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
-       <version>{{site.version}}</version>
-   </dependency>
-   <dependency>
-     <groupId>com.microsoft.hadoop</groupId>
-     <artifactId>microsoft-hadoop-azure</artifactId>
-     <version>0.0.4</version>
-   </dependency>
-   ~~~
-
-   `flink-hadoop-compatibility` is a Flink package that provides the Hadoop input format wrappers.
-   `microsoft-hadoop-azure` adds the project we built above to our project.
-
-The project is now ready for coding. We recommend importing the project into an IDE, such as Eclipse or IntelliJ (import it as a Maven project!).
-Browse to the code of the `Job.java` file. It's an empty skeleton for a Flink job.
-
-Paste the following code into it:
-
-~~~java
-import java.util.Map;
-import org.apache.flink.api.common.functions.MapFunction;
-import org.apache.flink.api.java.DataSet;
-import org.apache.flink.api.java.ExecutionEnvironment;
-import org.apache.flink.api.java.tuple.Tuple2;
-import org.apache.flink.hadoopcompatibility.mapreduce.HadoopInputFormat;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapreduce.Job;
-import com.microsoft.hadoop.azure.AzureTableConfiguration;
-import com.microsoft.hadoop.azure.AzureTableInputFormat;
-import com.microsoft.hadoop.azure.WritableEntity;
-import com.microsoft.windowsazure.storage.table.EntityProperty;
-
-public class AzureTableExample {
-
-  public static void main(String[] args) throws Exception {
-    // set up the execution environment
-    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-    
-    // create an AzureTableInputFormat, using a Hadoop input format wrapper
-    HadoopInputFormat<Text, WritableEntity> hdIf = new HadoopInputFormat<Text, WritableEntity>(new AzureTableInputFormat(), Text.class, WritableEntity.class, new Job());
-
-    // set the Account URI, something like: https://apacheflink.table.core.windows.net
-    hdIf.getConfiguration().set(AzureTableConfiguration.Keys.ACCOUNT_URI.getKey(), "TODO"); 
-    // set the secret storage key here
-    hdIf.getConfiguration().set(AzureTableConfiguration.Keys.STORAGE_KEY.getKey(), "TODO");
-    // set the table name here
-    hdIf.getConfiguration().set(AzureTableConfiguration.Keys.TABLE_NAME.getKey(), "TODO");
-    
-    DataSet<Tuple2<Text, WritableEntity>> input = env.createInput(hdIf);
-    // a small example of how to use the data in a mapper.
-    DataSet<String> fin = input.map(new MapFunction<Tuple2<Text,WritableEntity>, String>() {
-      @Override
-      public String map(Tuple2<Text, WritableEntity> arg0) throws Exception {
-        System.err.println("--------------------------------\nKey = "+arg0.f0);
-        WritableEntity we = arg0.f1;
-
-        for(Map.Entry<String, EntityProperty> prop : we.getProperties().entrySet()) {
-          System.err.println("key="+prop.getKey() + " ; value (asString)="+prop.getValue().getValueAsString());
-        }
-
-        return arg0.f0.toString();
-      }
-    });
-
-    // emit result (this works only locally)
-    fin.print();
-
-    // execute program
-    env.execute("Azure Example");
-  }
-}
-~~~
-
-The example shows how to access an Azure table and turn its data into Flink's `DataSet` (more specifically, the type of the set is `DataSet<Tuple2<Text, WritableEntity>>`). You can then apply all known transformations to the `DataSet`.
-
-## Access MongoDB
-
-This [GitHub repository documents how to use MongoDB with Apache Flink (starting from 0.7-incubating)](https://github.com/okkam-it/flink-mongodb-test).
-
-


[70/89] [abbrv] flink git commit: [FLINK-3677] FileInputFormat: Allow to specify include/exclude file name patterns

Posted by se...@apache.org.
[FLINK-3677] FileInputFormat: Allow to specify include/exclude file name patterns

This closes #2109
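
For context, a minimal sketch of how the filter API introduced by this change is meant to be used (it relies only on the `GlobFilePathFilter` and `FileInputFormat.setFilesFilter` shown in the diff below; the paths and patterns are made up for illustration):

~~~java
import java.util.Collections;

import org.apache.flink.api.common.io.GlobFilePathFilter;
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;

public class GlobFilterExample {

    public static void main(String[] args) {
        // read all .txt files under /data, but skip anything inside a _tmp directory
        TextInputFormat format = new TextInputFormat(new Path("file:///data"));
        format.setFilesFilter(new GlobFilePathFilter(
            Collections.singletonList("**/*.txt"),      // include patterns
            Collections.singletonList("**/_tmp/**")));  // exclude patterns
    }
}
~~~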


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/48109104
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/48109104
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/48109104

Branch: refs/heads/flip-6
Commit: 4810910431e01bf143ae77a6e93a86f2fafbccd0
Parents: 259a3a5
Author: Ivan Mushketyk <iv...@gmail.com>
Authored: Tue Jun 14 22:44:19 2016 +0100
Committer: Maximilian Michels <mx...@apache.org>
Committed: Thu Aug 25 16:08:18 2016 +0200

----------------------------------------------------------------------
 flink-core/pom.xml                              |   7 +
 .../flink/api/common/io/FileInputFormat.java    |  20 ++-
 .../flink/api/common/io/FilePathFilter.java     |  69 ++++++++
 .../flink/api/common/io/GlobFilePathFilter.java | 111 ++++++++++++
 .../java/org/apache/flink/core/fs/Path.java     |  10 +-
 .../flink/core/fs/local/LocalFileStatus.java    |   8 +
 .../flink/api/common/io/DefaultFilterTest.java  |  70 ++++++++
 .../api/common/io/FileInputFormatTest.java      | 174 ++++++++++++-------
 .../api/common/io/GlobFilePathFilterTest.java   | 141 +++++++++++++++
 .../ContinuousFileMonitoringFunctionITCase.java |   4 +-
 .../hdfstests/ContinuousFileMonitoringTest.java |  13 +-
 .../environment/StreamExecutionEnvironment.java |  72 ++++++--
 .../ContinuousFileMonitoringFunction.java       |   8 +-
 .../api/functions/source/FilePathFilter.java    |  66 -------
 .../api/scala/StreamExecutionEnvironment.scala  |  43 ++++-
 ...ontinuousFileProcessingCheckpointITCase.java |   5 +-
 16 files changed, 650 insertions(+), 171 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/pom.xml
----------------------------------------------------------------------
diff --git a/flink-core/pom.xml b/flink-core/pom.xml
index 9e290a0..dcb2599 100644
--- a/flink-core/pom.xml
+++ b/flink-core/pom.xml
@@ -103,6 +103,13 @@ under the License.
 			<scope>test</scope>
 		</dependency>
 
+		<dependency>
+			<groupId>com.google.guava</groupId>
+			<artifactId>guava</artifactId>
+			<version>${guava.version}</version>
+			<scope>test</scope>
+		</dependency>
+
 	</dependencies>
 
 	<build>

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java b/flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java
index 72d6061..d0f5166 100644
--- a/flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java
+++ b/flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java
@@ -33,6 +33,7 @@ import org.apache.flink.core.fs.FileStatus;
 import org.apache.flink.core.fs.FileSystem;
 import org.apache.flink.core.fs.Path;
 
+import org.apache.flink.util.Preconditions;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -70,7 +71,7 @@ public abstract class FileInputFormat<OT> extends RichInputFormat<OT, FileInputS
 	 * The fraction that the last split may be larger than the others.
 	 */
 	private static final float MAX_SPLIT_SIZE_DISCREPANCY = 1.1f;
-	
+
 	/**
 	 * The timeout (in milliseconds) to wait for a filesystem stream to respond.
 	 */
@@ -218,7 +219,12 @@ public abstract class FileInputFormat<OT> extends RichInputFormat<OT, FileInputS
 	 * structure is enabled.
 	 */
 	protected boolean enumerateNestedFiles = false;
-	
+
+	/**
+	 * Files filter for determining what files/directories should be included.
+	 */
+	private FilePathFilter filesFilter = new GlobFilePathFilter();
+
 	// --------------------------------------------------------------------------------------------
 	//  Constructors
 	// --------------------------------------------------------------------------------------------	
@@ -332,6 +338,10 @@ public abstract class FileInputFormat<OT> extends RichInputFormat<OT, FileInputS
 		return splitLength;
 	}
 
+	public void setFilesFilter(FilePathFilter filesFilter) {
+		this.filesFilter = Preconditions.checkNotNull(filesFilter, "Files filter should not be null");
+	}
+
 	// --------------------------------------------------------------------------------------------
 	//  Pre-flight: Configuration, Splits, Sampling
 	// --------------------------------------------------------------------------------------------
@@ -625,7 +635,9 @@ public abstract class FileInputFormat<OT> extends RichInputFormat<OT, FileInputS
 	 */
 	protected boolean acceptFile(FileStatus fileStatus) {
 		final String name = fileStatus.getPath().getName();
-		return !name.startsWith("_") && !name.startsWith(".");
+		return !name.startsWith("_")
+			&& !name.startsWith(".")
+			&& !filesFilter.filterPath(fileStatus.getPath());
 	}
 
 	/**
@@ -735,7 +747,7 @@ public abstract class FileInputFormat<OT> extends RichInputFormat<OT, FileInputS
 			"File Input (unknown file)" :
 			"File Input (" + this.filePath.toString() + ')';
 	}
-	
+
 	// ============================================================================================
 	
 	/**

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/src/main/java/org/apache/flink/api/common/io/FilePathFilter.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/api/common/io/FilePathFilter.java b/flink-core/src/main/java/org/apache/flink/api/common/io/FilePathFilter.java
new file mode 100644
index 0000000..4ab896c
--- /dev/null
+++ b/flink-core/src/main/java/org/apache/flink/api/common/io/FilePathFilter.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.api.common.io;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.core.fs.Path;
+
+import java.io.Serializable;
+
+/**
+ * The {@link #filterPath(Path)} method is responsible for deciding if a path is eligible for further
+ * processing or not. This can serve to exclude temporary or partial files that
+ * are still being written.
+ */
+@PublicEvolving
+public abstract class FilePathFilter implements Serializable {
+
+	// Name of an unfinished Hadoop file
+	public static final String HADOOP_COPYING = "_COPYING_";
+
+	public static FilePathFilter createDefaultFilter() {
+		return new DefaultFilter();
+	}
+
+	/**
+	 * Returns {@code true} if the {@code filePath} given is to be
+	 * ignored when processing a directory, e.g.
+	 * <pre>
+	 * {@code
+	 *
+	 * public boolean filterPath(Path filePath) {
+	 *     return filePath.getName().startsWith(".") || filePath.getName().contains("_COPYING_");
+	 * }
+	 * }</pre>
+	 */
+	public abstract boolean filterPath(Path filePath);
+
+	/**
+	 * The default file path filter, used if no other filter is provided.
+	 * This filter leaves out files whose names start with "." or "_",
+	 * or contain "_COPYING_".
+	 */
+	public static class DefaultFilter extends FilePathFilter {
+
+		DefaultFilter() {}
+
+		@Override
+		public boolean filterPath(Path filePath) {
+			return filePath == null ||
+				filePath.getName().startsWith(".") ||
+				filePath.getName().startsWith("_") ||
+				filePath.getName().contains(HADOOP_COPYING);
+		}
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/src/main/java/org/apache/flink/api/common/io/GlobFilePathFilter.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/api/common/io/GlobFilePathFilter.java b/flink-core/src/main/java/org/apache/flink/api/common/io/GlobFilePathFilter.java
new file mode 100644
index 0000000..4aaf481
--- /dev/null
+++ b/flink-core/src/main/java/org/apache/flink/api/common/io/GlobFilePathFilter.java
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.common.io;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.core.fs.Path;
+
+import java.nio.file.FileSystem;
+import java.nio.file.FileSystems;
+import java.nio.file.PathMatcher;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Class for determining if a particular file should be included or excluded
+ * based on a set of include and exclude glob filters.
+ *
+ * Glob filters support the following expressions:
+ * <ul>
+ *     <li>* - matches any number of any characters including none</li>
+ *     <li>** - matches any file in all subdirectories</li>
+ *     <li>? - matches any single character</li>
+ *     <li>[abc] - matches one of the characters listed in the brackets</li>
+ *     <li>[a-z] - matches one character from the range given in the brackets</li>
+ * </ul>
+ *
+ * <p> If a file does not match any include pattern it is excluded. If it matches an include
+ * pattern but also matches an exclude pattern it is excluded.
+ *
+ * <p> If no patterns are provided, all files are included.
+ */
+@Internal
+public class GlobFilePathFilter extends FilePathFilter {
+
+	private static final long serialVersionUID = 1L;
+
+	private final List<PathMatcher> includeMatchers;
+	private final List<PathMatcher> excludeMatchers;
+
+	/**
+	 * Constructor for GlobFilePathFilter that will match all files
+	 */
+	public GlobFilePathFilter() {
+		this(Collections.<String>emptyList(), Collections.<String>emptyList());
+	}
+
+	/**
+	 * Constructor for GlobFilePathFilter
+	 *
+	 * @param includePatterns glob patterns for files to include
+	 * @param excludePatterns glob patterns for files to exclude
+	 */
+	public GlobFilePathFilter(List<String> includePatterns, List<String> excludePatterns) {
+		includeMatchers = buildPatterns(includePatterns);
+		excludeMatchers = buildPatterns(excludePatterns);
+	}
+
+	private List<PathMatcher> buildPatterns(List<String> patterns) {
+		FileSystem fileSystem = FileSystems.getDefault();
+		List<PathMatcher> matchers = new ArrayList<>();
+
+		for (String patternStr : patterns) {
+			matchers.add(fileSystem.getPathMatcher("glob:" + patternStr));
+		}
+
+		return matchers;
+	}
+
+	@Override
+	public boolean filterPath(Path filePath) {
+		if (includeMatchers.isEmpty() && excludeMatchers.isEmpty()) {
+			return false;
+		}
+
+		for (PathMatcher matcher : includeMatchers) {
+			if (matcher.matches(Paths.get(filePath.getPath()))) {
+				return shouldExclude(filePath);
+			}
+		}
+
+		return true;
+	}
+
+	private boolean shouldExclude(Path filePath) {
+		for (PathMatcher matcher : excludeMatchers) {
+			if (matcher.matches(Paths.get(filePath.getPath()))) {
+				return true;
+			}
+		}
+		return false;
+	}
+
+}

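To make the matching rules above concrete, here is a small sketch of what `filterPath` returns for a few hypothetical paths, given one include and one exclude pattern (`true` means the file is filtered out):

~~~java
import java.util.Collections;

import org.apache.flink.api.common.io.GlobFilePathFilter;
import org.apache.flink.core.fs.Path;

public class GlobSemanticsDemo {

    public static void main(String[] args) {
        GlobFilePathFilter filter = new GlobFilePathFilter(
            Collections.singletonList("**/*.txt"),    // include
            Collections.singletonList("**/_tmp/**")); // exclude

        System.out.println(filter.filterPath(new Path("data/a.txt")));      // false: include matches, no exclude
        System.out.println(filter.filterPath(new Path("data/_tmp/b.txt"))); // true:  include and exclude both match
        System.out.println(filter.filterPath(new Path("data/c.bin")));      // true:  no include pattern matches
    }
}
~~~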
http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/core/fs/Path.java b/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
index 4c77199..7adfa42 100644
--- a/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
+++ b/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
@@ -145,7 +145,7 @@ public class Path implements IOReadableWritable, Serializable {
 	}
 
 	/**
- 	 * Checks if the provided path string is either null or has zero length and throws
+	 * Checks if the provided path string is either null or has zero length and throws
 	 * a {@link IllegalArgumentException} if any of the two conditions apply.
 	 * In addition, leading and tailing whitespaces are removed.
 	 *
@@ -333,6 +333,14 @@ public class Path implements IOReadableWritable, Serializable {
 	}
 
 	/**
+	 * Returns the full path.
+	 * @return the full path
+	 */
+	public String getPath() {
+		return uri.getPath();
+	}
+
+	/**
 	 * Returns the parent of a path, i.e., everything that precedes the last separator
 	 * or <code>null</code> if at root.
 	 * 

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileStatus.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileStatus.java b/flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileStatus.java
index 0aebd75..3e127ff 100644
--- a/flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileStatus.java
+++ b/flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileStatus.java
@@ -102,4 +102,12 @@ public class LocalFileStatus implements FileStatus {
 	public File getFile() {
 		return this.file;
 	}
+
+	@Override
+	public String toString() {
+		return "LocalFileStatus{" +
+			"file=" + file +
+			", path=" + path +
+			'}';
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/src/test/java/org/apache/flink/api/common/io/DefaultFilterTest.java
----------------------------------------------------------------------
diff --git a/flink-core/src/test/java/org/apache/flink/api/common/io/DefaultFilterTest.java b/flink-core/src/test/java/org/apache/flink/api/common/io/DefaultFilterTest.java
new file mode 100644
index 0000000..6956518
--- /dev/null
+++ b/flink-core/src/test/java/org/apache/flink/api/common/io/DefaultFilterTest.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.api.common.io;
+
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.apache.flink.core.fs.Path;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import static org.junit.Assert.assertEquals;
+
+@RunWith(Parameterized.class)
+public class DefaultFilterTest {
+	@Parameters
+	public static Collection<Object[]> data() {
+		return Arrays.asList(new Object[][] {
+			{"file.txt",			false},
+
+			{".file.txt",			true},
+			{"dir/.file.txt",		true},
+			{".dir/file.txt",		false},
+
+			{"_file.txt",			true},
+			{"dir/_file.txt",		true},
+			{"_dir/file.txt",		false},
+
+			// Check filtering Hadoop's unfinished files
+			{FilePathFilter.HADOOP_COPYING,			true},
+			{"dir/" + FilePathFilter.HADOOP_COPYING,		true},
+			{FilePathFilter.HADOOP_COPYING + "/file.txt",	false},
+		});
+	}
+
+	private final boolean shouldFilter;
+	private final String filePath;
+
+	public DefaultFilterTest(String filePath, boolean shouldFilter) {
+		this.filePath = filePath;
+		this.shouldFilter = shouldFilter;
+	}
+
+	@Test
+	public void test() {
+		FilePathFilter defaultFilter = FilePathFilter.createDefaultFilter();
+		Path path = new Path(filePath);
+		assertEquals(
+			String.format("File: %s", filePath),
+			shouldFilter,
+			defaultFilter.filterPath(path));
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/src/test/java/org/apache/flink/api/common/io/FileInputFormatTest.java
----------------------------------------------------------------------
diff --git a/flink-core/src/test/java/org/apache/flink/api/common/io/FileInputFormatTest.java b/flink-core/src/test/java/org/apache/flink/api/common/io/FileInputFormatTest.java
index ae8802b..dcd6583 100644
--- a/flink-core/src/test/java/org/apache/flink/api/common/io/FileInputFormatTest.java
+++ b/flink-core/src/test/java/org/apache/flink/api/common/io/FileInputFormatTest.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.api.common.io;
 
+import com.google.common.collect.Lists;
 import org.apache.flink.api.common.io.FileInputFormat.FileBaseStatistics;
 import org.apache.flink.api.common.io.statistics.BaseStatistics;
 import org.apache.flink.configuration.Configuration;
@@ -27,16 +28,17 @@ import org.apache.flink.testutils.TestFileUtils;
 import org.apache.flink.types.IntValue;
 
 import org.junit.Assert;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
 
 import java.io.BufferedOutputStream;
-import java.io.BufferedWriter;
 import java.io.File;
 import java.io.FileOutputStream;
-import java.io.FileWriter;
 import java.io.IOException;
 import java.io.InputStream;
 import java.net.URI;
+import java.util.Collections;
 
 import static org.junit.Assert.*;
 
@@ -45,6 +47,9 @@ import static org.junit.Assert.*;
  */
 public class FileInputFormatTest {
 
+	@Rule
+	public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
 	// ------------------------------------------------------------------------
 	//  Statistics
 	// ------------------------------------------------------------------------
@@ -257,41 +262,21 @@ public class FileInputFormatTest {
 	public void testIgnoredUnderscoreFiles() {
 		try {
 			final String contents = "CONTENTS";
-			
+
 			// create some accepted, some ignored files
-			
-			File tempDir = new File(System.getProperty("java.io.tmpdir"));
-			File f;
-			do {
-				f = new File(tempDir, TestFileUtils.randomFileName(""));
-			}
-			while (f.exists());
 
-			assertTrue(f.mkdirs());
-			f.deleteOnExit();
-			
-			File child1 = new File(f, "dataFile1.txt");
-			File child2 = new File(f, "another_file.bin");
-			File luigiFile = new File(f, "_luigi");
-			File success = new File(f, "_SUCCESS");
-			
-			File[] files = { child1, child2, luigiFile, success };
-			
-			for (File child : files) {
-				child.deleteOnExit();
-			
-				BufferedWriter out = new BufferedWriter(new FileWriter(child));
-				try { 
-					out.write(contents);
-				} finally {
-					out.close();
-				}
-			}
-			
+
+			File child1 = temporaryFolder.newFile("dataFile1.txt");
+			File child2 = temporaryFolder.newFile("another_file.bin");
+			File luigiFile = temporaryFolder.newFile("_luigi");
+			File success = temporaryFolder.newFile("_SUCCESS");
+
+			createTempFiles(contents.getBytes(), child1, child2, luigiFile, success);
+
 			// test that only the valid files are accepted
 			
 			final DummyFileInputFormat format = new DummyFileInputFormat();
-			format.setFilePath(f.toURI().toString());
+			format.setFilePath(temporaryFolder.getRoot().toURI().toString());
 			format.configure(new Configuration());
 			FileInputSplit[] splits = format.createInputSplits(1);
 			
@@ -314,43 +299,95 @@ public class FileInputFormatTest {
 	}
 
 	@Test
+	public void testExcludeFiles() {
+		try {
+			final String contents = "CONTENTS";
+
+			// create some accepted, some ignored files
+
+			File child1 = temporaryFolder.newFile("dataFile1.txt");
+			File child2 = temporaryFolder.newFile("another_file.bin");
+
+			File[] files = { child1, child2 };
+
+			createTempFiles(contents.getBytes(), files);
+
+			// test that only the valid files are accepted
+
+			Configuration configuration = new Configuration();
+
+			final DummyFileInputFormat format = new DummyFileInputFormat();
+			format.setFilePath(temporaryFolder.getRoot().toURI().toString());
+			format.configure(configuration);
+			format.setFilesFilter(new GlobFilePathFilter(
+				Collections.singletonList("**"),
+				Collections.singletonList("**/another_file.bin")));
+			FileInputSplit[] splits = format.createInputSplits(1);
+
+			Assert.assertEquals(1, splits.length);
+
+			final URI uri1 = splits[0].getPath().toUri();
+
+			final URI childUri1 = child1.toURI();
+
+			Assert.assertEquals(uri1, childUri1);
+		}
+		catch (Exception e) {
+			System.err.println(e.getMessage());
+			e.printStackTrace();
+			Assert.fail(e.getMessage());
+		}
+	}
+
+	@Test
+	public void testReadMultiplePatterns() {
+		try {
+			final String contents = "CONTENTS";
+
+			// create some accepted, some ignored files
+
+			File child1 = temporaryFolder.newFile("dataFile1.txt");
+			File child2 = temporaryFolder.newFile("another_file.bin");
+			createTempFiles(contents.getBytes(), child1, child2);
+
+			// test that only the valid files are accepted
+
+			Configuration configuration = new Configuration();
+
+			final DummyFileInputFormat format = new DummyFileInputFormat();
+			format.setFilePath(temporaryFolder.getRoot().toURI().toString());
+			format.configure(configuration);
+			format.setFilesFilter(new GlobFilePathFilter(
+				Collections.singletonList("**"),
+				Lists.newArrayList("**/another_file.bin", "**/dataFile1.txt")
+			));
+			FileInputSplit[] splits = format.createInputSplits(1);
+
+			Assert.assertEquals(0, splits.length);
+		}
+		catch (Exception e) {
+			System.err.println(e.getMessage());
+			e.printStackTrace();
+			Assert.fail(e.getMessage());
+		}
+	}
+
+	@Test
 	public void testGetStatsIgnoredUnderscoreFiles() {
 		try {
-			final long SIZE = 2048;
+			final int SIZE = 2048;
 			final long TOTAL = 2*SIZE;
 
 			// create two accepted and two ignored files
-			File tempDir = new File(System.getProperty("java.io.tmpdir"));
-			File f;
-			do {
-				f = new File(tempDir, TestFileUtils.randomFileName(""));
-			}
-			while (f.exists());
-			
-			assertTrue(f.mkdirs());
-			f.deleteOnExit();
-
-			File child1 = new File(f, "dataFile1.txt");
-			File child2 = new File(f, "another_file.bin");
-			File luigiFile = new File(f, "_luigi");
-			File success = new File(f, "_SUCCESS");
-
-			File[] files = { child1, child2, luigiFile, success };
+			File child1 = temporaryFolder.newFile("dataFile1.txt");
+			File child2 = temporaryFolder.newFile("another_file.bin");
+			File luigiFile = temporaryFolder.newFile("_luigi");
+			File success = temporaryFolder.newFile("_SUCCESS");
 
-			for (File child : files) {
-				child.deleteOnExit();
+			createTempFiles(new byte[SIZE], child1, child2, luigiFile, success);
 
-				BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(child));
-				try {
-					for (long bytes = SIZE; bytes > 0; bytes--) {
-						out.write(0);
-					}
-				} finally {
-					out.close();
-				}
-			}
 			final DummyFileInputFormat format = new DummyFileInputFormat();
-			format.setFilePath(f.toURI().toString());
+			format.setFilePath(temporaryFolder.getRoot().toURI().toString());
 			format.configure(new Configuration());
 
 			// check that only valid files are used for statistics computation
@@ -406,7 +443,20 @@ public class FileInputFormatTest {
 	}
 	
 	// ------------------------------------------------------------------------
-	
+
+	private void createTempFiles(byte[] contents, File... files) throws IOException {
+		for (File child : files) {
+			child.deleteOnExit();
+
+			BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(child));
+			try {
+				out.write(contents);
+			} finally {
+				out.close();
+			}
+		}
+	}
+
 	private class DummyFileInputFormat extends FileInputFormat<IntValue> {
 		private static final long serialVersionUID = 1L;
 

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java
----------------------------------------------------------------------
diff --git a/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java b/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java
new file mode 100644
index 0000000..bced076
--- /dev/null
+++ b/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.common.io;
+
+import org.apache.flink.core.fs.Path;
+import org.junit.Test;
+
+import java.util.Collections;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class GlobFilePathFilterTest {
+	@Test
+	public void defaultConstructorCreateMatchAllFilter() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter();
+		assertFalse(matcher.filterPath(new Path("dir/file.txt")));
+	}
+
+	@Test
+	public void matchAllFilesByDefault() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.<String>emptyList(),
+			Collections.<String>emptyList());
+
+		assertFalse(matcher.filterPath(new Path("dir/file.txt")));
+	}
+
+	@Test
+	public void excludeFilesNotInIncludePatterns() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("dir/*"),
+			Collections.<String>emptyList());
+
+		assertFalse(matcher.filterPath(new Path("dir/file.txt")));
+		assertTrue(matcher.filterPath(new Path("dir1/file.txt")));
+	}
+
+	@Test
+	public void excludeFilesIfMatchesExclude() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("dir/*"),
+			Collections.singletonList("dir/file.txt"));
+
+		assertTrue(matcher.filterPath(new Path("dir/file.txt")));
+	}
+
+	@Test
+	public void includeFileWithAnyCharacterMatcher() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("dir/?.txt"),
+			Collections.<String>emptyList());
+
+		assertFalse(matcher.filterPath(new Path("dir/a.txt")));
+		assertTrue(matcher.filterPath(new Path("dir/aa.txt")));
+	}
+
+	@Test
+	public void includeFileWithCharacterSetMatcher() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("dir/[acd].txt"),
+			Collections.<String>emptyList());
+
+		assertFalse(matcher.filterPath(new Path("dir/a.txt")));
+		assertFalse(matcher.filterPath(new Path("dir/c.txt")));
+		assertFalse(matcher.filterPath(new Path("dir/d.txt")));
+		assertTrue(matcher.filterPath(new Path("dir/z.txt")));
+	}
+
+	@Test
+	public void includeFileWithCharacterRangeMatcher() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("dir/[a-d].txt"),
+			Collections.<String>emptyList());
+
+		assertFalse(matcher.filterPath(new Path("dir/a.txt")));
+		assertFalse(matcher.filterPath(new Path("dir/b.txt")));
+		assertFalse(matcher.filterPath(new Path("dir/c.txt")));
+		assertFalse(matcher.filterPath(new Path("dir/d.txt")));
+		assertTrue(matcher.filterPath(new Path("dir/z.txt")));
+	}
+
+	@Test
+	public void excludeHDFSFile() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("**"),
+			Collections.singletonList("/dir/file2.txt"));
+
+		assertFalse(matcher.filterPath(new Path("hdfs:///dir/file1.txt")));
+		assertTrue(matcher.filterPath(new Path("hdfs:///dir/file2.txt")));
+		assertFalse(matcher.filterPath(new Path("hdfs:///dir/file3.txt")));
+	}
+
+	@Test
+	public void excludeFilenameWithStar() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("**"),
+			Collections.singletonList("\\*"));
+
+		assertTrue(matcher.filterPath(new Path("*")));
+		assertFalse(matcher.filterPath(new Path("**")));
+		assertFalse(matcher.filterPath(new Path("other.txt")));
+	}
+
+	@Test
+	public void singleStarPattern() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("*"),
+			Collections.<String>emptyList());
+
+		assertFalse(matcher.filterPath(new Path("a")));
+		assertTrue(matcher.filterPath(new Path("a/b")));
+		assertTrue(matcher.filterPath(new Path("a/b/c")));
+	}
+
+	@Test
+	public void doubleStarPattern() {
+		GlobFilePathFilter matcher = new GlobFilePathFilter(
+			Collections.singletonList("**"),
+			Collections.<String>emptyList());
+
+		assertFalse(matcher.filterPath(new Path("a")));
+		assertFalse(matcher.filterPath(new Path("a/b")));
+		assertFalse(matcher.filterPath(new Path("a/b/c")));
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringFunctionITCase.java
----------------------------------------------------------------------
diff --git a/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringFunctionITCase.java b/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringFunctionITCase.java
index e6cd5d9..663345c 100644
--- a/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringFunctionITCase.java
+++ b/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringFunctionITCase.java
@@ -26,7 +26,7 @@ import org.apache.flink.core.fs.Path;
 import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
-import org.apache.flink.streaming.api.functions.source.FilePathFilter;
+import org.apache.flink.api.common.io.FilePathFilter;
 import org.apache.flink.streaming.api.functions.source.ContinuousFileMonitoringFunction;
 import org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator;
 import org.apache.flink.streaming.api.functions.source.FileProcessingMode;
@@ -122,9 +122,9 @@ public class ContinuousFileMonitoringFunctionITCase extends StreamingProgramTest
 			StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 			env.setParallelism(1);
 
+			format.setFilesFilter(FilePathFilter.createDefaultFilter());
 			ContinuousFileMonitoringFunction<String> monitoringFunction =
 				new ContinuousFileMonitoringFunction<>(format, hdfsURI,
-					FilePathFilter.createDefaultFilter(),
 					FileProcessingMode.PROCESS_CONTINUOUSLY,
 					env.getParallelism(), INTERVAL);
 

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringTest.java
----------------------------------------------------------------------
diff --git a/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringTest.java b/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringTest.java
index def9378..4aadaec 100644
--- a/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringTest.java
+++ b/flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringTest.java
@@ -26,7 +26,7 @@ import org.apache.flink.api.java.typeutils.TypeExtractor;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.core.fs.FileInputSplit;
 import org.apache.flink.core.fs.Path;
-import org.apache.flink.streaming.api.functions.source.FilePathFilter;
+import org.apache.flink.api.common.io.FilePathFilter;
 import org.apache.flink.streaming.api.functions.source.ContinuousFileMonitoringFunction;
 import org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator;
 import org.apache.flink.streaming.api.functions.source.FileProcessingMode;
@@ -216,8 +216,9 @@ public class ContinuousFileMonitoringTest {
 		}
 
 		TextInputFormat format = new TextInputFormat(new Path(hdfsURI));
+		format.setFilesFilter(new PathFilter());
 		ContinuousFileMonitoringFunction<String> monitoringFunction =
-			new ContinuousFileMonitoringFunction<>(format, hdfsURI, new PathFilter(),
+			new ContinuousFileMonitoringFunction<>(format, hdfsURI,
 				FileProcessingMode.PROCESS_ONCE, 1, INTERVAL);
 
 		monitoringFunction.open(new Configuration());
@@ -242,8 +243,9 @@ public class ContinuousFileMonitoringTest {
 		fc.start();
 
 		TextInputFormat format = new TextInputFormat(new Path(hdfsURI));
+		format.setFilesFilter(FilePathFilter.createDefaultFilter());
 		ContinuousFileMonitoringFunction<String> monitoringFunction =
-			new ContinuousFileMonitoringFunction<>(format, hdfsURI, FilePathFilter.createDefaultFilter(),
+			new ContinuousFileMonitoringFunction<>(format, hdfsURI,
 				FileProcessingMode.PROCESS_CONTINUOUSLY, 1, INTERVAL);
 
 		monitoringFunction.open(new Configuration());
@@ -291,8 +293,9 @@ public class ContinuousFileMonitoringTest {
 		Assert.assertTrue(fc.getFilesCreated().size() >= 1);
 
 		TextInputFormat format = new TextInputFormat(new Path(hdfsURI));
+		format.setFilesFilter(FilePathFilter.createDefaultFilter());
 		ContinuousFileMonitoringFunction<String> monitoringFunction =
-			new ContinuousFileMonitoringFunction<>(format, hdfsURI, FilePathFilter.createDefaultFilter(),
+			new ContinuousFileMonitoringFunction<>(format, hdfsURI,
 				FileProcessingMode.PROCESS_ONCE, 1, INTERVAL);
 
 		monitoringFunction.open(new Configuration());
@@ -427,7 +430,7 @@ public class ContinuousFileMonitoringTest {
 		assert (hdfs != null);
 
 		org.apache.hadoop.fs.Path file = new org.apache.hadoop.fs.Path(base + "/" + fileName + fileIdx);
-		Assert.assertTrue (!hdfs.exists(file));
+		Assert.assertFalse(hdfs.exists(file));
 
 		org.apache.hadoop.fs.Path tmp = new org.apache.hadoop.fs.Path(base + "/." + fileName + fileIdx);
 		FSDataOutputStream stream = hdfs.create(tmp);

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
----------------------------------------------------------------------
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
index 1913a36..ead9564 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
@@ -52,7 +52,7 @@ import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.datastream.DataStreamSource;
 import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
 import org.apache.flink.streaming.api.functions.source.FileMonitoringFunction;
-import org.apache.flink.streaming.api.functions.source.FilePathFilter;
+import org.apache.flink.api.common.io.FilePathFilter;
 import org.apache.flink.streaming.api.functions.source.FileReadFunction;
 import org.apache.flink.streaming.api.functions.source.ContinuousFileMonitoringFunction;
 import org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator;
@@ -917,11 +917,11 @@ public abstract class StreamExecutionEnvironment {
 		Preconditions.checkNotNull(filePath.isEmpty(), "The file path must not be empty.");
 
 		TextInputFormat format = new TextInputFormat(new Path(filePath));
+		format.setFilesFilter(FilePathFilter.createDefaultFilter());
 		TypeInformation<String> typeInfo = BasicTypeInfo.STRING_TYPE_INFO;
 		format.setCharsetName(charsetName);
 
-		return readFile(format, filePath, FileProcessingMode.PROCESS_ONCE, -1,
-			FilePathFilter.createDefaultFilter(), typeInfo);
+		return readFile(format, filePath, FileProcessingMode.PROCESS_ONCE, -1, typeInfo);
 	}
 
 	/**
@@ -952,7 +952,52 @@ public abstract class StreamExecutionEnvironment {
 	 */
 	public <OUT> DataStreamSource<OUT> readFile(FileInputFormat<OUT> inputFormat,
 												String filePath) {
-		return readFile(inputFormat, filePath, FileProcessingMode.PROCESS_ONCE, -1, FilePathFilter.createDefaultFilter());
+		return readFile(inputFormat, filePath, FileProcessingMode.PROCESS_ONCE, -1);
+	}
+
+	/**
+	 *
+	 * Reads the contents of the user-specified {@code filePath} based on the given {@link FileInputFormat},
+	 * depending on the provided {@link FileProcessingMode}.
+	 * <p>
+	 * See {@link #readFile(FileInputFormat, String, FileProcessingMode, long)}
+	 *
+	 * @param inputFormat
+	 * 		The input format used to create the data stream
+	 * @param filePath
+	 * 		The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path")
+	 * @param watchType
+	 * 		The mode in which the source should operate, i.e. monitor path and react to new data, or process once and exit
+	 * @param interval
+	 * 		In the case of periodic path monitoring, this specifies the interval (in millis) between consecutive path scans
+	 * @param filter
+	 * 		The files to be excluded from the processing
+	 * @param <OUT>
+	 * 		The type of the returned data stream
+	 * @return The data stream that represents the data read from the given file
+	 *
+	 * @deprecated Use {@link FileInputFormat#setFilesFilter(FilePathFilter)} to set a filter and
+	 * 		{@link StreamExecutionEnvironment#readFile(FileInputFormat, String, FileProcessingMode, long)}
+	 *
+	 */
+	@PublicEvolving
+	@Deprecated
+	public <OUT> DataStreamSource<OUT> readFile(FileInputFormat<OUT> inputFormat,
+												String filePath,
+												FileProcessingMode watchType,
+												long interval,
+												FilePathFilter filter) {
+		inputFormat.setFilesFilter(filter);
+
+		TypeInformation<OUT> typeInformation;
+		try {
+			typeInformation = TypeExtractor.getInputFormatTypes(inputFormat);
+		} catch (Exception e) {
+			throw new InvalidProgramException("The type returned by the input format could not be " +
+				"automatically determined. Please specify the TypeInformation of the produced type " +
+				"explicitly by using the 'createInput(InputFormat, TypeInformation)' method instead.");
+		}
+		return readFile(inputFormat, filePath, watchType, interval, typeInformation);
 	}
 
 	/**
@@ -986,8 +1031,6 @@ public abstract class StreamExecutionEnvironment {
 	 * 		The mode in which the source should operate, i.e. monitor path and react to new data, or process once and exit
 	 * @param interval
 	 * 		In the case of periodic path monitoring, this specifies the interval (in millis) between consecutive path scans
-	 * @param filter
-	 * 		The files to be excluded from the processing
 	 * @param <OUT>
 	 * 		The type of the returned data stream
 	 * @return The data stream that represents the data read from the given file
@@ -996,8 +1039,7 @@ public abstract class StreamExecutionEnvironment {
 	public <OUT> DataStreamSource<OUT> readFile(FileInputFormat<OUT> inputFormat,
 												String filePath,
 												FileProcessingMode watchType,
-												long interval,
-												FilePathFilter filter) {
+												long interval) {
 
 		TypeInformation<OUT> typeInformation;
 		try {
@@ -1007,7 +1049,7 @@ public abstract class StreamExecutionEnvironment {
 				"automatically determined. Please specify the TypeInformation of the produced type " +
 				"explicitly by using the 'createInput(InputFormat, TypeInformation)' method instead.");
 		}
-		return readFile(inputFormat, filePath, watchType, interval, filter, typeInformation);
+		return readFile(inputFormat, filePath, watchType, interval, typeInformation);
 	}
 
 	/**
@@ -1057,8 +1099,6 @@ public abstract class StreamExecutionEnvironment {
 	 * 		The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path")
 	 * @param watchType
 	 * 		The mode in which the source should operate, i.e. monitor path and react to new data, or process once and exit
-	 * @param filter
-	 * 		The files to be excluded from the processing
 	 * @param typeInformation
 	 * 		Information on the type of the elements in the output stream
 	 * @param interval
@@ -1072,7 +1112,6 @@ public abstract class StreamExecutionEnvironment {
 												String filePath,
 												FileProcessingMode watchType,
 												long interval,
-												FilePathFilter filter,
 												TypeInformation<OUT> typeInformation) {
 
 		Preconditions.checkNotNull(inputFormat, "InputFormat must not be null.");
@@ -1080,7 +1119,7 @@ public abstract class StreamExecutionEnvironment {
 		Preconditions.checkNotNull(filePath.isEmpty(), "The file path must not be empty.");
 
 		inputFormat.setFilePath(filePath);
-		return createFileInput(inputFormat, typeInformation, "Custom File Source", watchType, filter, interval);
+		return createFileInput(inputFormat, typeInformation, "Custom File Source", watchType, interval);
 	}
 
 	/**
@@ -1250,8 +1289,7 @@ public abstract class StreamExecutionEnvironment {
 		if (inputFormat instanceof FileInputFormat) {
 			FileInputFormat<OUT> format = (FileInputFormat<OUT>) inputFormat;
 			source = createFileInput(format, typeInfo, "Custom File source",
-				FileProcessingMode.PROCESS_ONCE,
-				FilePathFilter.createDefaultFilter(),  -1);
+				FileProcessingMode.PROCESS_ONCE, -1);
 		} else {
 			source = createInput(inputFormat, typeInfo, "Custom Source");
 		}
@@ -1270,14 +1308,12 @@ public abstract class StreamExecutionEnvironment {
 														TypeInformation<OUT> typeInfo,
 														String sourceName,
 														FileProcessingMode monitoringMode,
-														FilePathFilter pathFilter,
 														long interval) {
 
 		Preconditions.checkNotNull(inputFormat, "Unspecified file input format.");
 		Preconditions.checkNotNull(typeInfo, "Unspecified output type information.");
 		Preconditions.checkNotNull(sourceName, "Unspecified name for the source.");
 		Preconditions.checkNotNull(monitoringMode, "Unspecified monitoring mode.");
-		Preconditions.checkNotNull(pathFilter, "Unspecified path name filtering function.");
 
 		Preconditions.checkArgument(monitoringMode.equals(FileProcessingMode.PROCESS_ONCE) ||
 			interval >= ContinuousFileMonitoringFunction.MIN_MONITORING_INTERVAL,
@@ -1286,7 +1322,7 @@ public abstract class StreamExecutionEnvironment {
 
 		ContinuousFileMonitoringFunction<OUT> monitoringFunction = new ContinuousFileMonitoringFunction<>(
 			inputFormat, inputFormat.getFilePath().toString(),
-			pathFilter, monitoringMode, getParallelism(), interval);
+			monitoringMode, getParallelism(), interval);
 
 		ContinuousFileReaderOperator<OUT, ?> reader = new ContinuousFileReaderOperator<>(inputFormat);
 

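The deprecations above imply the following migration for user code; a sketch assuming `env` is a `StreamExecutionEnvironment`, `format` is a `FileInputFormat`, and the path is made up:

~~~java
// before this change: the filter was passed to readFile(...)
env.readFile(format, "file:///data",
    FileProcessingMode.PROCESS_CONTINUOUSLY, 1000,
    FilePathFilter.createDefaultFilter());

// after this change: the filter is set on the input format itself
format.setFilesFilter(FilePathFilter.createDefaultFilter());
env.readFile(format, "file:///data",
    FileProcessingMode.PROCESS_CONTINUOUSLY, 1000);
~~~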
http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/ContinuousFileMonitoringFunction.java
----------------------------------------------------------------------
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/ContinuousFileMonitoringFunction.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/ContinuousFileMonitoringFunction.java
index 8ff4a2a..d36daab 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/ContinuousFileMonitoringFunction.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/ContinuousFileMonitoringFunction.java
@@ -18,6 +18,7 @@ package org.apache.flink.streaming.api.functions.source;
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.api.common.io.FileInputFormat;
+import org.apache.flink.api.common.io.FilePathFilter;
 import org.apache.flink.api.java.tuple.Tuple2;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.core.fs.FileInputSplit;
@@ -81,15 +82,13 @@ public class ContinuousFileMonitoringFunction<OUT>
 
 	private Long globalModificationTime;
 
-	private FilePathFilter pathFilter;
-
 	private transient Object checkpointLock;
 
 	private volatile boolean isRunning = true;
 
 	public ContinuousFileMonitoringFunction(
 		FileInputFormat<OUT> format, String path,
-		FilePathFilter filter, FileProcessingMode watchType,
+		FileProcessingMode watchType,
 		int readerParallelism, long interval) {
 
 		if (watchType != FileProcessingMode.PROCESS_ONCE && interval < MIN_MONITORING_INTERVAL) {
@@ -98,7 +97,6 @@ public class ContinuousFileMonitoringFunction<OUT>
 		}
 		this.format = Preconditions.checkNotNull(format, "Unspecified File Input Format.");
 		this.path = Preconditions.checkNotNull(path, "Unspecified Path.");
-		this.pathFilter = Preconditions.checkNotNull(filter, "Unspecified File Path Filter.");
 
 		this.interval = interval;
 		this.watchType = watchType;
@@ -274,7 +272,7 @@ public class ContinuousFileMonitoringFunction<OUT>
 	 */
 	private boolean shouldIgnore(Path filePath, long modificationTime) {
 		assert (Thread.holdsLock(checkpointLock));
-		boolean shouldIgnore = ((pathFilter != null && pathFilter.filterPath(filePath)) || modificationTime <= globalModificationTime);
+		boolean shouldIgnore = modificationTime <= globalModificationTime;
 		if (shouldIgnore) {
 			LOG.debug("Ignoring " + filePath + ", with mod time= " + modificationTime + " and global mod time= " + globalModificationTime);
 		}

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/FilePathFilter.java
----------------------------------------------------------------------
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/FilePathFilter.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/FilePathFilter.java
deleted file mode 100644
index 1a359ab..0000000
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/FilePathFilter.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.flink.streaming.api.functions.source;
-
-import org.apache.flink.annotation.PublicEvolving;
-import org.apache.flink.core.fs.Path;
-
-import java.io.Serializable;
-
-/**
- * An interface to be implemented by the user when using the {@link ContinuousFileMonitoringFunction}.
- * The {@link #filterPath(Path)} method is responsible for deciding if a path is eligible for further
- * processing or not. This can serve to exclude temporary or partial files that
- * are still being written.
- */
-@PublicEvolving
-public abstract class FilePathFilter implements Serializable {
-
-	public static FilePathFilter createDefaultFilter() {
-		return new DefaultFilter();
-	}
-	/**
-	 * Returns {@code true} if the {@code filePath} given is to be
-	 * ignored when processing a directory, e.g.
-	 * <pre>
-	 * {@code
-	 *
-	 * public boolean filterPaths(Path filePath) {
-	 *     return filePath.getName().startsWith(".") || filePath.getName().contains("_COPYING_");
-	 * }
-	 * }</pre>
-	 */
-	public abstract boolean filterPath(Path filePath);
-
-	/**
-	 * The default file path filtering method and is used
-	 * if no other such function is provided. This filter leaves out
-	 * files starting with ".", "_", and "_COPYING_".
-	 */
-	public static class DefaultFilter extends FilePathFilter {
-
-		DefaultFilter() {}
-
-		@Override
-		public boolean filterPath(Path filePath) {
-			return filePath == null ||
-				filePath.getName().startsWith(".") ||
-				filePath.getName().startsWith("_") ||
-				filePath.getName().contains("_COPYING_");
-		}
-	}
-}

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala
----------------------------------------------------------------------
diff --git a/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala b/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala
index f6dab1e..9cb36a5 100644
--- a/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala
+++ b/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala
@@ -20,7 +20,7 @@ package org.apache.flink.streaming.api.scala
 
 import com.esotericsoftware.kryo.Serializer
 import org.apache.flink.annotation.{Internal, Public, PublicEvolving}
-import org.apache.flink.api.common.io.{FileInputFormat, InputFormat}
+import org.apache.flink.api.common.io.{FileInputFormat, FilePathFilter, InputFormat}
 import org.apache.flink.api.common.restartstrategy.RestartStrategies.RestartStrategyConfiguration
 import org.apache.flink.api.common.typeinfo.TypeInformation
 import org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer
@@ -467,6 +467,40 @@ class StreamExecutionEnvironment(javaEnv: JavaEnv) {
 
   /**
     * Reads the contents of the user-specified path based on the given [[FileInputFormat]].
+    * The behavior depends on the provided [[FileProcessingMode]].
+    *
+    * @param inputFormat
+    *          The input format used to create the data stream
+    * @param filePath
+    *          The path of the file, as a URI (e.g., "file:///some/local/file" or
+    *          "hdfs://host:port/file/path")
+    * @param watchType
+    *          The mode in which the source should operate, i.e. monitor path and react
+    *          to new data, or process once and exit
+    * @param interval
+    *          In the case of periodic path monitoring, this specifies the interval (in millis)
+    *          between consecutive path scans
+    * @param filter
+    *          The files to be excluded from the processing
+    * @return The data stream that represents the data read from the given file
+    *
+    * @deprecated Use {@link FileInputFormat#setFilesFilter(FilePathFilter)} to set a filter and
+    *         {@link StreamExecutionEnvironment#readFile(FileInputFormat,
+    *              String, FileProcessingMode, long)}
+    */
+  @PublicEvolving
+  @Deprecated
+  def readFile[T: TypeInformation](
+                                    inputFormat: FileInputFormat[T],
+                                    filePath: String,
+                                    watchType: FileProcessingMode,
+                                    interval: Long,
+                                    filter: FilePathFilter): DataStream[T] = {
+    asScalaStream(javaEnv.readFile(inputFormat, filePath, watchType, interval, filter))
+  }
+
+  /**
+    * Reads the contents of the user-specified path based on the given [[FileInputFormat]].
     * Depending on the provided [[FileProcessingMode]], the source
     * may periodically monitor (every `interval` ms) the path for new data
     * ([[FileProcessingMode.PROCESS_CONTINUOUSLY]]), or process
@@ -496,8 +530,6 @@ class StreamExecutionEnvironment(javaEnv: JavaEnv) {
     * @param interval
     *          In the case of periodic path monitoring, this specifies the interval (in millis)
     *          between consecutive path scans
-    * @param filter
-    *          The files to be excluded from the processing
     * @return The data stream that represents the data read from the given file
     */
   @PublicEvolving
@@ -505,10 +537,9 @@ class StreamExecutionEnvironment(javaEnv: JavaEnv) {
       inputFormat: FileInputFormat[T],
       filePath: String,
       watchType: FileProcessingMode,
-      interval: Long,
-      filter: FilePathFilter): DataStream[T] = {
+      interval: Long): DataStream[T] = {
     val typeInfo = implicitly[TypeInformation[T]]
-    asScalaStream(javaEnv.readFile(inputFormat, filePath, watchType, interval, filter, typeInfo))
+    asScalaStream(javaEnv.readFile(inputFormat, filePath, watchType, interval, typeInfo))
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/flink/blob/48109104/flink-tests/src/test/java/org/apache/flink/test/checkpointing/ContinuousFileProcessingCheckpointITCase.java
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/java/org/apache/flink/test/checkpointing/ContinuousFileProcessingCheckpointITCase.java b/flink-tests/src/test/java/org/apache/flink/test/checkpointing/ContinuousFileProcessingCheckpointITCase.java
index d540a92..a265c0a 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/checkpointing/ContinuousFileProcessingCheckpointITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/checkpointing/ContinuousFileProcessingCheckpointITCase.java
@@ -29,7 +29,7 @@ import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
 import org.apache.flink.streaming.api.functions.source.ContinuousFileMonitoringFunction;
-import org.apache.flink.streaming.api.functions.source.FilePathFilter;
+import org.apache.flink.api.common.io.FilePathFilter;
 import org.apache.flink.streaming.api.functions.source.FileProcessingMode;
 import org.apache.flink.test.util.SuccessException;
 import org.apache.flink.util.Collector;
@@ -112,8 +112,9 @@ public class ContinuousFileProcessingCheckpointITCase extends StreamFaultToleran
 		// create the monitoring source along with the necessary readers.
 		TestingSinkFunction sink = new TestingSinkFunction();
 		TextInputFormat format = new TextInputFormat(new org.apache.flink.core.fs.Path(localFsURI));
+		format.setFilesFilter(FilePathFilter.createDefaultFilter());
 		DataStream<String> inputStream = env.readFile(format, localFsURI,
-			FileProcessingMode.PROCESS_CONTINUOUSLY, INTERVAL, FilePathFilter.createDefaultFilter());
+			FileProcessingMode.PROCESS_CONTINUOUSLY, INTERVAL);
 
 		inputStream.flatMap(new FlatMapFunction<String, String>() {
 			@Override


[83/89] [abbrv] flink git commit: [FLINK-4355] [cluster management] Implement TaskManager side of registration at ResourceManager.

Posted by se...@apache.org.
[FLINK-4355] [cluster management] Implement TaskManager side of registration at ResourceManager.

This closes #2353


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/7db27883
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/7db27883
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/7db27883

Branch: refs/heads/flip-6
Commit: 7db2788377200589b86703f8f91f19d004881b88
Parents: 84bd375
Author: Stephan Ewen <se...@apache.org>
Authored: Wed Aug 10 20:42:45 2016 +0200
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:04 2016 +0200

----------------------------------------------------------------------
 .../HighAvailabilityServices.java               |  39 +++
 .../runtime/highavailability/NonHaServices.java |  59 ++++
 .../StandaloneLeaderRetrievalService.java       |  72 +++--
 .../apache/flink/runtime/rpc/RpcEndpoint.java   |   1 -
 .../apache/flink/runtime/rpc/RpcService.java    |  27 ++
 .../flink/runtime/rpc/akka/AkkaRpcService.java  |  18 ++
 .../runtime/rpc/akka/messages/RunAsync.java     |   1 +
 .../rpc/registration/RegistrationResponse.java  |  84 ++++++
 .../rpc/registration/RetryingRegistration.java  | 292 +++++++++++++++++++
 .../rpc/resourcemanager/ResourceManager.java    |  23 ++
 .../resourcemanager/ResourceManagerGateway.java |  21 +-
 .../runtime/rpc/taskexecutor/SlotReport.java    |  38 +++
 .../runtime/rpc/taskexecutor/TaskExecutor.java  | 169 ++++++++---
 .../rpc/taskexecutor/TaskExecutorGateway.java   |  29 +-
 .../TaskExecutorRegistrationSuccess.java        |  75 +++++
 ...TaskExecutorToResourceManagerConnection.java | 194 ++++++++++++
 .../runtime/rpc/akka/AkkaRpcServiceTest.java    |  51 +++-
 .../rpc/taskexecutor/TaskExecutorTest.java      |  87 +-----
 18 files changed, 1105 insertions(+), 175 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java
new file mode 100644
index 0000000..094d36f
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.highavailability;
+
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
+
+/**
+ * This interface gives access to all services needed for
+ *
+ * <ul>
+ *     <li>ResourceManager leader election and leader retrieval</li>
+ *     <li>JobManager leader election and leader retrieval</li>
+ *     <li>Persistence for checkpoint metadata</li>
+ *     <li>Registering the latest completed checkpoint(s)</li>
+ * </ul>
+ */
+public interface HighAvailabilityServices {
+
+	/**
+	 * Gets the leader retriever for the cluster's resource manager.
+	 */
+	LeaderRetrievalService getResourceManagerLeaderRetriever() throws Exception;
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java
new file mode 100644
index 0000000..b8c2ed8
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.highavailability;
+
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
+import org.apache.flink.runtime.leaderretrieval.StandaloneLeaderRetrievalService;
+
+import java.util.UUID;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * An implementation of the {@link HighAvailabilityServices} for the non-high-availability case.
+ * This implementation can be used for testing, and for cluster setups that do not
+ * tolerate failures of the master processes (JobManager, ResourceManager).
+ * 
+ * <p>This implementation has no dependencies on any external services. It returns fixed,
+ * pre-configured leaders, and stores checkpoints and metadata simply on the heap and therefore
+ * in volatile memory.
+ */
+public class NonHaServices implements HighAvailabilityServices {
+
+	/** The fixed address of the ResourceManager */
+	private final String resourceManagerAddress;
+
+	/**
+	 * Creates a new services instance for the fixed, pre-defined leaders.
+	 *
+	 * @param resourceManagerAddress    The fixed address of the ResourceManager
+	 */
+	public NonHaServices(String resourceManagerAddress) {
+		this.resourceManagerAddress = checkNotNull(resourceManagerAddress);
+	}
+
+	// ------------------------------------------------------------------------
+	//  Services
+	// ------------------------------------------------------------------------
+
+	@Override
+	public LeaderRetrievalService getResourceManagerLeaderRetriever() throws Exception {
+		return new StandaloneLeaderRetrievalService(resourceManagerAddress, new UUID(0, 0));
+	}
+}
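
A minimal consumption sketch (hypothetical address; not part of this commit; imports as in the diffs of this commit):

    static void printResourceManagerLeader() throws Exception {
        HighAvailabilityServices ha =
            new NonHaServices("akka.tcp://flink@localhost:6123/user/resourcemanager");
        ha.getResourceManagerLeaderRetriever().start(new LeaderRetrievalListener() {
            @Override
            public void notifyLeaderAddress(String leaderAddress, UUID leaderSessionID) {
                // with NonHaServices this fires immediately with the fixed address
                System.out.println("ResourceManager leader: " + leaderAddress);
            }

            @Override
            public void handleError(Exception exception) {
                exception.printStackTrace();
            }
        });
    }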

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/leaderretrieval/StandaloneLeaderRetrievalService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/leaderretrieval/StandaloneLeaderRetrievalService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/leaderretrieval/StandaloneLeaderRetrievalService.java
index 26a34aa..16b163c 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/leaderretrieval/StandaloneLeaderRetrievalService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/leaderretrieval/StandaloneLeaderRetrievalService.java
@@ -18,44 +18,74 @@
 
 package org.apache.flink.runtime.leaderretrieval;
 
-import org.apache.flink.util.Preconditions;
+import java.util.UUID;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
 
 /**
- * Standalone implementation of the {@link LeaderRetrievalService}. The standalone implementation
- * assumes that there is only a single {@link org.apache.flink.runtime.jobmanager.JobManager} whose
- * address is given to the service when creating it. This address is directly given to the
- * {@link LeaderRetrievalListener} when the service is started.
+ * Standalone implementation of the {@link LeaderRetrievalService}. This implementation
+ * assumes that there is only a single contender for leadership
+ * (e.g., a single JobManager or ResourceManager process) and that this process is
+ * reachable under a constant address.
+ * 
+ * <p>As soon as this service is started, it immediately notifies the leader listener
+ * of the leader contender with the pre-configured address.
  */
 public class StandaloneLeaderRetrievalService implements LeaderRetrievalService {
 
-	/** Address of the only JobManager */
-	private final String jobManagerAddress;
+	private final Object startStopLock = new Object();
+	
+	/** The fixed address of the leader */
+	private final String leaderAddress;
+
+	/** The fixed leader ID (leader lock fencing token) */
+	private final UUID leaderId;
 
-	/** Listener which wants to be notified about the new leader */
-	private LeaderRetrievalListener leaderListener;
+	/** Flag whether this service is started */
+	private boolean started;
 
 	/**
-	 * Creates a StandaloneLeaderRetrievalService with the given JobManager address.
+	 * Creates a StandaloneLeaderRetrievalService with the given leader address.
+	 * The leaderId will be null.
 	 *
-	 * @param jobManagerAddress The JobManager's address which is returned to the
-	 * 							{@link LeaderRetrievalListener}
+	 * @param leaderAddress The leader's pre-configured address
 	 */
-	public StandaloneLeaderRetrievalService(String jobManagerAddress) {
-		this.jobManagerAddress = jobManagerAddress;
+	public StandaloneLeaderRetrievalService(String leaderAddress) {
+		this.leaderAddress = checkNotNull(leaderAddress);
+		this.leaderId = null;
 	}
 
+	/**
+	 * Creates a StandaloneLeaderRetrievalService with the given leader address.
+	 *
+	 * @param leaderAddress The leader's pre-configured address
+	 * @param leaderId      The constant leaderId.
+	 */
+	public StandaloneLeaderRetrievalService(String leaderAddress, UUID leaderId) {
+		this.leaderAddress = checkNotNull(leaderAddress);
+		this.leaderId = checkNotNull(leaderId);
+	}
+
+	// ------------------------------------------------------------------------
+
 	@Override
 	public void start(LeaderRetrievalListener listener) {
-		Preconditions.checkNotNull(listener, "Listener must not be null.");
-		Preconditions.checkState(leaderListener == null, "StandaloneLeaderRetrievalService can " +
-				"only be started once.");
+		checkNotNull(listener, "Listener must not be null.");
 
-		leaderListener = listener;
+		synchronized (startStopLock) {
+			checkState(!started, "StandaloneLeaderRetrievalService can only be started once.");
+			started = true;
 
-		// directly notify the listener, because we already know the leading JobManager's address
-		leaderListener.notifyLeaderAddress(jobManagerAddress, null);
+			// directly notify the listener, because we already know the leader's address
+			listener.notifyLeaderAddress(leaderAddress, leaderId);
+		}
 	}
 
 	@Override
-	public void stop() {}
+	public void stop() {
+		synchronized (startStopLock) {
+			started = false;
+		}
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
index 67ac182..a28bc14 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
@@ -237,7 +237,6 @@ public abstract class RpcEndpoint<C extends RpcGateway> {
 	 * }</pre>
 	 */
 	public void validateRunsInMainThread() {
-		// because the initialization is lazy, it can be that certain methods are
 		assert currentMainThread.get() == Thread.currentThread();
 	}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
index f93be83..fabdb05 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/RpcService.java
@@ -18,8 +18,11 @@
 
 package org.apache.flink.runtime.rpc;
 
+import scala.concurrent.ExecutionContext;
 import scala.concurrent.Future;
 
+import java.util.concurrent.TimeUnit;
+
 /**
  * Interface for rpc services. An rpc service is used to start and connect to a {@link RpcEndpoint}.
  * Connecting to a rpc server will return a {@link RpcGateway} which can be used to call remote
@@ -71,4 +74,28 @@ public interface RpcService {
 	 * @return Fully qualified address
 	 */
 	<C extends RpcGateway> String getAddress(C selfGateway);
+
+	/**
+	 * Gets the execution context, provided by this RPC service. This execution
+	 * context can be used for example for the {@code onComplete(...)} or {@code onSuccess(...)}
+	 * methods of Futures.
+	 * 
+	 * <p><b>IMPORTANT:</b> This execution context does not isolate the method invocations against
+	 * any concurrent invocations and is therefore not suitable to run completion methods of futures
+	 * that modify state of an {@link RpcEndpoint}. For such operations, one needs to use the
+	 * {@link RpcEndpoint#getMainThreadExecutionContext() MainThreadExecutionContext} of that
+	 * {@code RpcEndpoint}.
+	 * 
+	 * @return The execution context provided by the RPC service
+	 */
+	ExecutionContext getExecutionContext();
+
+	/**
+	 * Execute the runnable in the execution context of this RPC Service, as returned by
+	 * {@link #getExecutionContext()}, after a scheduled delay.
+	 *
+	 * @param runnable Runnable to be executed
+	 * @param delay    The delay after which the runnable will be executed
+	 * @param unit     The time unit of the delay
+	 */
+	void scheduleRunnable(Runnable runnable, long delay, TimeUnit unit);
 }
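
A minimal usage sketch (assumes a started RpcService instance; not part of this commit):

    static void scheduleExample(RpcService rpcService) {
        rpcService.scheduleRunnable(new Runnable() {
            @Override
            public void run() {
                // runs on the service's execution context, not on any endpoint's
                // main thread, so it must not modify RpcEndpoint state directly
                System.out.println("executed after 50 ms");
            }
        }, 50, TimeUnit.MILLISECONDS);
    }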

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
index 7b33524..b647bbd 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
@@ -38,14 +38,18 @@ import org.apache.flink.runtime.rpc.StartStoppable;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import scala.concurrent.ExecutionContext;
 import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
 
 import javax.annotation.concurrent.ThreadSafe;
 import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.Proxy;
 import java.util.HashSet;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 
+import static org.apache.flink.util.Preconditions.checkArgument;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.apache.flink.util.Preconditions.checkState;
 
@@ -199,4 +203,18 @@ public class AkkaRpcService implements RpcService {
 			throw new IllegalArgumentException("Cannot get address for non " + className + '.');
 		}
 	}
+
+	@Override
+	public ExecutionContext getExecutionContext() {
+		return actorSystem.dispatcher();
+	}
+
+	@Override
+	public void scheduleRunnable(Runnable runnable, long delay, TimeUnit unit) {
+		checkNotNull(runnable, "runnable");
+		checkNotNull(unit, "unit");
+		checkArgument(delay >= 0, "delay must be zero or larger");
+
+		actorSystem.scheduler().scheduleOnce(new FiniteDuration(delay, unit), runnable, getExecutionContext());
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
index c18906c..ce4f9d6 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RunAsync.java
@@ -36,6 +36,7 @@ public final class RunAsync implements Serializable {
 	private final long delay;
 
 	/**
+	 * Creates a new {@code RunAsync} message.
 	 * 
 	 * @param runnable  The Runnable to run.
 	 * @param delay     The delay in milliseconds. Zero indicates immediate execution.

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RegistrationResponse.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RegistrationResponse.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RegistrationResponse.java
new file mode 100644
index 0000000..2de560a
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RegistrationResponse.java
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.registration;
+
+import java.io.Serializable;
+
+/**
+ * Base class for responses given to registration attempts from {@link RetryingRegistration}.
+ */
+public abstract class RegistrationResponse implements Serializable {
+
+	private static final long serialVersionUID = 1L;
+
+	// ----------------------------------------------------------------------------
+	
+	/**
+	 * Base class for a successful registration. Concrete registration implementations
+	 * will typically extend this class to attach more information.
+	 */
+	public static class Success extends RegistrationResponse {
+		private static final long serialVersionUID = 1L;
+		
+		@Override
+		public String toString() {
+			return "Registration Successful";
+		}
+	}
+
+	// ----------------------------------------------------------------------------
+
+	/**
+	 * A rejected (declined) registration.
+	 */
+	public static final class Decline extends RegistrationResponse {
+		private static final long serialVersionUID = 1L;
+
+		/** the rejection reason */
+		private final String reason;
+
+		/**
+		 * Creates a new rejection message.
+		 * 
+		 * @param reason The reason for the rejection.
+		 */
+		public Decline(String reason) {
+			this.reason = reason != null ? reason : "(unknown)";
+		}
+
+		/**
+		 * Gets the reason for the rejection.
+		 */
+		public String getReason() {
+			return reason;
+		}
+
+		@Override
+		public String toString() {
+			return "Registration Declined (" + reason + ')';
+		}
+	}
+}
+
+
+
+
+
+
+
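
For illustration (not part of this commit), the dispatch a registering component performs on such a response; this mirrors the handling inside RetryingRegistration below:

    static void handle(RegistrationResponse response) {
        if (response instanceof RegistrationResponse.Success) {
            // concrete registrations typically carry extra data in a Success subclass
            System.out.println("registered: " + response);
        }
        else if (response instanceof RegistrationResponse.Decline) {
            String reason = ((RegistrationResponse.Decline) response).getReason();
            System.out.println("declined: " + reason);
        }
    }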

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
new file mode 100644
index 0000000..4c93684
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
@@ -0,0 +1,292 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.registration;
+
+import akka.dispatch.OnFailure;
+import akka.dispatch.OnSuccess;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.runtime.rpc.RpcGateway;
+import org.apache.flink.runtime.rpc.RpcService;
+
+import org.slf4j.Logger;
+
+import scala.concurrent.Future;
+import scala.concurrent.Promise;
+import scala.concurrent.impl.Promise.DefaultPromise;
+
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+
+/**
+ * This utility class implements the basis of registering one component at another component,
+ * for example registering the TaskExecutor at the ResourceManager.
+ * This {@code RetryingRegistration} implements both the initial address resolution
+ * and the retries-with-backoff strategy.
+ * 
+ * <p>The registration gives access to a future that is completed upon successful registration.
+ * The registration can be canceled, for example when the target it tries to register
+ * at loses leader status.
+ * 
+ * @param <Gateway> The type of the gateway to connect to.
+ * @param <Success> The type of the successful registration responses.
+ */
+public abstract class RetryingRegistration<Gateway extends RpcGateway, Success extends RegistrationResponse.Success> {
+
+	// ------------------------------------------------------------------------
+	//  default configuration values
+	// ------------------------------------------------------------------------
+
+	private static final long INITIAL_REGISTRATION_TIMEOUT_MILLIS = 100;
+
+	private static final long MAX_REGISTRATION_TIMEOUT_MILLIS = 30000;
+
+	private static final long ERROR_REGISTRATION_DELAY_MILLIS = 10000;
+
+	private static final long REFUSED_REGISTRATION_DELAY_MILLIS = 30000;
+
+	// ------------------------------------------------------------------------
+	// Fields
+	// ------------------------------------------------------------------------
+
+	private final Logger log;
+
+	private final RpcService rpcService;
+
+	private final String targetName;
+
+	private final Class<Gateway> targetType;
+
+	private final String targetAddress;
+
+	private final UUID leaderId;
+
+	private final Promise<Tuple2<Gateway, Success>> completionPromise;
+
+	private final long initialRegistrationTimeout;
+
+	private final long maxRegistrationTimeout;
+
+	private final long delayOnError;
+
+	private final long delayOnRefusedRegistration;
+
+	private volatile boolean canceled;
+
+	// ------------------------------------------------------------------------
+
+	public RetryingRegistration(
+			Logger log,
+			RpcService rpcService,
+			String targetName,
+			Class<Gateway> targetType,
+			String targetAddress,
+			UUID leaderId) {
+		this(log, rpcService, targetName, targetType, targetAddress, leaderId,
+				INITIAL_REGISTRATION_TIMEOUT_MILLIS, MAX_REGISTRATION_TIMEOUT_MILLIS,
+				ERROR_REGISTRATION_DELAY_MILLIS, REFUSED_REGISTRATION_DELAY_MILLIS);
+	}
+
+	public RetryingRegistration(
+			Logger log,
+			RpcService rpcService,
+			String targetName, 
+			Class<Gateway> targetType,
+			String targetAddress,
+			UUID leaderId,
+			long initialRegistrationTimeout,
+			long maxRegistrationTimeout,
+			long delayOnError,
+			long delayOnRefusedRegistration) {
+
+		checkArgument(initialRegistrationTimeout > 0, "initial registration timeout must be greater than zero");
+		checkArgument(maxRegistrationTimeout > 0, "maximum registration timeout must be greater than zero");
+		checkArgument(delayOnError >= 0, "delay on error must be non-negative");
+		checkArgument(delayOnRefusedRegistration >= 0, "delay on refused registration must be non-negative");
+
+		this.log = checkNotNull(log);
+		this.rpcService = checkNotNull(rpcService);
+		this.targetName = checkNotNull(targetName);
+		this.targetType = checkNotNull(targetType);
+		this.targetAddress = checkNotNull(targetAddress);
+		this.leaderId = checkNotNull(leaderId);
+		this.initialRegistrationTimeout = initialRegistrationTimeout;
+		this.maxRegistrationTimeout = maxRegistrationTimeout;
+		this.delayOnError = delayOnError;
+		this.delayOnRefusedRegistration = delayOnRefusedRegistration;
+
+		this.completionPromise = new DefaultPromise<>();
+	}
+
+	// ------------------------------------------------------------------------
+	//  completion and cancellation
+	// ------------------------------------------------------------------------
+
+	public Future<Tuple2<Gateway, Success>> getFuture() {
+		return completionPromise.future();
+	}
+
+	/**
+	 * Cancels the registration procedure.
+	 */
+	public void cancel() {
+		canceled = true;
+	}
+
+	/**
+	 * Checks if the registration was canceled.
+	 * @return True if the registration was canceled, false otherwise.
+	 */
+	public boolean isCanceled() {
+		return canceled;
+	}
+
+	// ------------------------------------------------------------------------
+	//  registration
+	// ------------------------------------------------------------------------
+
+	protected abstract Future<RegistrationResponse> invokeRegistration(
+			Gateway gateway, UUID leaderId, long timeoutMillis) throws Exception;
+
+	/**
+	 * This method resolves the target address to a callable gateway and starts the
+	 * registration after that.
+	 */
+	@SuppressWarnings("unchecked")
+	public void startRegistration() {
+		try {
+			// trigger resolution of the resource manager address to a callable gateway
+			Future<Gateway> resourceManagerFuture = rpcService.connect(targetAddress, targetType);
+	
+			// upon success, start the registration attempts
+			resourceManagerFuture.onSuccess(new OnSuccess<Gateway>() {
+				@Override
+				public void onSuccess(Gateway result) {
+					log.info("Resolved {} address, beginning registration", targetName);
+					register(result, 1, initialRegistrationTimeout);
+				}
+			}, rpcService.getExecutionContext());
+	
+			// upon failure, retry, unless this is cancelled
+			resourceManagerFuture.onFailure(new OnFailure() {
+				@Override
+				public void onFailure(Throwable failure) {
+					if (!isCanceled()) {
+						log.warn("Could not resolve {} address {}, retrying...", targetName, targetAddress);
+						startRegistration();
+					}
+				}
+			}, rpcService.getExecutionContext());
+		}
+		catch (Throwable t) {
+			cancel();
+			completionPromise.tryFailure(t);
+		}
+	}
+
+	/**
+	 * This method performs a registration attempt and triggers either a success notification or a retry,
+	 * depending on the result.
+	 */
+	@SuppressWarnings("unchecked")
+	private void register(final Gateway gateway, final int attempt, final long timeoutMillis) {
+		// eager check for canceling to avoid some unnecessary work
+		if (canceled) {
+			return;
+		}
+
+		try {
+			log.info("Registration at {} attempt {} (timeout={}ms)", targetName, attempt, timeoutMillis);
+			Future<RegistrationResponse> registrationFuture = invokeRegistration(gateway, leaderId, timeoutMillis);
+	
+			// if the registration was successful, complete the future with the result
+			registrationFuture.onSuccess(new OnSuccess<RegistrationResponse>() {
+				
+				@Override
+				public void onSuccess(RegistrationResponse result) throws Throwable {
+					if (!isCanceled()) {
+						if (result instanceof RegistrationResponse.Success) {
+							// registration successful!
+							Success success = (Success) result;
+							completionPromise.success(new Tuple2<>(gateway, success));
+						}
+						else {
+							// registration refused or unknown
+							if (result instanceof RegistrationResponse.Decline) {
+								RegistrationResponse.Decline decline = (RegistrationResponse.Decline) result;
+								log.info("Registration at {} was declined: {}", targetName, decline.getReason());
+							} else {
+								log.error("Received unknown response to registration attempt: " + result);
+							}
+
+							log.info("Pausing and re-attempting registration in {} ms", delayOnRefusedRegistration);
+							registerLater(gateway, 1, initialRegistrationTimeout, delayOnRefusedRegistration);
+						}
+					}
+				}
+			}, rpcService.getExecutionContext());
+	
+			// upon failure, retry
+			registrationFuture.onFailure(new OnFailure() {
+				@Override
+				public void onFailure(Throwable failure) {
+					if (!isCanceled()) {
+						if (failure instanceof TimeoutException) {
+							// we simply have not received a response in time. maybe the timeout was
+							// very low (initial fast registration attempts), maybe the target endpoint is
+							// currently down.
+							if (log.isDebugEnabled()) {
+								log.debug("Registration at {} ({}) attempt {} timed out after {} ms",
+										targetName, targetAddress, attempt, timeoutMillis);
+							}
+	
+							long newTimeoutMillis = Math.min(2 * timeoutMillis, maxRegistrationTimeout);
+							register(gateway, attempt + 1, newTimeoutMillis);
+						}
+						else {
+							// a serious failure occurred. we still should not give up, but keep trying
+							log.error("Registration at " + targetName + " failed due to an error", failure);
+							log.info("Pausing and re-attempting registration in {} ms", delayOnError);
+	
+							registerLater(gateway, 1, initialRegistrationTimeout, delayOnError);
+						}
+					}
+				}
+			}, rpcService.getExecutionContext());
+		}
+		catch (Throwable t) {
+			cancel();
+			completionPromise.tryFailure(t);
+		}
+	}
+
+	private void registerLater(final Gateway gateway, final int attempt, final long timeoutMillis, long delay) {
+		rpcService.scheduleRunnable(new Runnable() {
+			@Override
+			public void run() {
+				register(gateway, attempt, timeoutMillis);
+			}
+		}, delay, TimeUnit.MILLISECONDS);
+	}
+}
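
The registration timeout doubles on every timed-out attempt and is capped at the maximum; a self-contained sketch of the resulting schedule (constants copied from the class above; not part of this commit):

    long timeoutMillis = 100;      // INITIAL_REGISTRATION_TIMEOUT_MILLIS
    final long maxMillis = 30000;  // MAX_REGISTRATION_TIMEOUT_MILLIS
    for (int attempt = 1; attempt <= 10; attempt++) {
        System.out.printf("attempt %d: timeout %d ms%n", attempt, timeoutMillis);
        timeoutMillis = Math.min(2 * timeoutMillis, maxMillis);  // same capping as in register(...)
    }
    // prints 100, 200, 400, ..., 25600, 30000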

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
index 729ef0c..6f34465 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManager.java
@@ -19,19 +19,24 @@
 package org.apache.flink.runtime.rpc.resourcemanager;
 
 import akka.dispatch.Mapper;
+
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
 import org.apache.flink.runtime.instance.InstanceID;
 import org.apache.flink.runtime.rpc.RpcMethod;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcService;
 import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
 import org.apache.flink.runtime.rpc.jobmaster.JobMasterGateway;
+import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorRegistrationSuccess;
 import org.apache.flink.util.Preconditions;
+
 import scala.concurrent.ExecutionContext;
 import scala.concurrent.ExecutionContext$;
 import scala.concurrent.Future;
 
 import java.util.HashMap;
 import java.util.Map;
+import java.util.UUID;
 import java.util.concurrent.ExecutorService;
 
 /**
@@ -93,4 +98,22 @@ public class ResourceManager extends RpcEndpoint<ResourceManagerGateway> {
 		System.out.println("SlotRequest: " + slotRequest);
 		return new SlotAssignment();
 	}
+
+	/**
+	 * Registers a TaskExecutor at the ResourceManager.
+	 *
+	 * @param resourceManagerLeaderId  The fencing token for the ResourceManager leader 
+	 * @param taskExecutorAddress      The address of the TaskExecutor that registers
+	 * @param resourceID               The resource ID of the TaskExecutor that registers
+	 *
+	 * @return The response by the ResourceManager.
+	 */
+	@RpcMethod
+	public org.apache.flink.runtime.rpc.registration.RegistrationResponse registerTaskExecutor(
+			UUID resourceManagerLeaderId,
+			String taskExecutorAddress,
+			ResourceID resourceID) {
+
+		return new TaskExecutorRegistrationSuccess(new InstanceID(), 5000);
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java
index 464a261..afddb01 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/resourcemanager/ResourceManagerGateway.java
@@ -18,14 +18,18 @@
 
 package org.apache.flink.runtime.rpc.resourcemanager;
 
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
 import org.apache.flink.runtime.rpc.RpcGateway;
 import org.apache.flink.runtime.rpc.RpcTimeout;
 import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
+
 import scala.concurrent.Future;
 import scala.concurrent.duration.FiniteDuration;
 
+import java.util.UUID;
+
 /**
- * {@link ResourceManager} rpc gateway interface.
+ * The {@link ResourceManager}'s RPC gateway interface.
  */
 public interface ResourceManagerGateway extends RpcGateway {
 
@@ -55,4 +59,19 @@ public interface ResourceManagerGateway extends RpcGateway {
 	 * @return Future slot assignment
 	 */
 	Future<SlotAssignment> requestSlot(SlotRequest slotRequest);
+
+	/**
+	 * Registers a TaskExecutor at the ResourceManager.
+	 *
+	 * @param resourceManagerLeaderId  The fencing token for the ResourceManager leader 
+	 * @param taskExecutorAddress      The address of the TaskExecutor that registers
+	 * @param resourceID               The resource ID of the TaskExecutor that registers
+	 * @param timeout                  The timeout for the response.
+	 * 
+	 * @return The future to the response by the ResourceManager.
+	 */
+	Future<org.apache.flink.runtime.rpc.registration.RegistrationResponse> registerTaskExecutor(
+			UUID resourceManagerLeaderId,
+			String taskExecutorAddress,
+			ResourceID resourceID,
+			@RpcTimeout FiniteDuration timeout);
 }
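
For illustration (not part of this commit), a direct invocation of this gateway method; this is essentially what ResourceManagerRegistration.invokeRegistration further below does:

    static Future<RegistrationResponse> register(
            ResourceManagerGateway resourceManager,
            UUID leaderId, String taskExecutorAddress, ResourceID resourceID) {
        FiniteDuration timeout = new FiniteDuration(100, TimeUnit.MILLISECONDS);
        return resourceManager.registerTaskExecutor(
                leaderId, taskExecutorAddress, resourceID, timeout);
    }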

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/SlotReport.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/SlotReport.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/SlotReport.java
new file mode 100644
index 0000000..e42fa4a
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/SlotReport.java
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.taskexecutor;
+
+import java.io.Serializable;
+
+/**
+ * A report about the current status of all slots of the TaskExecutor, describing
+ * which slots are available and allocated, and what jobs (JobManagers) the allocated slots
+ * have been allocated to.
+ */
+public class SlotReport implements Serializable {
+
+	private static final long serialVersionUID = 1L;
+
+	// ------------------------------------------------------------------------
+	
+	@Override
+	public String toString() {
+		return "SlotReport";
+	}
+}

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
index 3a7dd9f..1a637bb 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutor.java
@@ -18,67 +18,152 @@
 
 package org.apache.flink.runtime.rpc.taskexecutor;
 
-import akka.dispatch.ExecutionContexts$;
-import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
-import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
-import org.apache.flink.runtime.messages.Acknowledge;
-import org.apache.flink.runtime.rpc.RpcMethod;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalListener;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcMethod;
 import org.apache.flink.runtime.rpc.RpcService;
-import org.apache.flink.util.Preconditions;
-import scala.concurrent.ExecutionContext;
 
-import java.util.HashSet;
-import java.util.Set;
-import java.util.concurrent.ExecutorService;
+import java.util.UUID;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
  * TaskExecutor implementation. The task executor is responsible for the execution of multiple
  * {@link org.apache.flink.runtime.taskmanager.Task}.
- *
- * It offers the following methods as part of its rpc interface to interact with him remotely:
- * <ul>
- *     <li>{@link #executeTask(TaskDeploymentDescriptor)} executes a given task on the TaskExecutor</li>
- *     <li>{@link #cancelTask(ExecutionAttemptID)} cancels a given task identified by the {@link ExecutionAttemptID}</li>
- * </ul>
  */
 public class TaskExecutor extends RpcEndpoint<TaskExecutorGateway> {
-	private final ExecutionContext executionContext;
-	private final Set<ExecutionAttemptID> tasks = new HashSet<>();
 
-	public TaskExecutor(RpcService rpcService, ExecutorService executorService) {
+	/** The unique resource ID of this TaskExecutor */
+	private final ResourceID resourceID;
+
+	/** The access to the leader election and metadata storage services */
+	private final HighAvailabilityServices haServices;
+
+	// --------- resource manager --------
+
+	private TaskExecutorToResourceManagerConnection resourceManagerConnection;
+
+	// ------------------------------------------------------------------------
+
+	public TaskExecutor(
+			RpcService rpcService,
+			HighAvailabilityServices haServices,
+			ResourceID resourceID) {
+
 		super(rpcService);
-		this.executionContext = ExecutionContexts$.MODULE$.fromExecutor(
-			Preconditions.checkNotNull(executorService));
+
+		this.haServices = checkNotNull(haServices);
+		this.resourceID = checkNotNull(resourceID);
+	}
+
+	// ------------------------------------------------------------------------
+	//  Properties
+	// ------------------------------------------------------------------------
+
+	public ResourceID getResourceID() {
+		return resourceID;
+	}
+
+	// ------------------------------------------------------------------------
+	//  Life cycle
+	// ------------------------------------------------------------------------
+
+	@Override
+	public void start() {
+		// start by connecting to the ResourceManager
+		try {
+			haServices.getResourceManagerLeaderRetriever().start(new ResourceManagerLeaderListener());
+		} catch (Exception e) {
+			onFatalErrorAsync(e);
+		}
+	}
+
+
+	// ------------------------------------------------------------------------
+	//  RPC methods - ResourceManager related
+	// ------------------------------------------------------------------------
+
+	@RpcMethod
+	public void notifyOfNewResourceManagerLeader(String newLeaderAddress, UUID newLeaderId) {
+		if (resourceManagerConnection != null) {
+			if (newLeaderAddress != null) {
+				// the resource manager switched to a new leader
+				log.info("ResourceManager leader changed from {} to {}. Registering at new leader.",
+						resourceManagerConnection.getResourceManagerAddress(), newLeaderAddress);
+			}
+			else {
+				// address null means that the current leader is lost without a new leader being there, yet
+				log.info("Current ResourceManager {} lost leader status. Waiting for new ResourceManager leader.",
+						resourceManagerConnection.getResourceManagerAddress());
+			}
+
+			// drop the current connection or connection attempt
+			if (resourceManagerConnection != null) {
+				resourceManagerConnection.close();
+				resourceManagerConnection = null;
+			}
+		}
+
+		// establish a connection to the new leader
+		if (newLeaderAddress != null) {
+			log.info("Attempting to register at ResourceManager {}", newLeaderAddress);
+			resourceManagerConnection = 
+					new TaskExecutorToResourceManagerConnection(log, this, newLeaderAddress, newLeaderId);
+			resourceManagerConnection.start();
+		}
 	}
 
+	// ------------------------------------------------------------------------
+	//  Error handling
+	// ------------------------------------------------------------------------
+
 	/**
-	 * Execute the given task on the task executor. The task is described by the provided
-	 * {@link TaskDeploymentDescriptor}.
-	 *
-	 * @param taskDeploymentDescriptor Descriptor for the task to be executed
-	 * @return Acknowledge the start of the task execution
+	 * Notifies the TaskExecutor that a fatal error has occurred and it cannot proceed.
+	 * This method should be used when asynchronous threads want to notify the
+	 * TaskExecutor of a fatal error.
+	 * 
+	 * @param t The exception describing the fatal error
 	 */
-	@RpcMethod
-	public Acknowledge executeTask(TaskDeploymentDescriptor taskDeploymentDescriptor) {
-		tasks.add(taskDeploymentDescriptor.getExecutionId());
-		return Acknowledge.get();
+	void onFatalErrorAsync(final Throwable t) {
+		runAsync(new Runnable() {
+			@Override
+			public void run() {
+				onFatalError(t);
+			}
+		});
 	}
 
 	/**
-	 * Cancel a task identified by it {@link ExecutionAttemptID}. If the task cannot be found, then
-	 * the method throws an {@link Exception}.
-	 *
-	 * @param executionAttemptId Execution attempt ID identifying the task to be canceled.
-	 * @return Acknowledge the task canceling
-	 * @throws Exception if the task with the given execution attempt id could not be found
+	 * Notifies the TaskExecutor that a fatal error has occurred and it cannot proceed.
+	 * This method must only be called from within the TaskExecutor's main thread.
+	 * 
+	 * @param t The exception describing the fatal error
 	 */
-	@RpcMethod
-	public Acknowledge cancelTask(ExecutionAttemptID executionAttemptId) throws Exception {
-		if (tasks.contains(executionAttemptId)) {
-			return Acknowledge.get();
-		} else {
-			throw new Exception("Could not find task.");
+	void onFatalError(Throwable t) {
+		// to be determined, probably delegate to a fatal error handler that 
+		// would either log (mini cluster) or kill the process (yarn, mesos, ...)
+		log.error("FATAL ERROR", t);
+	}
+
+	// ------------------------------------------------------------------------
+	//  Utility classes
+	// ------------------------------------------------------------------------
+
+	/**
+	 * The listener for leader changes of the resource manager
+	 */
+	private class ResourceManagerLeaderListener implements LeaderRetrievalListener {
+
+		@Override
+		public void notifyLeaderAddress(String leaderAddress, UUID leaderSessionID) {
+			getSelf().notifyOfNewResourceManagerLeader(leaderAddress, leaderSessionID);
+		}
+
+		@Override
+		public void handleError(Exception exception) {
+			onFatalErrorAsync(exception);
 		}
 	}
 }
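
A hypothetical wiring sketch for the non-HA case (placeholder address; assumes a started RpcService and that ResourceID.generate() is available; not part of this commit):

    static TaskExecutor startTaskExecutor(RpcService rpcService) {
        HighAvailabilityServices ha =
            new NonHaServices("akka.tcp://flink@localhost:6123/user/resourcemanager");
        TaskExecutor taskExecutor = new TaskExecutor(rpcService, ha, ResourceID.generate());
        taskExecutor.start();  // starts leader retrieval, which triggers the registration
        return taskExecutor;
    }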

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java
index 450423e..b0b21b0 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorGateway.java
@@ -18,31 +18,18 @@
 
 package org.apache.flink.runtime.rpc.taskexecutor;
 
-import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
-import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
-import org.apache.flink.runtime.messages.Acknowledge;
 import org.apache.flink.runtime.rpc.RpcGateway;
-import scala.concurrent.Future;
+
+import java.util.UUID;
 
 /**
- * {@link TaskExecutor} rpc gateway interface
+ * {@link TaskExecutor} RPC gateway interface
  */
 public interface TaskExecutorGateway extends RpcGateway {
-	/**
-	 * Execute the given task on the task executor. The task is described by the provided
-	 * {@link TaskDeploymentDescriptor}.
-	 *
-	 * @param taskDeploymentDescriptor Descriptor for the task to be executed
-	 * @return Future acknowledge of the start of the task execution
-	 */
-	Future<Acknowledge> executeTask(TaskDeploymentDescriptor taskDeploymentDescriptor);
 
-	/**
-	 * Cancel a task identified by it {@link ExecutionAttemptID}. If the task cannot be found, then
-	 * the method throws an {@link Exception}.
-	 *
-	 * @param executionAttemptId Execution attempt ID identifying the task to be canceled.
-	 * @return Future acknowledge of the task canceling
-	 */
-	Future<Acknowledge> cancelTask(ExecutionAttemptID executionAttemptId);
+	// ------------------------------------------------------------------------
+	//  ResourceManager handlers
+	// ------------------------------------------------------------------------
+
+	void notifyOfNewResourceManagerLeader(String address, UUID resourceManagerLeaderId);
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorRegistrationSuccess.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorRegistrationSuccess.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorRegistrationSuccess.java
new file mode 100644
index 0000000..641102d
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorRegistrationSuccess.java
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.taskexecutor;
+
+import org.apache.flink.runtime.instance.InstanceID;
+import org.apache.flink.runtime.rpc.registration.RegistrationResponse;
+
+import java.io.Serializable;
+
+/**
+ * A successful registration response from the ResourceManager to a registration
+ * attempt by a TaskExecutor.
+ */
+public final class TaskExecutorRegistrationSuccess extends RegistrationResponse.Success implements Serializable {
+
+	private static final long serialVersionUID = 1L;
+
+	private final InstanceID registrationId;
+
+	private final long heartbeatInterval;
+
+	/**
+	 * Create a new {@code TaskExecutorRegistrationSuccess} message.
+	 * 
+	 * @param registrationId     The ID that the ResourceManager assigned the registration.
+	 * @param heartbeatInterval  The interval in which the ResourceManager will heartbeat the TaskExecutor.
+	 */
+	public TaskExecutorRegistrationSuccess(InstanceID registrationId, long heartbeatInterval) {
+		this.registrationId = registrationId;
+		this.heartbeatInterval = heartbeatInterval;
+	}
+
+	/**
+	 * Gets the ID that the ResourceManager assigned the registration.
+	 */
+	public InstanceID getRegistrationId() {
+		return registrationId;
+	}
+
+	/**
+	 * Gets the interval in which the ResourceManager will heartbeat the TaskExecutor.
+	 */
+	public long getHeartbeatInterval() {
+		return heartbeatInterval;
+	}
+
+	@Override
+	public String toString() {
+		return "TaskExecutorRegistrationSuccess (" + registrationId + " / " + heartbeatInterval + ')';
+	}
+
+}
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java
new file mode 100644
index 0000000..ef75862
--- /dev/null
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java
@@ -0,0 +1,194 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.taskexecutor;
+
+import akka.dispatch.OnFailure;
+import akka.dispatch.OnSuccess;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.instance.InstanceID;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.registration.RegistrationResponse;
+import org.apache.flink.runtime.rpc.registration.RetryingRegistration;
+import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
+
+import org.slf4j.Logger;
+
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+public class TaskExecutorToResourceManagerConnection {
+
+	/** the logger for all log messages of this class */
+	private final Logger log;
+
+	/** the TaskExecutor whose connection to the ResourceManager this represents */
+	private final TaskExecutor taskExecutor;
+
+	private final UUID resourceManagerLeaderId;
+
+	private final String resourceManagerAddress;
+
+	private ResourceManagerRegistration pendingRegistration;
+
+	private ResourceManagerGateway registeredResourceManager;
+
+	private InstanceID registrationId;
+
+	/** flag indicating that the connection is closed */
+	private volatile boolean closed;
+
+
+	public TaskExecutorToResourceManagerConnection(
+			Logger log,
+			TaskExecutor taskExecutor,
+			String resourceManagerAddress,
+			UUID resourceManagerLeaderId) {
+
+		this.log = checkNotNull(log);
+		this.taskExecutor = checkNotNull(taskExecutor);
+		this.resourceManagerAddress = checkNotNull(resourceManagerAddress);
+		this.resourceManagerLeaderId = checkNotNull(resourceManagerLeaderId);
+	}
+
+	// ------------------------------------------------------------------------
+	//  Life cycle
+	// ------------------------------------------------------------------------
+
+	@SuppressWarnings("unchecked")
+	public void start() {
+		checkState(!closed, "The connection is already closed");
+		checkState(!isRegistered() && pendingRegistration == null, "The connection is already started");
+
+		ResourceManagerRegistration registration = new ResourceManagerRegistration(
+				log, taskExecutor.getRpcService(),
+				resourceManagerAddress, resourceManagerLeaderId,
+				taskExecutor.getAddress(), taskExecutor.getResourceID());
+
+		Future<Tuple2<ResourceManagerGateway, TaskExecutorRegistrationSuccess>> future = registration.getFuture();
+		
+		future.onSuccess(new OnSuccess<Tuple2<ResourceManagerGateway, TaskExecutorRegistrationSuccess>>() {
+			@Override
+			public void onSuccess(Tuple2<ResourceManagerGateway, TaskExecutorRegistrationSuccess> result) {
+				registeredResourceManager = result.f0;
+				registrationId = result.f1.getRegistrationId();
+			}
+		}, taskExecutor.getMainThreadExecutionContext());
+		
+		// this future should only ever fail if there is a bug, not if the registration is declined
+		future.onFailure(new OnFailure() {
+			@Override
+			public void onFailure(Throwable failure) {
+				taskExecutor.onFatalError(failure);
+			}
+		}, taskExecutor.getMainThreadExecutionContext());
+	}
+
+	public void close() {
+		closed = true;
+
+		// make sure we do not keep re-trying forever
+		if (pendingRegistration != null) {
+			pendingRegistration.cancel();
+		}
+	}
+
+	public boolean isClosed() {
+		return closed;
+	}
+
+	// ------------------------------------------------------------------------
+	//  Properties
+	// ------------------------------------------------------------------------
+
+	public UUID getResourceManagerLeaderId() {
+		return resourceManagerLeaderId;
+	}
+
+	public String getResourceManagerAddress() {
+		return resourceManagerAddress;
+	}
+
+	/**
+	 * Gets the ResourceManagerGateway. This returns null until the registration is completed.
+	 */
+	public ResourceManagerGateway getResourceManager() {
+		return registeredResourceManager;
+	}
+
+	/**
+	 * Gets the ID under which the TaskExecutor is registered at the ResourceManager.
+	 * This returns null until the registration is completed.
+	 */
+	public InstanceID getRegistrationId() {
+		return registrationId;
+	}
+
+	public boolean isRegistered() {
+		return registeredResourceManager != null;
+	}
+
+	// ------------------------------------------------------------------------
+
+	@Override
+	public String toString() {
+		return String.format("Connection to ResourceManager %s (leaderId=%s)",
+				resourceManagerAddress, resourceManagerLeaderId); 
+	}
+
+	// ------------------------------------------------------------------------
+	//  Utilities
+	// ------------------------------------------------------------------------
+
+	static class ResourceManagerRegistration
+			extends RetryingRegistration<ResourceManagerGateway, TaskExecutorRegistrationSuccess> {
+
+		private final String taskExecutorAddress;
+		
+		private final ResourceID resourceID;
+
+		public ResourceManagerRegistration(
+				Logger log,
+				RpcService rpcService,
+				String targetAddress,
+				UUID leaderId,
+				String taskExecutorAddress,
+				ResourceID resourceID) {
+
+			super(log, rpcService, "ResourceManager", ResourceManagerGateway.class, targetAddress, leaderId);
+			this.taskExecutorAddress = checkNotNull(taskExecutorAddress);
+			this.resourceID = checkNotNull(resourceID);
+		}
+
+		@Override
+		protected Future<RegistrationResponse> invokeRegistration(
+				ResourceManagerGateway resourceManager, UUID leaderId, long timeoutMillis) throws Exception {
+
+			FiniteDuration timeout = new FiniteDuration(timeoutMillis, TimeUnit.MILLISECONDS);
+			return resourceManager.registerTaskExecutor(leaderId, taskExecutorAddress, resourceID, timeout);
+		}
+	}
+}
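
A usage sketch of the class above (illustrative helper, not part of the commit; it assumes an already started TaskExecutor and a resource manager address plus leader id obtained from leader retrieval):

    import java.util.UUID;

    import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
    import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutor;
    import org.apache.flink.runtime.rpc.taskexecutor.TaskExecutorToResourceManagerConnection;

    import org.slf4j.LoggerFactory;

    public class ConnectionUsageSketch {

        // hypothetical helper, not part of the commit
        static void connectToResourceManager(TaskExecutor taskExecutor, String rmAddress, UUID rmLeaderId) {
            TaskExecutorToResourceManagerConnection connection =
                    new TaskExecutorToResourceManagerConnection(
                            LoggerFactory.getLogger(ConnectionUsageSketch.class),
                            taskExecutor, rmAddress, rmLeaderId);

            connection.start();    // kicks off the retrying registration against the RM
            // ... later, once the registration future has completed on the main thread:
            if (connection.isRegistered()) {
                ResourceManagerGateway rm = connection.getResourceManager(); // gateway for further RPCs
            }
            connection.close();    // cancels a still-pending registration on shutdown
        }
    }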

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
index fd55904..7b4ab89 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
@@ -20,15 +20,17 @@ package org.apache.flink.runtime.rpc.akka;
 
 import akka.actor.ActorSystem;
 import akka.util.Timeout;
+
+import org.apache.flink.core.testutils.OneShotLatch;
 import org.apache.flink.runtime.akka.AkkaUtils;
-import org.apache.flink.runtime.rpc.RpcEndpoint;
-import org.apache.flink.runtime.rpc.RpcGateway;
-import org.apache.flink.runtime.rpc.RpcService;
 import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
 import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
 import org.apache.flink.runtime.rpc.resourcemanager.ResourceManager;
 import org.apache.flink.util.TestLogger;
+
+import org.junit.AfterClass;
 import org.junit.Test;
+
 import scala.concurrent.duration.Deadline;
 import scala.concurrent.duration.FiniteDuration;
 
@@ -41,6 +43,49 @@ import static org.junit.Assert.assertTrue;
 
 public class AkkaRpcServiceTest extends TestLogger {
 
+	// ------------------------------------------------------------------------
+	//  shared test members
+	// ------------------------------------------------------------------------
+
+	private static ActorSystem actorSystem = AkkaUtils.createDefaultActorSystem();
+
+	private static AkkaRpcService akkaRpcService =
+			new AkkaRpcService(actorSystem, new Timeout(10000, TimeUnit.MILLISECONDS));
+
+	@AfterClass
+	public static void shutdown() {
+		akkaRpcService.stopService();
+		actorSystem.shutdown();
+	}
+
+	// ------------------------------------------------------------------------
+	//  tests
+	// ------------------------------------------------------------------------
+
+	@Test
+	public void testScheduleRunnable() throws Exception {
+		final OneShotLatch latch = new OneShotLatch();
+		final long delay = 100;
+		final long start = System.nanoTime();
+
+		akkaRpcService.scheduleRunnable(new Runnable() {
+			@Override
+			public void run() {
+				latch.trigger();
+			}
+		}, delay, TimeUnit.MILLISECONDS);
+
+		latch.await();
+		final long stop = System.nanoTime();
+
+		assertTrue("call was not properly delayed", ((stop - start) / 1000000) >= delay);
+	}
+
+	// ------------------------------------------------------------------------
+	//  specific component tests - should be moved to the test classes
+	//  for those components
+	// ------------------------------------------------------------------------
+
 	/**
 	 * Tests that the {@link JobMaster} can connect to the {@link ResourceManager} using the
 	 * {@link AkkaRpcService}.

http://git-wip-us.apache.org/repos/asf/flink/blob/7db27883/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
index c96f4f6..9f9bab3 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorTest.java
@@ -18,93 +18,8 @@
 
 package org.apache.flink.runtime.rpc.taskexecutor;
 
-import org.apache.flink.api.common.ExecutionConfig;
-import org.apache.flink.api.common.JobID;
-import org.apache.flink.configuration.Configuration;
-import org.apache.flink.runtime.blob.BlobKey;
-import org.apache.flink.runtime.deployment.InputGateDeploymentDescriptor;
-import org.apache.flink.runtime.deployment.ResultPartitionDeploymentDescriptor;
-import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
-import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
-import org.apache.flink.runtime.jobgraph.JobVertexID;
-import org.apache.flink.runtime.messages.Acknowledge;
-import org.apache.flink.runtime.rpc.MainThreadExecutor;
-import org.apache.flink.runtime.rpc.RpcEndpoint;
-import org.apache.flink.runtime.rpc.RpcGateway;
-import org.apache.flink.runtime.rpc.RpcService;
-import org.apache.flink.runtime.rpc.StartStoppable;
-import org.apache.flink.runtime.util.DirectExecutorService;
-import org.apache.flink.util.SerializedValue;
 import org.apache.flink.util.TestLogger;
-import org.junit.Test;
-import org.mockito.Matchers;
-import org.mockito.cglib.proxy.InvocationHandler;
-import org.mockito.cglib.proxy.Proxy;
-import scala.concurrent.Future;
-
-import java.net.URL;
-import java.util.Collections;
-
-import static org.junit.Assert.fail;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
 
 public class TaskExecutorTest extends TestLogger {
-
-	/**
-	 * Tests that we can deploy and cancel a task on the TaskExecutor without exceptions
-	 */
-	@Test
-	public void testTaskExecution() throws Exception {
-		RpcService testingRpcService = mock(RpcService.class);
-		InvocationHandler invocationHandler = mock(InvocationHandler.class);
-		Object selfGateway = Proxy.newProxyInstance(ClassLoader.getSystemClassLoader(), new Class<?>[] {TaskExecutorGateway.class, MainThreadExecutor.class, StartStoppable.class}, invocationHandler);
-		when(testingRpcService.startServer(Matchers.any(RpcEndpoint.class))).thenReturn((RpcGateway)selfGateway);
-
-		DirectExecutorService directExecutorService = new DirectExecutorService();
-		TaskExecutor taskExecutor = new TaskExecutor(testingRpcService, directExecutorService);
-		taskExecutor.start();
-
-		TaskDeploymentDescriptor tdd = new TaskDeploymentDescriptor(
-			new JobID(),
-			"Test job",
-			new JobVertexID(),
-			new ExecutionAttemptID(),
-			new SerializedValue<ExecutionConfig>(null),
-			"Test task",
-			0,
-			1,
-			0,
-			new Configuration(),
-			new Configuration(),
-			"Invokable",
-			Collections.<ResultPartitionDeploymentDescriptor>emptyList(),
-			Collections.<InputGateDeploymentDescriptor>emptyList(),
-			Collections.<BlobKey>emptyList(),
-			Collections.<URL>emptyList(),
-			0
-		);
-
-		Acknowledge ack = taskExecutor.executeTask(tdd);
-
-		ack = taskExecutor.cancelTask(tdd.getExecutionId());
-	}
-
-	/**
-	 * Tests that cancelling a non-existing task will return an exception
-	 */
-	@Test(expected=Exception.class)
-	public void testWrongTaskCancellation() throws Exception {
-		RpcService testingRpcService = mock(RpcService.class);
-		InvocationHandler invocationHandler = mock(InvocationHandler.class);
-		Object selfGateway = Proxy.newProxyInstance(ClassLoader.getSystemClassLoader(), new Class<?>[] {TaskExecutorGateway.class, MainThreadExecutor.class, StartStoppable.class}, invocationHandler);
-		when(testingRpcService.startServer(Matchers.any(RpcEndpoint.class))).thenReturn((RpcGateway)selfGateway);
-		DirectExecutorService directExecutorService = null;
-		TaskExecutor taskExecutor = new TaskExecutor(testingRpcService, directExecutorService);
-		taskExecutor.start();
-
-		taskExecutor.cancelTask(new ExecutionAttemptID());
-
-		fail("The cancellation should have thrown an exception.");
-	}
+	
 }


[58/89] [abbrv] flink git commit: [FLINK-4231] [java] Switch DistinctOperator from GroupReduce to Reduce

Posted by se...@apache.org.
[FLINK-4231] [java] Switch DistinctOperator from GroupReduce to Reduce

Rewrite the DistinctOperator using Reduce to support both the sort-based
combine and the recently added hash-based combine. This is configured by
the new method DistinctOperator.setCombineHint(CombineHint).

The tests for combinability are removed, as a Reduce is inherently
combinable.

This closes #2272
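
A usage sketch of the new hint from the DataSet API side (the class name is illustrative; only methods added or kept by this commit are used):

    import org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class DistinctHintExample {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            DataSet<String> words = env.fromElements("1", "2", "1", "3");

            // pick the hash-based combine instead of letting the optimizer choose
            words.distinct()
                 .setCombineHint(CombineHint.HASH)
                 .print();
        }
    }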


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/ad8e665f
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/ad8e665f
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/ad8e665f

Branch: refs/heads/flip-6
Commit: ad8e665f0607414b0ed50eab01e14c1446e86569
Parents: 28743cf
Author: Greg Hogan <co...@greghogan.com>
Authored: Mon Jul 18 11:50:44 2016 -0400
Committer: Greg Hogan <co...@greghogan.com>
Committed: Wed Aug 24 09:02:22 2016 -0400

----------------------------------------------------------------------
 .../operators/base/ReduceOperatorBase.java      |  7 +-
 .../api/java/operators/DistinctOperator.java    | 82 ++++++++++++--------
 .../api/java/operators/ReduceOperator.java      |  2 +-
 .../translation/DistinctTranslationTest.java    | 52 ++++---------
 .../optimizer/DistinctCompilationTest.java      | 66 ++++++++++++++--
 .../translation/DistinctTranslationTest.scala   | 52 -------------
 6 files changed, 128 insertions(+), 133 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/ad8e665f/flink-core/src/main/java/org/apache/flink/api/common/operators/base/ReduceOperatorBase.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/api/common/operators/base/ReduceOperatorBase.java b/flink-core/src/main/java/org/apache/flink/api/common/operators/base/ReduceOperatorBase.java
index 88a6fac..7828748 100644
--- a/flink-core/src/main/java/org/apache/flink/api/common/operators/base/ReduceOperatorBase.java
+++ b/flink-core/src/main/java/org/apache/flink/api/common/operators/base/ReduceOperatorBase.java
@@ -31,6 +31,7 @@ import org.apache.flink.api.common.operators.util.TypeComparable;
 import org.apache.flink.api.common.operators.util.UserCodeClassWrapper;
 import org.apache.flink.api.common.operators.util.UserCodeObjectWrapper;
 import org.apache.flink.api.common.operators.util.UserCodeWrapper;
+import org.apache.flink.api.common.typeinfo.AtomicType;
 import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.api.common.typeutils.CompositeType;
 import org.apache.flink.api.common.typeutils.TypeComparator;
@@ -191,7 +192,7 @@ public class ReduceOperatorBase<T, FT extends ReduceFunction<T>> extends SingleI
 
 		int[] inputColumns = getKeyColumns(0);
 
-		if (!(inputType instanceof CompositeType) && inputColumns.length > 0) {
+		if (!(inputType instanceof CompositeType) && inputColumns.length > 1) {
 			throw new InvalidProgramException("Grouping is only possible on composite types.");
 		}
 
@@ -202,7 +203,9 @@ public class ReduceOperatorBase<T, FT extends ReduceFunction<T>> extends SingleI
 
 		if (inputColumns.length > 0) {
 			boolean[] inputOrderings = new boolean[inputColumns.length];
-			TypeComparator<T> inputComparator = ((CompositeType<T>) inputType).createComparator(inputColumns, inputOrderings, 0, executionConfig);
+			TypeComparator<T> inputComparator = inputType instanceof AtomicType
+					? ((AtomicType<T>) inputType).createComparator(false, executionConfig)
+					: ((CompositeType<T>) inputType).createComparator(inputColumns, inputOrderings, 0, executionConfig);
 
 			Map<TypeComparable<T>, T> aggregateMap = new HashMap<TypeComparable<T>, T>(inputData.size() / 10);
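
A sketch of what the relaxed length check and the new AtomicType branch accept (illustrative class name; the collection environment is what drives executeOnCollections): a reduce grouped on an atomic type's single implicit key column now builds an AtomicType comparator instead of failing the composite-type check.

    import org.apache.flink.api.java.ExecutionEnvironment;

    public class AtomicDistinctSketch {
        public static void main(String[] args) throws Exception {
            // collection execution goes through ReduceOperatorBase.executeOnCollections
            ExecutionEnvironment env = ExecutionEnvironment.createCollectionsEnvironment();
            env.fromElements("1", "2", "1", "3")
               .distinct()   // one implicit key column on a non-composite type
               .print();     // 1, 2, 3 in some order
        }
    }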
 

http://git-wip-us.apache.org/repos/asf/flink/blob/ad8e665f/flink-java/src/main/java/org/apache/flink/api/java/operators/DistinctOperator.java
----------------------------------------------------------------------
diff --git a/flink-java/src/main/java/org/apache/flink/api/java/operators/DistinctOperator.java b/flink-java/src/main/java/org/apache/flink/api/java/operators/DistinctOperator.java
index ee1669d..267513d 100644
--- a/flink-java/src/main/java/org/apache/flink/api/java/operators/DistinctOperator.java
+++ b/flink-java/src/main/java/org/apache/flink/api/java/operators/DistinctOperator.java
@@ -20,19 +20,19 @@ package org.apache.flink.api.java.operators;
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.annotation.Public;
-import org.apache.flink.api.common.functions.GroupCombineFunction;
-import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.functions.ReduceFunction;
 import org.apache.flink.api.common.operators.Keys;
+import org.apache.flink.api.common.operators.Keys.SelectorFunctionKeys;
 import org.apache.flink.api.common.operators.Operator;
 import org.apache.flink.api.common.operators.SingleInputSemanticProperties;
 import org.apache.flink.api.common.operators.UnaryOperatorInformation;
-import org.apache.flink.api.common.operators.base.GroupReduceOperatorBase;
+import org.apache.flink.api.common.operators.base.ReduceOperatorBase;
+import org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint;
 import org.apache.flink.api.common.typeinfo.TypeInformation;
-import org.apache.flink.api.common.operators.Keys.SelectorFunctionKeys;
-import org.apache.flink.api.java.operators.translation.PlanUnwrappingReduceGroupOperator;
-import org.apache.flink.api.java.tuple.Tuple2;
-import org.apache.flink.util.Collector;
 import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.operators.translation.PlanUnwrappingReduceOperator;
+import org.apache.flink.api.java.tuple.Tuple2;
 
 /**
  * This operator represents the application of a "distinct" function on a data set, and the
@@ -47,6 +47,8 @@ public class DistinctOperator<T> extends SingleInputOperator<T, T, DistinctOpera
 
 	private final String distinctLocationName;
 
+	private CombineHint hint = CombineHint.OPTIMIZER_CHOOSES;
+
 	public DistinctOperator(DataSet<T> input, Keys<T> keys, String distinctLocationName) {
 		super(input, input.getType());
 
@@ -61,9 +63,9 @@ public class DistinctOperator<T> extends SingleInputOperator<T, T, DistinctOpera
 	}
 
 	@Override
-	protected org.apache.flink.api.common.operators.base.GroupReduceOperatorBase<?, T, ?> translateToDataFlow(Operator<T> input) {
+	protected org.apache.flink.api.common.operators.SingleInputOperator<?, T, ?> translateToDataFlow(Operator<T> input) {
 
-		final GroupReduceFunction<T, T> function = new DistinctFunction<>();
+		final ReduceFunction<T> function = new DistinctFunction<>();
 
 		String name = getName() != null ? getName() : "Distinct at " + distinctLocationName;
 
@@ -71,10 +73,10 @@ public class DistinctOperator<T> extends SingleInputOperator<T, T, DistinctOpera
 
 			int[] logicalKeyPositions = keys.computeLogicalKeyPositions();
 			UnaryOperatorInformation<T, T> operatorInfo = new UnaryOperatorInformation<>(getInputType(), getResultType());
-			GroupReduceOperatorBase<T, T, GroupReduceFunction<T, T>> po =
-					new GroupReduceOperatorBase<>(function, operatorInfo, logicalKeyPositions, name);
+			ReduceOperatorBase<T, ReduceFunction<T>> po =
+					new ReduceOperatorBase<>(function, operatorInfo, logicalKeyPositions, name);
 
-			po.setCombinable(true);
+			po.setCombineHint(hint);
 			po.setInput(input);
 			po.setParallelism(getParallelism());
 
@@ -96,10 +98,8 @@ public class DistinctOperator<T> extends SingleInputOperator<T, T, DistinctOpera
 			@SuppressWarnings("unchecked")
 			SelectorFunctionKeys<T, ?> selectorKeys = (SelectorFunctionKeys<T, ?>) keys;
 
-			PlanUnwrappingReduceGroupOperator<T, T, ?> po = translateSelectorFunctionDistinct(
-							selectorKeys, function, getResultType(), name, input);
-
-			po.setParallelism(this.getParallelism());
+			org.apache.flink.api.common.operators.SingleInputOperator<?, T, ?> po =
+				translateSelectorFunctionDistinct(selectorKeys, function, getResultType(), name, input, parallelism, hint);
 
 			return po;
 		}
@@ -108,41 +108,55 @@ public class DistinctOperator<T> extends SingleInputOperator<T, T, DistinctOpera
 		}
 	}
 
+	/**
+	 * Sets the strategy to use for the combine phase of the reduce.
+	 *
+	 * If this method is not called, then the default hint will be used.
+	 * ({@link org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint#OPTIMIZER_CHOOSES})
+	 *
+	 * @param strategy The hint to use.
+	 * @return The DistinctOperator object, for function call chaining.
+	 */
+	@PublicEvolving
+	public DistinctOperator<T> setCombineHint(CombineHint strategy) {
+		this.hint = strategy;
+		return this;
+	}
+
 	// --------------------------------------------------------------------------------------------
 
-	private static <IN, OUT, K> PlanUnwrappingReduceGroupOperator<IN, OUT, K> translateSelectorFunctionDistinct(
+	private static <IN, K> org.apache.flink.api.common.operators.SingleInputOperator<?, IN, ?> translateSelectorFunctionDistinct(
 			SelectorFunctionKeys<IN, ?> rawKeys,
-			GroupReduceFunction<IN, OUT> function,
-			TypeInformation<OUT> outputType,
+			ReduceFunction<IN> function,
+			TypeInformation<IN> outputType,
 			String name,
-			Operator<IN> input)
+			Operator<IN> input,
+			int parallelism,
+			CombineHint hint)
 	{
 		@SuppressWarnings("unchecked")
 		final SelectorFunctionKeys<IN, K> keys = (SelectorFunctionKeys<IN, K>) rawKeys;
-		
+
 		TypeInformation<Tuple2<K, IN>> typeInfoWithKey = KeyFunctions.createTypeWithKey(keys);
 		Operator<Tuple2<K, IN>> keyedInput = KeyFunctions.appendKeyExtractor(input, keys);
-		
-		PlanUnwrappingReduceGroupOperator<IN, OUT, K> reducer =
-				new PlanUnwrappingReduceGroupOperator<>(function, keys, name, outputType, typeInfoWithKey, true);
+
+		PlanUnwrappingReduceOperator<IN, K> reducer =
+				new PlanUnwrappingReduceOperator<>(function, keys, name, outputType, typeInfoWithKey);
 		reducer.setInput(keyedInput);
+		reducer.setCombineHint(hint);
+		reducer.setParallelism(parallelism);
 
-		return reducer;
+		return KeyFunctions.appendKeyRemover(reducer, keys);
 	}
-	
+
 	@Internal
-	public static final class DistinctFunction<T> implements GroupReduceFunction<T, T>, GroupCombineFunction<T, T> {
+	public static final class DistinctFunction<T> implements ReduceFunction<T> {
 
 		private static final long serialVersionUID = 1L;
 
 		@Override
-		public void reduce(Iterable<T> values, Collector<T> out) {
-			out.collect(values.iterator().next());
-		}
-
-		@Override
-		public void combine(Iterable<T> values, Collector<T> out) {
-			out.collect(values.iterator().next());
+		public T reduce(T value1, T value2) throws Exception {
+			return value1;
 		}
 	}
 }
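
Since reduce(value1, value2) always returns its left argument, the function is associative and therefore safe to apply in both the partial (combine) and the final reduce. A tiny stand-alone check (illustrative, not part of the commit):

    import org.apache.flink.api.java.operators.DistinctOperator.DistinctFunction;

    public class DistinctFunctionCheck {
        public static void main(String[] args) throws Exception {
            DistinctFunction<String> distinct = new DistinctFunction<>();
            // both inputs belong to the same group, i.e. they are equal under the key
            System.out.println(distinct.reduce("a", "a")); // prints "a"
        }
    }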

http://git-wip-us.apache.org/repos/asf/flink/blob/ad8e665f/flink-java/src/main/java/org/apache/flink/api/java/operators/ReduceOperator.java
----------------------------------------------------------------------
diff --git a/flink-java/src/main/java/org/apache/flink/api/java/operators/ReduceOperator.java b/flink-java/src/main/java/org/apache/flink/api/java/operators/ReduceOperator.java
index e02b64f..42dcf05 100644
--- a/flink-java/src/main/java/org/apache/flink/api/java/operators/ReduceOperator.java
+++ b/flink-java/src/main/java/org/apache/flink/api/java/operators/ReduceOperator.java
@@ -165,7 +165,7 @@ public class ReduceOperator<IN> extends SingleInputUdfOperator<IN, IN, ReduceOpe
 	 * Sets the strategy to use for the combine phase of the reduce.
 	 *
 	 * If this method is not called, then the default hint will be used.
-	 * ({@link org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint.OPTIMIZER_CHOOSES})
+	 * ({@link org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint#OPTIMIZER_CHOOSES})
 	 *
 	 * @param strategy The hint to use.
 	 * @return The ReduceOperator object, for function call chaining.

http://git-wip-us.apache.org/repos/asf/flink/blob/ad8e665f/flink-java/src/test/java/org/apache/flink/api/java/operators/translation/DistinctTranslationTest.java
----------------------------------------------------------------------
diff --git a/flink-java/src/test/java/org/apache/flink/api/java/operators/translation/DistinctTranslationTest.java b/flink-java/src/test/java/org/apache/flink/api/java/operators/translation/DistinctTranslationTest.java
index 9824ee1..cbdac4a 100644
--- a/flink-java/src/test/java/org/apache/flink/api/java/operators/translation/DistinctTranslationTest.java
+++ b/flink-java/src/test/java/org/apache/flink/api/java/operators/translation/DistinctTranslationTest.java
@@ -21,14 +21,13 @@ package org.apache.flink.api.java.operators.translation;
 import org.apache.flink.api.common.Plan;
 import org.apache.flink.api.common.operators.GenericDataSinkBase;
 import org.apache.flink.api.common.operators.GenericDataSourceBase;
-import org.apache.flink.api.common.operators.base.GroupReduceOperatorBase;
 import org.apache.flink.api.common.operators.base.MapOperatorBase;
+import org.apache.flink.api.common.operators.base.ReduceOperatorBase;
 import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.api.java.DataSet;
 import org.apache.flink.api.java.ExecutionEnvironment;
 import org.apache.flink.api.java.functions.KeySelector;
 import org.apache.flink.api.java.io.DiscardingOutputFormat;
-import org.apache.flink.api.java.operators.DistinctOperator;
 import org.apache.flink.api.java.tuple.Tuple2;
 import org.apache.flink.api.java.tuple.Tuple3;
 import org.apache.flink.api.java.typeutils.TupleTypeInfo;
@@ -50,31 +49,6 @@ import static org.junit.Assert.fail;
 public class DistinctTranslationTest {
 
 	@Test
-	public void testCombinable() {
-		try {
-			ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-			
-			DataSet<String> input = env.fromElements("1", "2", "1", "3");
-			
-			
-			DistinctOperator<String> op = input.distinct(new KeySelector<String, String>() {
-				public String getKey(String value) { return value; }
-			});
-			
-			op.output(new DiscardingOutputFormat<String>());
-			
-			Plan p = env.createProgramPlan();
-			
-			GroupReduceOperatorBase<?, ?, ?> reduceOp = (GroupReduceOperatorBase<?, ?, ?>) p.getDataSinks().iterator().next().getInput();
-			assertTrue(reduceOp.isCombinable());
-		}
-		catch (Exception e) {
-			e.printStackTrace();
-			fail(e.getMessage());
-		}
-	}
-
-	@Test
 	public void translateDistinctPlain() {
 		try {
 			final int parallelism = 8;
@@ -88,8 +62,8 @@ public class DistinctTranslationTest {
 
 			GenericDataSinkBase<?> sink = p.getDataSinks().iterator().next();
 
-			// currently distinct is translated to a GroupReduce
-			GroupReduceOperatorBase<?, ?, ?> reducer = (GroupReduceOperatorBase<?, ?, ?>) sink.getInput();
+			// currently distinct is translated to a Reduce
+			ReduceOperatorBase<?, ?> reducer = (ReduceOperatorBase<?, ?>) sink.getInput();
 
 			// check types
 			assertEquals(initialData.getType(), reducer.getOperatorInfo().getInputType());
@@ -124,8 +98,8 @@ public class DistinctTranslationTest {
 
 			GenericDataSinkBase<?> sink = p.getDataSinks().iterator().next();
 
-			// currently distinct is translated to a GroupReduce
-			GroupReduceOperatorBase<?, ?, ?> reducer = (GroupReduceOperatorBase<?, ?, ?>) sink.getInput();
+			// currently distinct is translated to a Reduce
+			ReduceOperatorBase<?, ?> reducer = (ReduceOperatorBase<?, ?>) sink.getInput();
 
 			// check types
 			assertEquals(initialData.getType(), reducer.getOperatorInfo().getInputType());
@@ -160,8 +134,8 @@ public class DistinctTranslationTest {
 
 			GenericDataSinkBase<?> sink = p.getDataSinks().iterator().next();
 
-			// currently distinct is translated to a GroupReduce
-			GroupReduceOperatorBase<?, ?, ?> reducer = (GroupReduceOperatorBase<?, ?, ?>) sink.getInput();
+			// currently distinct is translated to a Reduce
+			ReduceOperatorBase<?, ?> reducer = (ReduceOperatorBase<?, ?>) sink.getInput();
 
 			// check types
 			assertEquals(initialData.getType(), reducer.getOperatorInfo().getInputType());
@@ -200,7 +174,8 @@ public class DistinctTranslationTest {
 
 			GenericDataSinkBase<?> sink = p.getDataSinks().iterator().next();
 
-			PlanUnwrappingReduceGroupOperator<?, ?, ?> reducer = (PlanUnwrappingReduceGroupOperator<?, ?, ?>) sink.getInput();
+			MapOperatorBase<?, ?, ?> keyRemover = (MapOperatorBase<?, ?, ?>) sink.getInput();
+			PlanUnwrappingReduceOperator<?, ?> reducer = (PlanUnwrappingReduceOperator<?, ?>) keyRemover.getInput();
 			MapOperatorBase<?, ?, ?> keyExtractor = (MapOperatorBase<?, ?, ?>) reducer.getInput();
 
 			// check the parallelisms
@@ -216,7 +191,10 @@ public class DistinctTranslationTest {
 			assertEquals(keyValueInfo, keyExtractor.getOperatorInfo().getOutputType());
 
 			assertEquals(keyValueInfo, reducer.getOperatorInfo().getInputType());
-			assertEquals(initialData.getType(), reducer.getOperatorInfo().getOutputType());
+			assertEquals(keyValueInfo, reducer.getOperatorInfo().getOutputType());
+
+			assertEquals(keyValueInfo, keyRemover.getOperatorInfo().getInputType());
+			assertEquals(initialData.getType(), keyRemover.getOperatorInfo().getOutputType());
 
 			// check keys
 			assertEquals(KeyExtractingMapper.class, keyExtractor.getUserCodeWrapper().getUserCodeClass());
@@ -244,8 +222,8 @@ public class DistinctTranslationTest {
 
 			GenericDataSinkBase<?> sink = p.getDataSinks().iterator().next();
 
-			// currently distinct is translated to a GroupReduce
-			GroupReduceOperatorBase<?, ?, ?> reducer = (GroupReduceOperatorBase<?, ?, ?>) sink.getInput();
+			// currently distinct is translated to a Reduce
+			ReduceOperatorBase<?, ?> reducer = (ReduceOperatorBase<?, ?>) sink.getInput();
 
 			// check types
 			assertEquals(initialData.getType(), reducer.getOperatorInfo().getInputType());

http://git-wip-us.apache.org/repos/asf/flink/blob/ad8e665f/flink-optimizer/src/test/java/org/apache/flink/optimizer/DistinctCompilationTest.java
----------------------------------------------------------------------
diff --git a/flink-optimizer/src/test/java/org/apache/flink/optimizer/DistinctCompilationTest.java b/flink-optimizer/src/test/java/org/apache/flink/optimizer/DistinctCompilationTest.java
index 20a4ef6..89e0f21 100644
--- a/flink-optimizer/src/test/java/org/apache/flink/optimizer/DistinctCompilationTest.java
+++ b/flink-optimizer/src/test/java/org/apache/flink/optimizer/DistinctCompilationTest.java
@@ -18,6 +18,7 @@
 package org.apache.flink.optimizer;
 
 import org.apache.flink.api.common.Plan;
+import org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint;
 import org.apache.flink.api.common.operators.util.FieldList;
 import org.apache.flink.api.java.DataSet;
 import org.apache.flink.api.java.ExecutionEnvironment;
@@ -29,8 +30,8 @@ import org.apache.flink.optimizer.plan.OptimizedPlan;
 import org.apache.flink.optimizer.plan.SingleInputPlanNode;
 import org.apache.flink.optimizer.plan.SinkPlanNode;
 import org.apache.flink.optimizer.plan.SourcePlanNode;
-import org.apache.flink.runtime.operators.DriverStrategy;
 import org.apache.flink.optimizer.util.CompilerTestBase;
+import org.apache.flink.runtime.operators.DriverStrategy;
 import org.junit.Test;
 
 import static org.junit.Assert.assertEquals;
@@ -71,8 +72,8 @@ public class DistinctCompilationTest extends CompilerTestBase implements java.io
 			assertEquals(reduceNode, sinkNode.getInput().getSource());
 
 			// check that both reduce and combiner have the same strategy
-			assertEquals(DriverStrategy.SORTED_GROUP_REDUCE, reduceNode.getDriverStrategy());
-			assertEquals(DriverStrategy.SORTED_GROUP_COMBINE, combineNode.getDriverStrategy());
+			assertEquals(DriverStrategy.SORTED_REDUCE, reduceNode.getDriverStrategy());
+			assertEquals(DriverStrategy.SORTED_PARTIAL_REDUCE, combineNode.getDriverStrategy());
 
 			// check the keys
 			assertEquals(new FieldList(0, 1), reduceNode.getKeys(0));
@@ -93,6 +94,57 @@ public class DistinctCompilationTest extends CompilerTestBase implements java.io
 	}
 
 	@Test
+	public void testDistinctWithCombineHint() {
+		try {
+			ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+			env.setParallelism(8);
+
+			DataSet<Tuple2<String, Double>> data = env.readCsvFile("file:///will/never/be/read").types(String.class, Double.class)
+					.name("source").setParallelism(6);
+
+			data
+					.distinct().setCombineHint(CombineHint.HASH).name("reducer")
+					.output(new DiscardingOutputFormat<Tuple2<String, Double>>()).name("sink");
+
+			Plan p = env.createProgramPlan();
+			OptimizedPlan op = compileNoStats(p);
+
+			OptimizerPlanNodeResolver resolver = getOptimizerPlanNodeResolver(op);
+
+			// get the original nodes
+			SourcePlanNode sourceNode = resolver.getNode("source");
+			SingleInputPlanNode reduceNode = resolver.getNode("reducer");
+			SinkPlanNode sinkNode = resolver.getNode("sink");
+
+			// get the combiner
+			SingleInputPlanNode combineNode = (SingleInputPlanNode) reduceNode.getInput().getSource();
+
+			// check wiring
+			assertEquals(sourceNode, combineNode.getInput().getSource());
+			assertEquals(reduceNode, sinkNode.getInput().getSource());
+
+			// check that both reduce and combiner have the same strategy
+			assertEquals(DriverStrategy.SORTED_REDUCE, reduceNode.getDriverStrategy());
+			assertEquals(DriverStrategy.HASHED_PARTIAL_REDUCE, combineNode.getDriverStrategy());
+
+			// check the keys
+			assertEquals(new FieldList(0, 1), reduceNode.getKeys(0));
+			assertEquals(new FieldList(0, 1), combineNode.getKeys(0));
+			assertEquals(new FieldList(0, 1), reduceNode.getInput().getLocalStrategyKeys());
+
+			// check parallelism
+			assertEquals(6, sourceNode.getParallelism());
+			assertEquals(6, combineNode.getParallelism());
+			assertEquals(8, reduceNode.getParallelism());
+			assertEquals(8, sinkNode.getParallelism());
+		} catch (Exception e) {
+			System.err.println(e.getMessage());
+			e.printStackTrace();
+			fail(e.getClass().getSimpleName() + " in test: " + e.getMessage());
+		}
+	}
+
+	@Test
 	public void testDistinctWithSelectorFunctionKey() {
 		try {
 			ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
@@ -129,8 +181,8 @@ public class DistinctCompilationTest extends CompilerTestBase implements java.io
 			assertEquals(keyProjector, sinkNode.getInput().getSource());
 
 			// check that both reduce and combiner have the same strategy
-			assertEquals(DriverStrategy.SORTED_GROUP_REDUCE, reduceNode.getDriverStrategy());
-			assertEquals(DriverStrategy.SORTED_GROUP_COMBINE, combineNode.getDriverStrategy());
+			assertEquals(DriverStrategy.SORTED_REDUCE, reduceNode.getDriverStrategy());
+			assertEquals(DriverStrategy.SORTED_PARTIAL_REDUCE, combineNode.getDriverStrategy());
 
 			// check the keys
 			assertEquals(new FieldList(0), reduceNode.getKeys(0));
@@ -185,8 +237,8 @@ public class DistinctCompilationTest extends CompilerTestBase implements java.io
 			assertEquals(reduceNode, sinkNode.getInput().getSource());
 
 			// check that both reduce and combiner have the same strategy
-			assertEquals(DriverStrategy.SORTED_GROUP_REDUCE, reduceNode.getDriverStrategy());
-			assertEquals(DriverStrategy.SORTED_GROUP_COMBINE, combineNode.getDriverStrategy());
+			assertEquals(DriverStrategy.SORTED_REDUCE, reduceNode.getDriverStrategy());
+			assertEquals(DriverStrategy.SORTED_PARTIAL_REDUCE, combineNode.getDriverStrategy());
 
 			// check the keys
 			assertEquals(new FieldList(1), reduceNode.getKeys(0));

http://git-wip-us.apache.org/repos/asf/flink/blob/ad8e665f/flink-tests/src/test/scala/org/apache/flink/api/scala/operators/translation/DistinctTranslationTest.scala
----------------------------------------------------------------------
diff --git a/flink-tests/src/test/scala/org/apache/flink/api/scala/operators/translation/DistinctTranslationTest.scala b/flink-tests/src/test/scala/org/apache/flink/api/scala/operators/translation/DistinctTranslationTest.scala
deleted file mode 100644
index c540f61..0000000
--- a/flink-tests/src/test/scala/org/apache/flink/api/scala/operators/translation/DistinctTranslationTest.scala
+++ /dev/null
@@ -1,52 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.flink.api.scala.operators.translation
-
-import org.apache.flink.api.common.operators.base.GroupReduceOperatorBase
-import org.apache.flink.api.java.io.DiscardingOutputFormat
-import org.junit.Assert
-import org.junit.Test
-
-import org.apache.flink.api.scala._
-
-class DistinctTranslationTest {
-  @Test
-  def testCombinable(): Unit = {
-    try {
-      val env = ExecutionEnvironment.getExecutionEnvironment
-      val input = env.fromElements("1", "2", "1", "3")
-
-      val op = input.distinct { x => x}
-      op.output(new DiscardingOutputFormat[String])
-
-      val p = env.createProgramPlan()
-
-      val reduceOp =
-        p.getDataSinks.iterator.next.getInput.asInstanceOf[GroupReduceOperatorBase[_, _, _]]
-
-      Assert.assertTrue(reduceOp.isCombinable)
-    }
-    catch {
-      case e: Exception => {
-        e.printStackTrace()
-        Assert.fail(e.getMessage)
-      }
-    }
-  }
-}
-


[86/89] [abbrv] flink git commit: [FLINK-4400] [cluster mngmt] Implement leadership election among JobMasters

Posted by se...@apache.org.
[FLINK-4400] [cluster mngmt] Implement leadership election among JobMasters

Adapt related components to the changes in HighAvailabilityServices

Add comments for getJobMasterLeaderElectionService in HighAvailabilityServices

This closes #2377.
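
A sketch of how a contender uses the new election service (illustrative: the contender address is made up, and NonHaServices is assumed to keep its single-argument constructor taking the resource manager address):

    import java.util.UUID;

    import org.apache.flink.api.common.JobID;
    import org.apache.flink.runtime.highavailability.NonHaServices;
    import org.apache.flink.runtime.leaderelection.LeaderContender;
    import org.apache.flink.runtime.leaderelection.LeaderElectionService;

    public class ElectionSketch {
        public static void main(String[] args) throws Exception {
            NonHaServices haServices = new NonHaServices("resourceManagerAddress"); // hypothetical address
            final LeaderElectionService election =
                    haServices.getJobMasterLeaderElectionService(new JobID());

            election.start(new LeaderContender() {
                @Override
                public void grantLeadership(UUID leaderSessionID) {
                    // accept the leadership, mirroring JobMaster.grantJobMasterLeadership
                    election.confirmLeaderSessionID(leaderSessionID);
                }

                @Override
                public void revokeLeadership() { /* stop executing the job */ }

                @Override
                public String getAddress() { return "akka://flink/user/jobmaster"; } // hypothetical

                @Override
                public void handleError(Exception exception) { exception.printStackTrace(); }
            });
        }
    }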


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/9923b5e5
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/9923b5e5
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/9923b5e5

Branch: refs/heads/flip-6
Commit: 9923b5e50ce9d5bb66a57089860e1c6cb97a73f7
Parents: 23048b5
Author: xiaogang.sxg <xi...@alibaba-inc.com>
Authored: Wed Aug 17 13:46:00 2016 +0800
Committer: Stephan Ewen <se...@apache.org>
Committed: Thu Aug 25 20:21:05 2016 +0200

----------------------------------------------------------------------
 .../HighAvailabilityServices.java               |   9 +
 .../runtime/highavailability/NonHaServices.java |   8 +
 .../flink/runtime/rpc/jobmaster/JobMaster.java  | 318 +++++++++----------
 .../runtime/rpc/akka/AkkaRpcServiceTest.java    |  53 +---
 4 files changed, 179 insertions(+), 209 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/9923b5e5/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java
index 094d36f..73e4f1f 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java
@@ -18,6 +18,8 @@
 
 package org.apache.flink.runtime.highavailability;
 
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.leaderelection.LeaderElectionService;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
 
 /**
@@ -36,4 +38,11 @@ public interface HighAvailabilityServices {
 	 * Gets the leader retriever for the cluster's resource manager.
 	 */
 	LeaderRetrievalService getResourceManagerLeaderRetriever() throws Exception;
+
+	/**
+	 * Gets the leader election service for the given job.
+	 *
+	 * @param jobID The identifier of the job running the election.
+	 */
+	LeaderElectionService getJobMasterLeaderElectionService(JobID jobID) throws Exception;
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/9923b5e5/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java
index b8c2ed8..3d2769b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/NonHaServices.java
@@ -18,6 +18,9 @@
 
 package org.apache.flink.runtime.highavailability;
 
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.leaderelection.LeaderElectionService;
+import org.apache.flink.runtime.leaderelection.StandaloneLeaderElectionService;
 import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
 import org.apache.flink.runtime.leaderretrieval.StandaloneLeaderRetrievalService;
 
@@ -56,4 +59,9 @@ public class NonHaServices implements HighAvailabilityServices {
 	public LeaderRetrievalService getResourceManagerLeaderRetriever() throws Exception {
 		return new StandaloneLeaderRetrievalService(resourceManagerAddress, new UUID(0, 0));
 	}
+
+	@Override
+	public LeaderElectionService getJobMasterLeaderElectionService(JobID jobID) throws Exception {
+		return new StandaloneLeaderElectionService();
+	}
 }

http://git-wip-us.apache.org/repos/asf/flink/blob/9923b5e5/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
index e53cd68..49b200b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/jobmaster/JobMaster.java
@@ -18,68 +18,77 @@
 
 package org.apache.flink.runtime.rpc.jobmaster;
 
-import akka.dispatch.Futures;
-import akka.dispatch.Mapper;
-import akka.dispatch.OnComplete;
-import org.apache.flink.runtime.instance.InstanceID;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobmanager.RecoveryMode;
+import org.apache.flink.runtime.leaderelection.LeaderContender;
+import org.apache.flink.runtime.leaderelection.LeaderElectionService;
 import org.apache.flink.runtime.messages.Acknowledge;
 import org.apache.flink.runtime.rpc.RpcMethod;
-import org.apache.flink.runtime.rpc.resourcemanager.JobMasterRegistration;
-import org.apache.flink.runtime.rpc.resourcemanager.RegistrationResponse;
 import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
 import org.apache.flink.runtime.rpc.RpcEndpoint;
 import org.apache.flink.runtime.rpc.RpcService;
 import org.apache.flink.runtime.taskmanager.TaskExecutionState;
 import org.apache.flink.util.Preconditions;
-import scala.Tuple2;
-import scala.concurrent.ExecutionContext;
-import scala.concurrent.ExecutionContext$;
-import scala.concurrent.Future;
-import scala.concurrent.duration.Deadline;
-import scala.concurrent.duration.FiniteDuration;
 
 import java.util.UUID;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.ScheduledThreadPoolExecutor;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
 
 /**
  * JobMaster implementation. The job master is responsible for the execution of a single
  * {@link org.apache.flink.runtime.jobgraph.JobGraph}.
- *
+ * <p>
  * It offers the following methods as part of its rpc interface to interact with the JobMaster
  * remotely:
  * <ul>
- *     <li>{@link #registerAtResourceManager(String)} triggers the registration at the resource manager</li>
  *     <li>{@link #updateTaskExecutionState(TaskExecutionState)} updates the task execution state for
 * a given task</li>
  * </ul>
  */
 public class JobMaster extends RpcEndpoint<JobMasterGateway> {
-	/** Execution context for future callbacks */
-	private final ExecutionContext executionContext;
-
-	/** Execution context for scheduled runnables */
-	private final ScheduledExecutorService scheduledExecutorService;
-
-	private final FiniteDuration initialRegistrationTimeout = new FiniteDuration(500, TimeUnit.MILLISECONDS);
-	private final FiniteDuration maxRegistrationTimeout = new FiniteDuration(30, TimeUnit.SECONDS);
-	private final FiniteDuration registrationDuration = new FiniteDuration(365, TimeUnit.DAYS);
-	private final long failedRegistrationDelay = 10000;
 
 	/** Gateway to connected resource manager, null iff not connected */
 	private ResourceManagerGateway resourceManager = null;
 
-	/** UUID to filter out old registration runs */
-	private UUID currentRegistrationRun;
+	/** Logical representation of the job */
+	private final JobGraph jobGraph;
+	private final JobID jobID;
+
+	/** Configuration of the job */
+	private final Configuration configuration;
+	private final RecoveryMode recoveryMode;
+
+	/** Service to contend for and retrieve the leadership of JM and RM */
+	private final HighAvailabilityServices highAvailabilityServices;
+
+	/** Leader Management */
+	private LeaderElectionService leaderElectionService = null;
+	private UUID leaderSessionID;
+
+	/**
+	 * The JobMaster constructor.
+	 *
+	 * @param jobGraph The representation of the job's execution plan
+	 * @param configuration The job's configuration
+	 * @param rpcService The RPC service at which the JM serves
+	 * @param highAvailabilityService The cluster's HA services through which the JM can elect and retrieve leaders.
+	 */
+	public JobMaster(
+		JobGraph jobGraph,
+		Configuration configuration,
+		RpcService rpcService,
+		HighAvailabilityServices highAvailabilityService) {
 
-	public JobMaster(RpcService rpcService, ExecutorService executorService) {
 		super(rpcService);
-		executionContext = ExecutionContext$.MODULE$.fromExecutor(
-			Preconditions.checkNotNull(executorService));
-		scheduledExecutorService = new ScheduledThreadPoolExecutor(1);
+
+		this.jobGraph = Preconditions.checkNotNull(jobGraph);
+		this.jobID = Preconditions.checkNotNull(jobGraph.getJobID());
+
+		this.configuration = Preconditions.checkNotNull(configuration);
+		this.recoveryMode = RecoveryMode.fromConfig(configuration);
+
+		this.highAvailabilityServices = Preconditions.checkNotNull(highAvailabilityService);
 	}
 
 	public ResourceManagerGateway getResourceManager() {
@@ -87,6 +96,91 @@ public class JobMaster extends RpcEndpoint<JobMasterGateway> {
 	}
 
 	//----------------------------------------------------------------------------------------------
+	// Initialization methods
+	//----------------------------------------------------------------------------------------------
+	public void start() {
+		super.start();
+
+		// register at the election once the JM starts
+		registerAtElectionService();
+	}
+
+
+	//----------------------------------------------------------------------------------------------
+	// JobMaster Leadership methods
+	//----------------------------------------------------------------------------------------------
+
+	/**
+	 * Retrieves the election service and contends for leadership.
+	 */
+	private void registerAtElectionService() {
+		try {
+			leaderElectionService = highAvailabilityServices.getJobMasterLeaderElectionService(jobID);
+			leaderElectionService.start(new JobMasterLeaderContender());
+		} catch (Exception e) {
+			throw new RuntimeException("Failed to register at the JobMaster leader election", e);
+		}
+	}
+
+	/**
+	 * Starts the job execution once leadership is granted.
+	 *
+	 * @param newLeaderSessionID The identifier of the new leadership session
+	 */
+	public void grantJobMasterLeadership(final UUID newLeaderSessionID) {
+		runAsync(new Runnable() {
+			@Override
+			public void run() {
+				log.info("JobManager {} was granted leadership with session id {}.", getAddress(), newLeaderSessionID);
+
+				// The operation may block, but since the JM is idle before it is granted leadership,
+				// it is okay for the JM to wait here until the operation completes.
+				leaderSessionID = newLeaderSessionID;
+				leaderElectionService.confirmLeaderSessionID(newLeaderSessionID);
+
+				// TODO:: execute the job when the leadership is granted.
+			}
+		});
+	}
+
+	/**
+	 * Stop the execution when the leadership is revoked.
+	 * Stops the job execution when leadership is revoked.
+	public void revokeJobMasterLeadership() {
+		runAsync(new Runnable() {
+			@Override
+			public void run() {
+				log.info("JobManager {} was revoked leadership.", getAddress());
+
+				// TODO:: cancel the job's execution and notify all listeners
+				cancelAndClearEverything(new Exception("JobManager is no longer the leader."));
+
+				leaderSessionID = null;
+			}
+		});
+	}
+
+	/**
+	 * Handles errors occurring in the leader election service.
+	 *
+	 * @param exception Exception thrown in the leader election service
+	 */
+	public void onJobMasterElectionError(final Exception exception) {
+		runAsync(new Runnable() {
+			@Override
+			public void run() {
+				log.error("Received an error from the LeaderElectionService.", exception);
+
+				// TODO:: cancel the job's execution and shutdown the JM
+				cancelAndClearEverything(exception);
+
+				leaderSessionID = null;
+			}
+		});
+
+	}
+
+	//----------------------------------------------------------------------------------------------
 	// RPC methods
 	//----------------------------------------------------------------------------------------------
 
@@ -109,18 +203,7 @@ public class JobMaster extends RpcEndpoint<JobMasterGateway> {
 	 */
 	@RpcMethod
 	public void registerAtResourceManager(final String address) {
-		currentRegistrationRun = UUID.randomUUID();
-
-		Future<ResourceManagerGateway> resourceManagerFuture = getRpcService().connect(address, ResourceManagerGateway.class);
-
-		handleResourceManagerRegistration(
-			new JobMasterRegistration(getAddress()),
-			1,
-			resourceManagerFuture,
-			currentRegistrationRun,
-			initialRegistrationTimeout,
-			maxRegistrationTimeout,
-			registrationDuration.fromNow());
+		//TODO:: register at the RM
 	}
 
 	//----------------------------------------------------------------------------------------------
@@ -128,124 +211,37 @@ public class JobMaster extends RpcEndpoint<JobMasterGateway> {
 	//----------------------------------------------------------------------------------------------
 
 	/**
-	 * Helper method to handle the resource manager registration process. If a registration attempt
-	 * times out, then a new attempt with the doubled time out is initiated. The whole registration
-	 * process has a deadline. Once this deadline is overdue without successful registration, the
-	 * job master shuts down.
+	 * Cancels the current job and notifies all listeners of the job's cancellation.
 	 *
-	 * @param jobMasterRegistration Job master registration info which is sent to the resource
-	 *                              manager
-	 * @param attemptNumber Registration attempt number
-	 * @param resourceManagerFuture Future of the resource manager gateway
-	 * @param registrationRun UUID describing the current registration run
-	 * @param timeout Timeout of the last registration attempt
-	 * @param maxTimeout Maximum timeout between registration attempts
-	 * @param deadline Deadline for the registration
+	 * @param cause Cause of the cancellation.
 	 */
-	void handleResourceManagerRegistration(
-		final JobMasterRegistration jobMasterRegistration,
-		final int attemptNumber,
-		final Future<ResourceManagerGateway> resourceManagerFuture,
-		final UUID registrationRun,
-		final FiniteDuration timeout,
-		final FiniteDuration maxTimeout,
-		final Deadline deadline) {
-
-		// filter out concurrent registration runs
-		if (registrationRun.equals(currentRegistrationRun)) {
-
-			log.info("Start registration attempt #{}.", attemptNumber);
-
-			if (deadline.isOverdue()) {
-				// we've exceeded our registration deadline. This means that we have to shutdown the JobMaster
-				log.error("Exceeded registration deadline without successfully registering at the ResourceManager.");
-				shutDown();
-			} else {
-				Future<Tuple2<RegistrationResponse, ResourceManagerGateway>> registrationResponseFuture = resourceManagerFuture.flatMap(new Mapper<ResourceManagerGateway, Future<Tuple2<RegistrationResponse, ResourceManagerGateway>>>() {
-					@Override
-					public Future<Tuple2<RegistrationResponse, ResourceManagerGateway>> apply(ResourceManagerGateway resourceManagerGateway) {
-						return resourceManagerGateway.registerJobMaster(jobMasterRegistration, timeout).zip(Futures.successful(resourceManagerGateway));
-					}
-				}, executionContext);
-
-				registrationResponseFuture.onComplete(new OnComplete<Tuple2<RegistrationResponse, ResourceManagerGateway>>() {
-					@Override
-					public void onComplete(Throwable failure, Tuple2<RegistrationResponse, ResourceManagerGateway> tuple) throws Throwable {
-						if (failure != null) {
-							if (failure instanceof TimeoutException) {
-								// we haven't received an answer in the given timeout interval,
-								// so increase it and try again.
-								final FiniteDuration newTimeout = timeout.$times(2L).min(maxTimeout);
-
-								handleResourceManagerRegistration(
-									jobMasterRegistration,
-									attemptNumber + 1,
-									resourceManagerFuture,
-									registrationRun,
-									newTimeout,
-									maxTimeout,
-									deadline);
-							} else {
-								log.error("Received unknown error while registering at the ResourceManager.", failure);
-								shutDown();
-							}
-						} else {
-							final RegistrationResponse response = tuple._1();
-							final ResourceManagerGateway gateway = tuple._2();
-
-							if (response.isSuccess()) {
-								finishResourceManagerRegistration(gateway, response.getInstanceID());
-							} else {
-								log.info("The registration was refused. Try again.");
-
-								scheduledExecutorService.schedule(new Runnable() {
-									@Override
-									public void run() {
-										// we have to execute scheduled runnable in the main thread
-										// because we need consistency wrt currentRegistrationRun
-										runAsync(new Runnable() {
-											@Override
-											public void run() {
-												// our registration attempt was refused. Start over.
-												handleResourceManagerRegistration(
-													jobMasterRegistration,
-													1,
-													resourceManagerFuture,
-													registrationRun,
-													initialRegistrationTimeout,
-													maxTimeout,
-													deadline);
-											}
-										});
-									}
-								}, failedRegistrationDelay, TimeUnit.MILLISECONDS);
-							}
-						}
-					}
-				}, getMainThreadExecutionContext()); // use the main thread execution context to execute the call back in the main thread
-			}
-		} else {
-			log.info("Discard out-dated registration run.");
-		}
+	private void cancelAndClearEverything(Throwable cause) {
+		// currently, nothing to do here
 	}
 
-	/**
-	 * Finish the resource manager registration by setting the new resource manager gateway.
-	 *
-	 * @param resourceManager New resource manager gateway
-	 * @param instanceID Instance id assigned by the resource manager
-	 */
-	void finishResourceManagerRegistration(ResourceManagerGateway resourceManager, InstanceID instanceID) {
-		log.info("Successfully registered at the ResourceManager under instance id {}.", instanceID);
-		this.resourceManager = resourceManager;
-	}
+	// ------------------------------------------------------------------------
+	//  Utility classes
+	// ------------------------------------------------------------------------
+	private class JobMasterLeaderContender implements LeaderContender {
 
-	/**
-	 * Return if the job master is connected to a resource manager.
-	 *
-	 * @return true if the job master is connected to the resource manager
-	 */
-	public boolean isConnected() {
-		return resourceManager != null;
+		@Override
+		public void grantLeadership(UUID leaderSessionID) {
+			JobMaster.this.grantJobMasterLeadership(leaderSessionID);
+		}
+
+		@Override
+		public void revokeLeadership() {
+			JobMaster.this.revokeJobMasterLeadership();
+		}
+
+		@Override
+		public String getAddress() {
+			return JobMaster.this.getAddress();
+		}
+
+		@Override
+		public void handleError(Exception exception) {
+			onJobMasterElectionError(exception);
+		}
 	}
 }
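
Putting the new constructor together, a sketch of bootstrapping a JobMaster (illustrative: it assumes the JobGraph(String) constructor and an RpcService such as the AkkaRpcService used elsewhere in this thread):

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.runtime.highavailability.NonHaServices;
    import org.apache.flink.runtime.jobgraph.JobGraph;
    import org.apache.flink.runtime.rpc.RpcService;
    import org.apache.flink.runtime.rpc.jobmaster.JobMaster;

    public class JobMasterSketch {
        // hypothetical helper, not part of the commit
        public static JobMaster startJobMaster(RpcService rpcService) throws Exception {
            JobMaster jobMaster = new JobMaster(
                    new JobGraph("test job"),                     // assumes the JobGraph(String) constructor
                    new Configuration(),
                    rpcService,
                    new NonHaServices("resourceManagerAddress")); // hypothetical RM address

            jobMaster.start(); // contends for leadership via the HA services
            return jobMaster;
        }
    }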

http://git-wip-us.apache.org/repos/asf/flink/blob/9923b5e5/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
index 7b4ab89..2790cf8 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceTest.java
@@ -20,9 +20,12 @@ package org.apache.flink.runtime.rpc.akka;
 
 import akka.actor.ActorSystem;
 import akka.util.Timeout;
-
 import org.apache.flink.core.testutils.OneShotLatch;
+import org.apache.flink.configuration.Configuration;
 import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
+import org.apache.flink.runtime.highavailability.NonHaServices;
+import org.apache.flink.runtime.jobgraph.JobGraph;
 import org.apache.flink.runtime.rpc.jobmaster.JobMaster;
 import org.apache.flink.runtime.rpc.resourcemanager.ResourceManagerGateway;
 import org.apache.flink.runtime.rpc.resourcemanager.ResourceManager;
@@ -31,6 +34,7 @@ import org.apache.flink.util.TestLogger;
 import org.junit.AfterClass;
 import org.junit.Test;
 
+import org.mockito.Mockito;
 import scala.concurrent.duration.Deadline;
 import scala.concurrent.duration.FiniteDuration;
 
@@ -80,51 +84,4 @@ public class AkkaRpcServiceTest extends TestLogger {
 
 		assertTrue("call was not properly delayed", ((stop - start) / 1000000) >= delay);
 	}
-
-	// ------------------------------------------------------------------------
-	//  specific component tests - should be moved to the test classes
-	//  for those components
-	// ------------------------------------------------------------------------
-
-	/**
-	 * Tests that the {@link JobMaster} can connect to the {@link ResourceManager} using the
-	 * {@link AkkaRpcService}.
-	 */
-	@Test
-	public void testJobMasterResourceManagerRegistration() throws Exception {
-		Timeout akkaTimeout = new Timeout(10, TimeUnit.SECONDS);
-		ActorSystem actorSystem = AkkaUtils.createDefaultActorSystem();
-		ActorSystem actorSystem2 = AkkaUtils.createDefaultActorSystem();
-		AkkaRpcService akkaRpcService = new AkkaRpcService(actorSystem, akkaTimeout);
-		AkkaRpcService akkaRpcService2 = new AkkaRpcService(actorSystem2, akkaTimeout);
-		ExecutorService executorService = new ForkJoinPool();
-
-		ResourceManager resourceManager = new ResourceManager(akkaRpcService, executorService);
-		JobMaster jobMaster = new JobMaster(akkaRpcService2, executorService);
-
-		resourceManager.start();
-		jobMaster.start();
-
-		ResourceManagerGateway rm = resourceManager.getSelf();
-
-		assertTrue(rm instanceof AkkaGateway);
-
-		AkkaGateway akkaClient = (AkkaGateway) rm;
-
-		
-		jobMaster.registerAtResourceManager(AkkaUtils.getAkkaURL(actorSystem, akkaClient.getRpcEndpoint()));
-
-		// wait for successful registration
-		FiniteDuration timeout = new FiniteDuration(200, TimeUnit.SECONDS);
-		Deadline deadline = timeout.fromNow();
-
-		while (deadline.hasTimeLeft() && !jobMaster.isConnected()) {
-			Thread.sleep(100);
-		}
-
-		assertFalse(deadline.isOverdue());
-
-		jobMaster.shutDown();
-		resourceManager.shutDown();
-	}
 }
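
The test deleted above waits for an asynchronous registration by polling under a deadline (Scala's FiniteDuration/Deadline plus Thread.sleep). A minimal sketch of the same await-a-condition idiom in plain Java, assuming nothing beyond the JDK; the helper name is illustrative, not an existing utility:

    import java.util.concurrent.TimeUnit;
    import java.util.function.BooleanSupplier;

    final class PollingAwait {
        /** Polls {@code condition} every 100 ms until it holds or the timeout elapses. */
        static boolean await(BooleanSupplier condition, long timeout, TimeUnit unit)
                throws InterruptedException {
            final long deadlineNanos = System.nanoTime() + unit.toNanos(timeout);
            while (System.nanoTime() < deadlineNanos) {
                if (condition.getAsBoolean()) {
                    return true;
                }
                Thread.sleep(100);
            }
            return condition.getAsBoolean(); // one last check once the deadline is reached
        }
    }

With such a helper, the removed wait loop would shrink to a single assertion, e.g. assertTrue(PollingAwait.await(jobMaster::isConnected, 200, TimeUnit.SECONDS)), mirroring the loop the test used.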


[06/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/fig/job_and_execution_graph.svg
----------------------------------------------------------------------
diff --git a/docs/fig/job_and_execution_graph.svg b/docs/fig/job_and_execution_graph.svg
new file mode 100644
index 0000000..2f90ea1
--- /dev/null
+++ b/docs/fig/job_and_execution_graph.svg
@@ -0,0 +1,851 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   width="860.40625"
+   height="541.0625"
+   id="svg2"
+   version="1.1"
+   inkscape:version="0.48.5 r10040">
+  <defs
+     id="defs4" />
+  <sodipodi:namedview
+     id="base"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageopacity="0.0"
+     inkscape:pageshadow="2"
+     inkscape:zoom="0.35"
+     inkscape:cx="430.20675"
+     inkscape:cy="270.53212"
+     inkscape:document-units="px"
+     inkscape:current-layer="layer1"
+     showgrid="false"
+     fit-margin-top="0"
+     fit-margin-left="0"
+     fit-margin-right="0"
+     fit-margin-bottom="0"
+     inkscape:window-width="1600"
+     inkscape:window-height="838"
+     inkscape:window-x="1912"
+     inkscape:window-y="-8"
+     inkscape:window-maximized="1" />
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     inkscape:label="Layer 1"
+     inkscape:groupmode="layer"
+     id="layer1"
+     transform="translate(55.206749,-261.8318)">
+    <g
+       id="g2989"
+       transform="translate(-78.581749,159.5193)">
+      <path
+         id="path2991"
+         d="m 336.57135,443.36045 c 0,-3.6384 2.96324,-6.60164 6.60164,-6.60164 l 500.22462,0 c 3.63841,0 6.60165,2.96324 6.60165,6.60164 l 0,26.36907 c 0,3.63841 -2.96324,6.56414 -6.60165,6.56414 l -500.22462,0 c -3.6384,0 -6.60164,-2.92573 -6.60164,-6.56414 z"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path2993"
+         d="m 336.57135,443.36045 c 0,-3.6384 2.96324,-6.60164 6.60164,-6.60164 l 500.22462,0 c 3.63841,0 6.60165,2.96324 6.60165,6.60164 l 0,26.36907 c 0,3.63841 -2.96324,6.56414 -6.60165,6.56414 l -500.22462,0 c -3.6384,0 -6.60164,-2.92573 -6.60164,-6.56414 z"
+         style="fill:none;stroke:#ed7d31;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text2995"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="453.48053"
+         x="347.62067"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text2997"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="469.98465"
+         x="347.62067"
+         xml:space="preserve">Result</text>
+      <path
+         id="path2999"
+         d="m 336.57135,550.86223 c 0,-9.33982 7.57689,-16.8792 16.8792,-16.8792 l 479.6695,0 c 9.30232,0 16.87921,7.53938 16.87921,16.8792 l 0,67.51682 c 0,9.30232 -7.57689,16.87921 -16.87921,16.87921 l -479.6695,0 c -9.30231,0 -16.8792,-7.57689 -16.8792,-16.87921 z"
+         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3001"
+         d="m 336.57135,550.86223 c 0,-9.33982 7.57689,-16.8792 16.8792,-16.8792 l 479.6695,0 c 9.30232,0 16.87921,7.53938 16.87921,16.8792 l 0,67.51682 c 0,9.30232 -7.57689,16.87921 -16.87921,16.87921 l -479.6695,0 c -9.30231,0 -16.8792,-7.57689 -16.8792,-16.87921 z"
+         style="fill:none;stroke:#548235;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3003"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="581.19519"
+         x="350.63596"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3005"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="599.19965"
+         x="350.63596"
+         xml:space="preserve">Job Vertex</text>
+      <path
+         id="path3007"
+         d="m 336.57135,345.1985 c 0,-9.33983 7.57689,-16.87921 16.91671,-16.87921 l 479.59448,0 c 9.33983,0 16.91672,7.53938 16.91672,16.87921 l 0,67.62935 c 0,9.33982 -7.57689,16.91671 -16.91672,16.91671 l -479.59448,0 c -9.33982,0 -16.91671,-7.57689 -16.91671,-16.91671 z"
+         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3009"
+         d="m 336.57135,345.1985 c 0,-9.33983 7.57689,-16.87921 16.91671,-16.87921 l 479.59448,0 c 9.33983,0 16.91672,7.53938 16.91672,16.87921 l 0,67.62935 c 0,9.33982 -7.57689,16.91671 -16.91672,16.91671 l -479.59448,0 c -9.33982,0 -16.91671,-7.57689 -16.91671,-16.91671 z"
+         style="fill:none;stroke:#548235;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3011"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="375.60931"
+         x="350.63596"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3013"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="393.6138"
+         x="350.63596"
+         xml:space="preserve">Job Vertex</text>
+      <path
+         id="path3015"
+         d="m 336.57135,168.41696 c 0,-9.30232 7.57689,-16.87921 16.8792,-16.87921 l 479.6695,0 c 9.30232,0 16.87921,7.57689 16.87921,16.87921 l 0,67.51682 c 0,9.33982 -7.57689,16.8792 -16.87921,16.8792 l -479.6695,0 c -9.30231,0 -16.8792,-7.53938 -16.8792,-16.8792 z"
+         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3017"
+         d="m 336.57135,168.41696 c 0,-9.30232 7.57689,-16.87921 16.8792,-16.87921 l 479.6695,0 c 9.30232,0 16.87921,7.57689 16.87921,16.87921 l 0,67.51682 c 0,9.33982 -7.57689,16.8792 -16.87921,16.8792 l -479.6695,0 c -9.30231,0 -16.8792,-7.53938 -16.8792,-16.8792 z"
+         style="fill:none;stroke:#548235;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3019"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="198.77649"
+         x="350.63596"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3021"
+         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="216.78098"
+         x="350.63596"
+         xml:space="preserve">Job Vertex</text>
+      <path
+         id="path3023"
+         d="m 24.643639,116.76659 c 0,-7.31432 5.888967,-13.20329 13.16578,-13.20329 l 211.665231,0 c 7.27681,0 13.20329,5.88897 13.20329,13.20329 l 0,512.1901 c 0,7.27681 -5.92648,13.16578 -13.20329,13.16578 l -211.665231,0 c -7.276813,0 -13.16578,-5.88897 -13.16578,-13.16578 z"
+         style="fill:none;stroke:#85888d;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3025"
+         style="font-size:16.20403671px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="123.25767"
+         x="108.06521"
+         xml:space="preserve">JobGraph</text>
+      <path
+         id="path3027"
+         d="m 181.95783,550.29959 c 17.17928,17.14177 17.17928,44.9362 0,62.07797 -17.17928,17.17928 -45.04872,17.17928 -62.228,0 -17.17928,-17.14177 -17.17928,-44.9362 0,-62.07797 17.17928,-17.17928 45.04872,-17.17928 62.228,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3029"
+         d="m 181.95783,550.29959 c 17.17928,17.14177 17.17928,44.9362 0,62.07797 -17.17928,17.17928 -45.04872,17.17928 -62.228,0 -17.17928,-17.14177 -17.17928,-44.9362 0,-62.07797 17.17928,-17.17928 45.04872,-17.17928 62.228,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3031"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="578.25909"
+         x="119.9825"
+         xml:space="preserve">JobVertex</text>
+      <text
+         id="text3033"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="594.76318"
+         x="141.73792"
+         xml:space="preserve">(A)</text>
+      <path
+         id="path3035"
+         d="m 235.24923,198.02121 c 17.15115,17.17928 17.15115,45.02997 0,62.20924 -17.15115,17.18866 -44.95495,17.18866 -62.1061,0 -17.15114,-17.17927 -17.15114,-45.02996 0,-62.20924 17.15115,-17.18866 44.95495,-17.18866 62.1061,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3037"
+         d="m 235.24923,198.02121 c 17.15115,17.17928 17.15115,45.02997 0,62.20924 -17.15115,17.18866 -44.95495,17.18866 -62.1061,0 -17.15114,-17.17927 -17.15114,-45.02996 0,-62.20924 17.15115,-17.18866 44.95495,-17.18866 62.1061,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.50374866px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3039"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="226.05666"
+         x="173.28711"
+         xml:space="preserve">JobVertex</text>
+      <text
+         id="text3041"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="242.56078"
+         x="194.59242"
+         xml:space="preserve">(D)</text>
+      <path
+         id="path3043"
+         d="m 127.8506,390.07843 c 17.19804,17.14177 17.19804,44.95495 0,62.09672 -17.17928,17.16052 -45.029967,17.16052 -62.209247,0 -17.17928,-17.14177 -17.17928,-44.95495 0,-62.09672 17.17928,-17.16053 45.029967,-17.16053 62.209247,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3045"
+         d="m 127.8506,390.07843 c 17.19804,17.14177 17.19804,44.95495 0,62.09672 -17.17928,17.16052 -45.029967,17.16052 -62.209247,0 -17.17928,-17.14177 -17.17928,-44.95495 0,-62.09672 17.17928,-17.16053 45.029967,-17.16053 62.209247,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.49437141px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3047"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="418.08206"
+         x="65.897766"
+         xml:space="preserve">JobVertex</text>
+      <text
+         id="text3049"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="434.58615"
+         x="87.653183"
+         xml:space="preserve">(B)</text>
+      <path
+         id="path3051"
+         d="m 495.49844,554.70694 -16.84169,-20.29255 2.88821,-2.4006 16.82295,20.29255 -2.86947,2.4006 z m -18.52962,-16.44785 -2.85071,-12.26555 11.51537,5.08251 -8.66466,7.18304 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3053"
+         d="m 50.412559,493.43543 c 0,-2.982 2.419353,-5.4201 5.4201,-5.4201 l 75.900161,0 c 3.00075,0 5.4201,2.4381 5.4201,5.4201 l 0,21.6804 c 0,2.98199 -2.41935,5.4201 -5.4201,5.4201 l -75.900161,0 c -3.000747,0 -5.4201,-2.43811 -5.4201,-5.4201 z"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3055"
+         d="m 50.412559,493.43543 c 0,-2.982 2.419353,-5.4201 5.4201,-5.4201 l 75.900161,0 c 3.00075,0 5.4201,2.4381 5.4201,5.4201 l 0,21.6804 c 0,2.98199 -2.41935,5.4201 -5.4201,5.4201 l -75.900161,0 c -3.000747,0 -5.4201,-2.43811 -5.4201,-5.4201 z"
+         style="fill:none;stroke:#ed7d31;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3057"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="501.16693"
+         x="55.542233"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3059"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="517.67102"
+         x="66.945076"
+         xml:space="preserve">Data Set</text>
+      <path
+         id="path3061"
+         d="m 162.79056,493.43543 c 0,-2.982 2.41935,-5.4201 5.4201,-5.4201 l 75.90015,0 c 2.982,0 5.4201,2.4381 5.4201,5.4201 l 0,21.6804 c 0,2.98199 -2.4381,5.4201 -5.4201,5.4201 l -75.90015,0 c -3.00075,0 -5.4201,-2.43811 -5.4201,-5.4201 z"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3063"
+         d="m 162.79056,493.43543 c 0,-2.982 2.41935,-5.4201 5.4201,-5.4201 l 75.90015,0 c 2.982,0 5.4201,2.4381 5.4201,5.4201 l 0,21.6804 c 0,2.98199 -2.4381,5.4201 -5.4201,5.4201 l -75.90015,0 c -3.00075,0 -5.4201,-2.43811 -5.4201,-5.4201 z"
+         style="fill:none;stroke:#ed7d31;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3065"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="501.16693"
+         x="167.96423"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3067"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="517.67102"
+         x="179.36708"
+         xml:space="preserve">Data Set</text>
+      <path
+         id="path3069"
+         d="m 118.47326,152.2223 c 17.18866,17.17928 17.18866,45.03934 0,62.21862 -17.1699,17.17928 -45.029963,17.17928 -62.209243,0 -17.17928,-17.17928 -17.17928,-45.03934 0,-62.21862 17.17928,-17.17928 45.039343,-17.17928 62.209243,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3071"
+         d="m 118.47326,152.2223 c 17.18866,17.17928 17.18866,45.03934 0,62.21862 -17.1699,17.17928 -45.029963,17.17928 -62.209243,0 -17.17928,-17.17928 -17.17928,-45.03934 0,-62.21862 17.17928,-17.17928 45.039343,-17.17928 62.209243,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.50374866px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3073"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="180.25125"
+         x="56.508911"
+         xml:space="preserve">JobVertex</text>
+      <text
+         id="text3075"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="196.75536"
+         x="77.814217"
+         xml:space="preserve">(C)</text>
+      <path
+         id="path3077"
+         d="m 181.65776,551.49989 19.26104,-20.78017 -2.73818,-2.55064 -19.26105,20.78018 2.73819,2.55063 z m 20.74266,-16.86045 3.52588,-12.07801 -11.77793,4.42611 8.25205,7.6519 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3079"
+         d="m 95.57381,485.98982 0,-11.57163 -3.750935,0 0,11.57163 3.750935,0 z m 3.750934,-9.69616 -5.626401,-11.25281 -5.626402,11.25281 11.252803,0 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3081"
+         d="m 115.41625,326.10624 52.00671,-45.40506 -2.45686,-2.8132 -52.02546,45.40506 2.47561,2.8132 z m 53.05697,-41.3353 4.78244,-11.64665 -12.17178,3.16954 7.38934,8.47711 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3083"
+         d="m 217.64797,484.90205 0,-194.74852 -3.75093,0 0,194.74852 3.75093,0 z m 3.75094,-192.87305 -5.62641,-11.2528 -5.6264,11.2528 11.25281,0 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3085"
+         d="m 95.104943,321.88644 0.28132,-85.16497 -3.750934,-0.0188 -0.28132,85.18372 3.750934,0 z m 4.032255,-83.2895 -5.588893,-11.27156 -5.663911,11.23405 11.252804,0.0375 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3087"
+         d="m 55.101227,332.7829 c 0,-3.00074 2.419353,-5.4201 5.4201,-5.4201 l 75.750123,0 c 2.98199,0 5.4201,2.41936 5.4201,5.4201 l 0,21.66165 c 0,3.00075 -2.43811,5.4201 -5.4201,5.4201 l -75.750123,0 c -3.000747,0 -5.4201,-2.41935 -5.4201,-5.4201 z"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3089"
+         d="m 55.101227,332.7829 c 0,-3.00074 2.419353,-5.4201 5.4201,-5.4201 l 75.750123,0 c 2.98199,0 5.4201,2.41936 5.4201,5.4201 l 0,21.66165 c 0,3.00075 -2.43811,5.4201 -5.4201,5.4201 l -75.750123,0 c -3.000747,0 -5.4201,-2.41935 -5.4201,-5.4201 z"
+         style="fill:none;stroke:#ed7d31;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3091"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="340.54807"
+         x="60.154701"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3093"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="357.05219"
+         x="71.557541"
+         xml:space="preserve">Data Set</text>
+      <path
+         id="path3095"
+         d="m 326.59386,117.55429 c 0,-7.72693 6.26406,-13.99099 13.99099,-13.99099 l 527.94402,0 c 7.72693,0 13.99099,6.26406 13.99099,13.99099 l 0,510.6147 c 0,7.68942 -6.26406,13.95348 -13.99099,13.95348 l -527.94402,0 c -7.72693,0 -13.99099,-6.26406 -13.99099,-13.95348 z"
+         style="fill:none;stroke:#85888d;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3097"
+         style="font-size:16.20403671px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="123.49081"
+         x="546.51147"
+         xml:space="preserve">ExecutionGraph</text>
+      <path
+         id="path3099"
+         d="m 560.50214,552.51264 c 17.14177,17.17928 17.14177,45.01122 0,62.1905 -17.14178,17.17928 -44.97371,17.17928 -62.11548,0 -17.14177,-17.17928 -17.14177,-45.01122 0,-62.1905 17.14177,-17.21678 44.9737,-17.21678 62.11548,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3101"
+         d="m 560.50214,552.51264 c 17.14177,17.17928 17.14177,45.01122 0,62.1905 -17.14178,17.17928 -44.97371,17.17928 -62.11548,0 -17.14177,-17.17928 -17.14177,-45.01122 0,-62.1905 17.14177,-17.21678 44.9737,-17.21678 62.11548,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3103"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="572.29529"
+         x="499.31607"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3105"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="588.79938"
+         x="509.66867"
+         xml:space="preserve">Vertex</text>
+      <text
+         id="text3107"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="605.30347"
+         x="508.76843"
+         xml:space="preserve">A (0/2)</text>
+      <path
+         id="path3109"
+         d="m 766.16587,550.29959 c 17.17928,17.14177 17.17928,44.9362 0,62.07797 -17.17928,17.17928 -45.04872,17.17928 -62.228,0 -17.17928,-17.14177 -17.17928,-44.9362 0,-62.07797 17.17928,-17.17928 45.04872,-17.17928 62.228,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3111"
+         d="m 766.16587,550.29959 c 17.17928,17.14177 17.17928,44.9362 0,62.07797 -17.17928,17.17928 -45.04872,17.17928 -62.228,0 -17.17928,-17.14177 -17.17928,-44.9362 0,-62.07797 17.17928,-17.17928 45.04872,-17.17928 62.228,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3113"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="570.00702"
+         x="704.90192"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3115"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="586.51111"
+         x="715.25446"
+         xml:space="preserve">Vertex</text>
+      <text
+         id="text3117"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="603.01526"
+         x="714.35425"
+         xml:space="preserve">A (1/2)</text>
+      <path
+         id="path3119"
+         d="m 336.57135,494.29814 c 0,-3.45086 2.8132,-6.26406 6.30157,-6.26406 l 500.86228,0 c 3.45086,0 6.26406,2.8132 6.26406,6.26406 l 0,25.13126 c 0,3.45086 -2.8132,6.26406 -6.26406,6.26406 l -500.86228,0 c -3.48837,0 -6.30157,-2.8132 -6.30157,-6.26406 z"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3121"
+         d="m 336.57135,494.29814 c 0,-3.45086 2.8132,-6.26406 6.30157,-6.26406 l 500.86228,0 c 3.45086,0 6.26406,2.8132 6.26406,6.26406 l 0,25.13126 c 0,3.45086 -2.8132,6.26406 -6.26406,6.26406 l -500.86228,0 c -3.48837,0 -6.30157,-2.8132 -6.30157,-6.26406 z"
+         style="fill:none;stroke:#ed7d31;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3123"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="503.78391"
+         x="347.53018"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3125"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="520.28802"
+         x="347.53018"
+         xml:space="preserve">Result</text>
+      <path
+         id="path3127"
+         d="m 434.09564,496.3424 c 0,-2.79444 2.25056,-5.04501 5.04501,-5.04501 l 79.33226,0 c 2.7757,0 5.02626,2.25057 5.02626,5.04501 l 0,20.10501 c 0,2.77569 -2.25056,5.02625 -5.02626,5.02625 l -79.33226,0 c -2.79445,0 -5.04501,-2.25056 -5.04501,-5.02625 z"
+         style="fill:#fbe5d6;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3129"
+         d="m 434.09564,496.3424 c 0,-2.79444 2.25056,-5.04501 5.04501,-5.04501 l 79.33226,0 c 2.7757,0 5.02626,2.25057 5.02626,5.04501 l 0,20.10501 c 0,2.77569 -2.25056,5.02625 -5.02626,5.02625 l -79.33226,0 c -2.79445,0 -5.04501,-2.25056 -5.04501,-5.02625 z"
+         style="fill:none;stroke:#843c0c;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3131"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="503.60291"
+         x="444.14053"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3133"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="518.60663"
+         x="436.7887"
+         xml:space="preserve">Result</text>
+      <text
+         id="text3135"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="518.60663"
+         x="475.49835"
+         xml:space="preserve">Partition</text>
+      <path
+         id="path3137"
+         d="m 631.65736,496.36116 c 0,-2.8132 2.25056,-5.06377 5.02625,-5.06377 l 79.33227,0 c 2.77569,0 5.02625,2.25057 5.02625,5.06377 l 0,20.105 c 0,2.7757 -2.25056,5.02626 -5.02625,5.02626 l -79.33227,0 c -2.77569,0 -5.02625,-2.25056 -5.02625,-5.02626 z"
+         style="fill:#fbe5d6;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3139"
+         d="m 631.65736,496.36116 c 0,-2.8132 2.25056,-5.06377 5.02625,-5.06377 l 79.33227,0 c 2.77569,0 5.02625,2.25057 5.02625,5.06377 l 0,20.105 c 0,2.7757 -2.25056,5.02626 -5.02625,5.02626 l -79.33227,0 c -2.77569,0 -5.02625,-2.25056 -5.02625,-5.02626 z"
+         style="fill:none;stroke:#843c0c;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3141"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="503.60291"
+         x="641.72223"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3143"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="518.60663"
+         x="634.37042"
+         xml:space="preserve">Result</text>
+      <text
+         id="text3145"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="518.60663"
+         x="673.08008"
+         xml:space="preserve">Partition</text>
+      <path
+         id="path3147"
+         d="m 116.78534,552.36261 -16.71041,-20.42384 2.90698,-2.36309 16.71041,20.40508 -2.90698,2.38185 z m -18.417084,-16.59789 -2.794446,-12.26555 11.49661,5.13878 -8.702164,7.12677 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3149"
+         d="m 506.09483,345.06721 c 17.16053,17.14178 17.16053,44.95495 0,62.09672 -17.14177,17.16053 -44.95495,17.16053 -62.09672,0 -17.16052,-17.14177 -17.16052,-44.95494 0,-62.09672 17.14177,-17.16052 44.95495,-17.16052 62.09672,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3151"
+         d="m 506.09483,345.06721 c 17.16053,17.14178 17.16053,44.95495 0,62.09672 -17.14177,17.16053 -44.95495,17.16053 -62.09672,0 -17.16052,-17.14177 -17.16052,-44.95494 0,-62.09672 17.14177,-17.16052 44.95495,-17.16052 62.09672,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.49437141px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3153"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="364.74869"
+         x="444.95163"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3155"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="381.25281"
+         x="455.3042"
+         xml:space="preserve">Vertex</text>
+      <text
+         id="text3157"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="397.75693"
+         x="454.40399"
+         xml:space="preserve">B (0/2)</text>
+      <path
+         id="path3159"
+         d="m 702.24995,347.56159 c 17.14177,17.17928 17.14177,44.9737 0,62.11547 -17.14177,17.14177 -44.9362,17.14177 -62.11548,0 -17.14177,-17.14177 -17.14177,-44.93619 0,-62.11547 17.17928,-17.14177 44.97371,-17.14177 62.11548,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3161"
+         d="m 702.24995,347.56159 c 17.14177,17.17928 17.14177,44.9737 0,62.11547 -17.14177,17.14177 -44.9362,17.14177 -62.11548,0 -17.14177,-17.14177 -17.14177,-44.93619 0,-62.11547 17.17928,-17.14177 44.97371,-17.14177 62.11548,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3163"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="367.29498"
+         x="641.09949"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3165"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="383.7991"
+         x="651.45203"
+         xml:space="preserve">Vertex</text>
+      <text
+         id="text3167"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="400.30319"
+         x="650.55182"
+         xml:space="preserve">B (1/2)</text>
+      <path
+         id="path3169"
+         d="m 551.10604,540.0783 14.4411,-53.91969 -3.61965,-0.97524 -14.4411,53.93844 3.61965,0.95649 z m 17.59189,-51.14399 -2.53188,-12.32182 -8.34583,9.41484 10.87771,2.90698 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3171"
+         d="m 770.55446,543.36036 14.4411,-53.90092 -3.6384,-0.97525 -14.4411,53.93844 3.6384,0.93773 z m 17.59189,-51.12523 -2.55064,-12.34058 -8.32707,9.41485 10.87771,2.92573 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3173"
+         d="m 694.448,553.30034 -16.84169,-20.44259 2.88822,-2.36309 16.84169,20.40508 -2.88822,2.4006 z m -18.56712,-16.57913 -2.8132,-12.26555 11.51537,5.10127 -8.70217,7.16428 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3175"
+         d="m 336.57135,275.50614 c 0,-3.63841 2.96324,-6.60165 6.60164,-6.60165 l 500.22462,0 c 3.63841,0 6.60165,2.96324 6.60165,6.60165 l 0,26.36906 c 0,3.63841 -2.96324,6.56414 -6.60165,6.56414 l -500.22462,0 c -3.6384,0 -6.60164,-2.92573 -6.60164,-6.56414 z"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3177"
+         d="m 336.57135,275.50614 c 0,-3.63841 2.96324,-6.60165 6.60164,-6.60165 l 500.22462,0 c 3.63841,0 6.60165,2.96324 6.60165,6.60165 l 0,26.36906 c 0,3.63841 -2.96324,6.56414 -6.60165,6.56414 l -500.22462,0 c -3.6384,0 -6.60164,-2.92573 -6.60164,-6.56414 z"
+         style="fill:none;stroke:#ed7d31;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3179"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="285.63223"
+         x="347.62067"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3181"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="302.13635"
+         x="347.62067"
+         xml:space="preserve">Result</text>
+      <path
+         id="path3183"
+         d="m 628.39405,278.76945 c 0,-2.77569 2.25056,-5.02625 5.02625,-5.02625 l 79.4823,0 c 2.77569,0 5.02625,2.25056 5.02625,5.02625 l 0,20.14252 c 0,2.77569 -2.25056,5.02625 -5.02625,5.02625 l -79.4823,0 c -2.77569,0 -5.02625,-2.25056 -5.02625,-5.02625 z"
+         style="fill:#fbe5d6;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3185"
+         d="m 628.39405,278.76945 c 0,-2.77569 2.25056,-5.02625 5.02625,-5.02625 l 79.4823,0 c 2.77569,0 5.02625,2.25056 5.02625,5.02625 l 0,20.14252 c 0,2.77569 -2.25056,5.02625 -5.02625,5.02625 l -79.4823,0 c -2.77569,0 -5.02625,-2.25056 -5.02625,-5.02625 z"
+         style="fill:none;stroke:#843c0c;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3187"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="286.06693"
+         x="638.53546"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3189"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="301.07068"
+         x="631.18365"
+         xml:space="preserve">Result</text>
+      <text
+         id="text3191"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="301.07068"
+         x="669.89331"
+         xml:space="preserve">Partition</text>
+      <path
+         id="path3193"
+         d="m 430.34471,278.7882 c 0,-2.79444 2.25056,-5.045 5.04501,-5.045 l 79.33226,0 c 2.77569,0 5.02625,2.25056 5.02625,5.045 l 0,20.10501 c 0,2.77569 -2.25056,5.02625 -5.02625,5.02625 l -79.33226,0 c -2.79445,0 -5.04501,-2.25056 -5.04501,-5.02625 z"
+         style="fill:#fbe5d6;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3195"
+         d="m 430.34471,278.7882 c 0,-2.79444 2.25056,-5.045 5.04501,-5.045 l 79.33226,0 c 2.77569,0 5.02625,2.25056 5.02625,5.045 l 0,20.10501 c 0,2.77569 -2.25056,5.02625 -5.02625,5.02625 l -79.33226,0 c -2.79445,0 -5.04501,-2.25056 -5.04501,-5.02625 z"
+         style="fill:none;stroke:#843c0c;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3197"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="286.06693"
+         x="440.48862"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3199"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="301.07068"
+         x="433.13678"
+         xml:space="preserve">Result</text>
+      <text
+         id="text3201"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="301.07068"
+         x="471.84641"
+         xml:space="preserve">Partition</text>
+      <path
+         id="path3203"
+         d="m 475.99358,326.25628 0,-8.73968 -3.75093,0 0,8.73968 3.75093,0 z m 3.75094,-6.86421 -5.62641,-11.2528 -5.6264,11.2528 11.25281,0 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3205"
+         d="m 674.94315,327.38156 0,-9.86496 -3.75094,0 0,9.86496 3.75094,0 z m 3.75093,-7.98949 -5.6264,-11.2528 -5.6264,11.2528 11.2528,0 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3207"
+         d="m 458.64551,268.60442 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0375,-5.62641 0,-2.8132 2.8132,-0.0188 0,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0375,-5.6264 0,-2.8132 2.8132,-0.0188 0,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0375,-5.62641 0,-2.8132 2.8132,-0.0187 0,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0187 z m -0.0375,-5.6264 0,-2.8132 2.8132,-0.0188 0,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.81321,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0375,-5.6264 0,-2.8132 2.8132,-0.0188 0,2.8132 -2.8132,0.
 0188 z m -0.0188,-5.62641 -0.0188,-2.8132 2.8132,-0.0187 0.0188,2.8132 -2.8132,0.0187 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0375,-5.6264 0,-2.8132 2.8132,-0.0188 0,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0375,-5.6264 0,-2.8132 2.8132,-0.0188 0,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0188,-5.62641 -0.0188,-2.8132 2.8132,-0.0187 0.0188,2.8132 -2.8132,0.0187 z m -0.0375,-5.6264 0,-2.8132 2.8132,-0.0188 0,2.8132 -2.8132,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0187 0.0188,2.8132 -2.81321,0.0188 z m -0.0188,-5.6264 -0.0188,-2.8132 2.8132,-0.0188 0.0188,2.8132 -2.8132,0.0188 z m -0.0375,-5.6264 0,-0.16879 2.8132,0 0,0.15003 -2.8132,0.0188 z m -2.79445,1.25656 4.16354,-8.45836 4.27607,8.42085 -8.43961,0.0375 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3209"
+         d="m 654.16297,263.76571 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -0.075,-5.6264 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -0.075,-5.6264 -0.0375,-2.8132 2.77569,-0.0375 0.075,2.8132 -2.8132,0.0375 z m -0.11253,-5.6264 -0.0375,-2.81321 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -0.075,-5.62641 -0.0375,-2.8132 2.81321,-0.0375 0.0375,2.8132 -2.81321,0.0375 z m -0.075,-5.6264 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -0.075,-5.6264 -0.075,-2.8132 2.8132,-0.0375 0.075,2.8132 -2.8132,0.0375 z m -0.11253,-5.6264 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -0.075,-5.58889 -0.0375,-2.8132 2.8132,-0.075 0.0375,2.8132 -2.8132,0.075 z m -0.075,-5.62641 -0.0375,-2.8132 2.8132,-0.075 0.0375,2.8132 -2.8132,0.075 z m -0.075,-5.6264 -0.075,-2.8132 2.8132,-0.075 0.075,2.8132 -2.8132,0.075 z m -0.11253,-5.6264 -0.0375,-2.8132 2.8132,-0.075 0.0375,2.8132 -2.81321,0.075 z m -0.075,-5.6264 -0.0375,-2.813
 2 2.8132,-0.075 0.0375,2.8132 -2.8132,0.075 z m -0.075,-5.6264 -0.0375,-2.8132 2.8132,-0.075 0.0375,2.8132 -2.8132,0.075 z m -0.075,-5.6264 -0.075,-2.8132 2.8132,-0.075 0.075,2.8132 -2.8132,0.075 z m -0.11253,-5.62641 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.7757 -2.8132,0.075 z m -0.075,-5.6264 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -0.075,-5.6264 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -0.075,-5.6264 -0.075,-2.8132 2.8132,-0.0375 0.075,2.8132 -2.81321,0.0375 z m -0.11252,-5.6264 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -0.075,-5.6264 -0.0375,-2.8132 2.8132,-0.0375 0.0375,2.8132 -2.8132,0.0375 z m -2.85071,-3.75094 4.08852,-8.47711 4.35108,8.36458 -8.4396,0.11253 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3211"
+         d="m 607.35131,168.64201 c 17.17928,17.14177 17.17928,44.9362 0,62.07797 -17.17928,17.17928 -45.01122,17.17928 -62.1905,0 -17.17928,-17.14177 -17.17928,-44.9362 0,-62.07797 17.17928,-17.17928 45.01122,-17.17928 62.1905,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3213"
+         d="m 607.35131,168.64201 c 17.17928,17.14177 17.17928,44.9362 0,62.07797 -17.17928,17.17928 -45.01122,17.17928 -62.1905,0 -17.17928,-17.14177 -17.17928,-44.9362 0,-62.07797 17.17928,-17.17928 45.01122,-17.17928 62.1905,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3215"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="188.31085"
+         x="546.13019"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3217"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="204.81496"
+         x="556.48279"
+         xml:space="preserve">Vertex</text>
+      <text
+         id="text3219"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="221.31908"
+         x="555.13245"
+         xml:space="preserve">D (0/2)</text>
+      <path
+         id="path3221"
+         d="m 803.48767,171.11763 c 17.21679,17.14177 17.21679,44.9737 0,62.11547 -17.17928,17.14177 -45.01121,17.14177 -62.19049,0 -17.17928,-17.14177 -17.17928,-44.9737 0,-62.11547 17.17928,-17.14177 45.01121,-17.14177 62.19049,0"
+         style="fill:#d6dce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3223"
+         d="m 803.48767,171.11763 c 17.21679,17.14177 17.21679,44.9737 0,62.11547 -17.17928,17.14177 -45.01121,17.14177 -62.19049,0 -17.17928,-17.14177 -17.17928,-44.9737 0,-62.11547 17.17928,-17.14177 45.01121,-17.14177 62.19049,0"
+         style="fill:none;stroke:#85888d;stroke-width:2.51312613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3225"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="190.85715"
+         x="742.27802"
+         xml:space="preserve">Execution</text>
+      <text
+         id="text3227"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="207.36125"
+         x="752.63055"
+         xml:space="preserve">Vertex</text>
+      <text
+         id="text3229"
+         style="font-size:13.80343914px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="223.86537"
+         x="751.28021"
+         xml:space="preserve">D (1/2)</text>
+      <path
+         id="path3231"
+         d="m 789.04657,432.7078 -0.60015,-173.63076 3.75094,0 0.60015,173.63076 -3.75094,0 z m -4.35108,-171.75529 5.58889,-11.2528 5.66391,11.21529 -11.2528,0.0375 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3233"
+         d="m 580.85095,432.7078 -0.43135,-173.64951 3.75093,0 0.43136,173.63075 -3.75094,0.0188 z m -4.18229,-171.75529 5.58889,-11.27156 5.66392,11.23405 -11.25281,0.0375 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3235"
+         d="m 491.55996,267.1228 46.5491,-24.39983 -1.74419,-3.31958 -46.54909,24.39983 1.74418,3.31958 z m 46.62412,-20.19879 7.35183,-10.22129 -12.58439,0.24381 5.23256,9.97748 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3237"
+         d="m 685.37074,265.41612 46.51159,-24.26854 -1.72543,-3.30083 -46.5491,24.23104 1.76294,3.33833 z m 46.58661,-20.0675 7.38934,-10.20254 -12.56563,0.22506 5.17629,9.97748 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3239"
+         d="m 275.46863,368.81063 19.93621,0 0,-24.26855 19.91746,48.5371 -19.91746,48.51833 0,-24.24979 -19.93621,0 z"
+         style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3241"
+         d="m 275.46863,368.81063 19.93621,0 0,-24.26855 19.91746,48.5371 -19.91746,48.51833 0,-24.24979 -19.93621,0 z"
+         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3243"
+         d="m 487.6965,488.55921 157.72679,-54.01345 -1.2003,-3.56339 -157.72679,54.01345 1.2003,3.56339 z m 157.16415,-49.84992 8.85221,-8.96473 -12.49062,-1.68792 3.63841,10.65265 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3245"
+         d="m 657.95141,488.70925 -164.14089,-55.88892 1.2003,-3.56339 164.14089,55.88892 -1.2003,3.56339 z m -163.57825,-51.72539 -8.8522,-8.96473 12.49061,-1.68792 -3.63841,10.65265 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3247"
+         d="m 518.19159,446.64252 c 0,-2.79445 2.25057,-5.04501 5.02626,-5.04501 l 79.33226,0 c 2.77569,0 5.02625,2.25056 5.02625,5.04501 l 0,20.10501 c 0,2.77569 -2.25056,5.02625 -5.02625,5.02625 l -79.33226,0 c -2.77569,0 -5.02626,-2.25056 -5.02626,-5.02625 z"
+         style="fill:#fbe5d6;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3249"
+         d="m 518.19159,446.66127 c 0,-2.8132 2.25057,-5.06376 5.02626,-5.06376 l 79.33226,0 c 2.77569,0 5.02625,2.25056 5.02625,5.06376 l 0,20.10501 c 0,2.77569 -2.25056,5.02626 -5.02625,5.02626 l -79.33226,0 c -2.77569,0 -5.02626,-2.25057 -5.02626,-5.02626 z"
+         style="fill:none;stroke:#843c0c;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3251"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="453.91522"
+         x="528.3172"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3253"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="468.91898"
+         x="520.96539"
+         xml:space="preserve">Result</text>
+      <text
+         id="text3255"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="468.91898"
+         x="559.67505"
+         xml:space="preserve">Partition</text>
+      <path
+         id="path3257"
+         d="m 746.2109,446.66127 c 0,-2.8132 2.25056,-5.06376 5.02625,-5.06376 l 79.36978,0 c 2.77569,0 5.02625,2.25056 5.02625,5.06376 l 0,20.10501 c 0,2.77569 -2.25056,5.02626 -5.02625,5.02626 l -79.36978,0 c -2.77569,0 -5.02625,-2.25057 -5.02625,-5.02626 z"
+         style="fill:#fbe5d6;fill-opacity:1;fill-rule:evenodd;stroke:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3259"
+         d="m 746.2109,446.66127 c 0,-2.8132 2.25056,-5.06376 5.02625,-5.06376 l 79.36978,0 c 2.77569,0 5.02625,2.25056 5.02625,5.06376 l 0,20.10501 c 0,2.77569 -2.25056,5.02626 -5.02625,5.02626 l -79.36978,0 c -2.77569,0 -5.02625,-2.25057 -5.02625,-5.02626 z"
+         style="fill:none;stroke:#843c0c;stroke-width:1.87546718px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <text
+         id="text3261"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="453.91522"
+         x="756.29614"
+         xml:space="preserve">Intermediate</text>
+      <text
+         id="text3263"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="468.91898"
+         x="748.94434"
+         xml:space="preserve">Result</text>
+      <text
+         id="text3265"
+         style="font-size:12.45310211px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Arial"
+         y="468.91898"
+         x="787.65393"
+         xml:space="preserve">Partition</text>
+      <path
+         id="path3267"
+         d="m 463.63425,485.7085 -0.78769,-54.38855 3.75093,-0.0563 0.7877,54.38855 -3.75094,0.0563 z m -4.51987,-52.45682 5.45761,-11.34657 5.79519,11.17778 -11.2528,0.16879 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3269"
+         d="m 678.54404,488.03408 0,-54.08847 -3.75093,0 0,54.08847 3.75093,0 z m 3.75094,-52.21301 -5.6264,-11.2528 -5.62641,11.2528 11.25281,0 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+      <path
+         id="path3271"
+         d="m 96.98041,377.21272 0,-7.8207 3.75093,0 0,7.8207 -3.75093,0 z m -3.750934,-5.94523 5.626401,-11.2528 5.626403,11.2528 -11.252804,0 z"
+         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none"
+         inkscape:connector-curvature="0" />
+    </g>
+  </g>
+</svg>


[24/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/tasks_slots.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/tasks_slots.svg b/docs/concepts/fig/tasks_slots.svg
deleted file mode 100644
index 8fa8ac5..0000000
--- a/docs/concepts/fig/tasks_slots.svg
+++ /dev/null
@@ -1,395 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="922.50159"
-   height="293.81097"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(-110.17777,-296.88527)"
-     id="layer1">
-    <g
-       transform="translate(84.824534,262.63367)"
-       id="g3402">
-      <path
-         d="m 26.181522,98.086936 0,160.502484 450.749798,0 0,-160.502484 -450.749798,0 z"
-         id="path3404"
-         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 26.181522,98.086936 450.749798,0 0,160.502484 -450.749798,0 z"
-         id="path3406"
-         style="fill:none;stroke:#935f1c;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="194.22502"
-         y="119.99486"
-         id="text3408"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">TaskManager</text>
-      <path
-         d="m 36.656007,137.46237 0,112.52803 136.440243,0 0,-112.52803 -136.440243,0 z"
-         id="path3410"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 36.656007,137.46237 136.440243,0 0,112.52803 -136.440243,0 z"
-         id="path3412"
-         style="fill:none;stroke:#898c92;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="71.012276"
-         y="156.82913"
-         id="text3414"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task Slot</text>
-      <path
-         d="m 183.57073,137.47175 0,112.52803 136.44024,0 0,-112.52803 -136.44024,0 z"
-         id="path3416"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 183.57073,137.47175 136.44024,0 0,112.52803 -136.44024,0 z"
-         id="path3418"
-         style="fill:none;stroke:#898c92;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="217.92635"
-         y="156.82913"
-         id="text3420"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task Slot</text>
-      <path
-         d="m 330.47608,137.30296 0,112.52803 136.44024,0 0,-112.52803 -136.44024,0 z"
-         id="path3422"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 330.47608,137.30296 136.44024,0 0,112.52803 -136.44024,0 z"
-         id="path3424"
-         style="fill:none;stroke:#898c92;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="364.84039"
-         y="156.76062"
-         id="text3426"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task Slot</text>
-      <path
-         d="m 47.749396,203.732 c 0,-17.69503 14.347324,-32.04235 32.042357,-32.04235 17.695033,0 32.042357,14.34732 32.042357,32.04235 0,17.69504 -14.347324,32.03298 -32.042357,32.03298 -17.695033,0 -32.042357,-14.33794 -32.042357,-32.03298"
-         id="path3428"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 47.749396,203.732 c 0,-17.69503 14.347324,-32.04235 32.042357,-32.04235 17.695033,0 32.042357,14.34732 32.042357,32.04235 0,17.69504 -14.347324,32.03298 -32.042357,32.03298 -17.695033,0 -32.042357,-14.33794 -32.042357,-32.03298"
-         id="path3430"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="62.373657"
-         y="201.69257"
-         id="text3432"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="72.126091"
-         y="213.69556"
-         id="text3434"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 97.449277,203.732 c 0,-17.69503 14.347323,-32.04235 32.042353,-32.04235 17.69504,0 32.04236,14.34732 32.04236,32.04235 0,17.69504 -14.34732,32.03298 -32.04236,32.03298 -17.69503,0 -32.042353,-14.33794 -32.042353,-32.03298"
-         id="path3436"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 97.449277,203.732 c 0,-17.69503 14.347323,-32.04235 32.042353,-32.04235 17.69504,0 32.04236,14.34732 32.04236,32.04235 0,17.69504 -14.34732,32.03298 -32.04236,32.03298 -17.69503,0 -32.042353,-14.33794 -32.042353,-32.03298"
-         id="path3438"
-         style="fill:none;stroke:#000000;stroke-width:0.62828153px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="113.99941"
-         y="201.69257"
-         id="text3440"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
-      <text
-         x="121.80136"
-         y="213.69556"
-         id="text3442"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 219.82351,204.04146 c 0,-17.69504 14.34733,-32.04236 32.04236,-32.04236 17.69503,0 32.04236,14.34732 32.04236,32.04236 0,17.69503 -14.34733,32.04235 -32.04236,32.04235 -17.69503,0 -32.04236,-14.34732 -32.04236,-32.04235"
-         id="path3444"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="229.63536"
-         y="190.0311"
-         id="text3446"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-      <text
-         x="260.54306"
-         y="190.0311"
-         id="text3448"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-      <text
-         x="226.18449"
-         y="202.03407"
-         id="text3450"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-      <text
-         x="233.68637"
-         y="214.03706"
-         id="text3452"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
-      <text
-         x="244.18898"
-         y="226.04005"
-         id="text3454"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 362.52781,202.64423 c 0,-17.70441 14.34733,-32.05173 32.03298,-32.05173 17.70442,0 32.03299,14.34732 32.03299,32.05173 0,17.68566 -14.32857,32.03298 -32.03299,32.03298 -17.68565,0 -32.03298,-14.34732 -32.03298,-32.03298"
-         id="path3456"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="383.62009"
-         y="200.63976"
-         id="text3458"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Sink</text>
-      <text
-         x="386.9209"
-         y="212.64275"
-         id="text3460"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[1]</text>
-      <path
-         d="m 42.91069,228.62883 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.49437 0,-1.25656 1.247186,0 0,1.25656 -1.247186,0 z m 0,-2.50375 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.50375 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.49437 0,-1.25656 1.247186,0 0,1.25656 -1.247186,0 z m 0,-2.50375 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.50375 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.49437 0,-1.25656 1.247186,0 0,1.25656 -1.247186,0 z m 0,-2.50375 0,-1.24718 1.247186,0 0,1.24718 -1.247186,0 z m 0,-2.50374 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49438 0,-1.25656 1.247186,0 0,1.25656 -1.247186,0 z m 0,-2.50374 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49437 0,-1.25657 1.247186,0 0,1.25657 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49437 0,-1.25657 1.247186,
 0 0,1.25657 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.50375 0,-1.24719 1.247186,0 0,1.24719 -1.247186,0 z m 0,-2.49437 0,-1.25657 1.247186,0 0,1.25657 -1.247186,0 z m 0.0094,-2.53188 0.05626,-1.09715 0.02813,-0.21568 1.237809,0.18755 -0.02813,0.17817 0.0094,-0.0563 -0.05626,1.06902 -1.247186,-0.0656 z m 0.309453,-2.6069 0.271942,-1.04089 0.07502,-0.21568 1.172167,0.42198 -0.06564,0.18755 0.01875,-0.0563 -0.262565,1.02213 -1.209676,-0.31883 z m 0.815828,-2.485 0.440735,-0.9096 0.159414,-0.26256 1.069017,0.65641 -0.150038,0.23443 0.03751,-0.0563 -0.431358,0.88147 -1.12528,-0.54389 z m 1.312827,-2.25993 0.534508,-0.73144 0.271943,-0.30007 0.928356,0.84396 -0.253188,0.27194 0.03751,-0.0469 -0.515753,0.7033 -1.003375,-0.7408 z m 1.72543,-1.96925 0.581395,-0.5345 0.42198,-0.30946 0.740809,1.00338 -0.393848,0.2907 0.04689,-0.0375 -0.553263,0.50637 -0.84396,-0.91898 z m 2.081768,-1.57539 0.581395,-0.36571 0.56264,-0.27195 0.543886,1.12528 -0.534508,0.25
 319 0.05626,-0.0281 -0.56264,0.34696 -0.647037,-1.05964 z m 2.353712,-1.14403 0.581395,-0.2063 0.66579,-0.17817 0.31883,1.20967 -0.647036,0.16879 0.05626,-0.0188 -0.553263,0.19692 -0.42198,-1.17216 z m 2.541258,-0.63766 0.590772,-0.0938 0.712678,-0.0375 0.06564,1.24718 -0.684546,0.0375 0.05626,-0.009 -0.553262,0.0844 -0.187547,-1.22843 z m 2.588145,-0.15942 1.247185,0 0,1.24719 -1.247185,0 0,-1.24719 z m 2.494371,0 1.256563,0 0,1.24719 -1.256563,0 0,-1.24719 z m 2.503749,0 1.247185,0 0,1.24719 -1.247185,0 0,-1.24719 z m 2.503748,0 1.247186,0 0,1.24719 -1.247186,0 0,-1.24719 z m 2.494372,0 1.256563,0 0,1.24719 -1.256563,0 0,-1.24719 z m 2.503749,0 1.247185,0 0,1.24719 -1.247185,0 0,-1.24719 z m 2.503748,0 1.247186,0 0,1.24719 -1.247186,0 0,-1.24719 z m 2.494372,0 1.256563,0 0,1.24719 -1.256563,0 0,-1.24719 z m 2.503749,0 1.247185,0 0,1.24719 -1.247185,0 0,-1.24719 z m 2.503748,0 1.247186,0 0,1.24719 -1.247186,0 0,-1.24719 z m 2.494372,0 1.256563,0 0,1.24719 -1.256563,0 0,-1.24719 z m
  2.503748,0 1.247186,0 0,1.24719 -1.247186,0 0,-1.24719 z m 2.503749,0 1.247186,0 0,1.24719 -1.247186,0 0,-1.24719 z m 2.494372,0 1.256563,0 0,1.24719 -1.256563,0 0,-1.24719 z m 2.503748,0 1.247186,0 0,1.24719 -1.247186,0 0,-1.24719 z m 2.503749,0 1.247186,0 0,1.24719 -1.247186,0 0,-1.24719 z m 2.494371,0 1.256564,0 0,1.24719 -1.256564,0 0,-1.24719 z m 2.503749,0 1.247185,0 0,1.24719 -1.247185,0 0,-1.24719 z m 2.503745,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.49437,0 1.25657,0 0,1.24719 -1.25657,0 0,-1.24719 z m 2.50375,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.50375,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,
 1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.50374,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.49438,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.53188,0.0563 0.13128,0.009 1.17217,0.17817 -0.18755,1.23781 -1.14404,-0.17817 0.0657,0.009 -0.10315,-0.009 0.0656,-1.24718 z m 2.57876,0.50637 0.10315,0.0281 1.12529,0.40322 -0.42199,1.17217 -1.09714,-0.39385 0.0563,0.0188 -0.0844,-0.0188 0.31883,-1.20968 z m 2.41936,0.994 0.0281,0.0188 1.07839,0.65641 0.0563,0.0469 -0.74081,1.00337 -0.0375,-0.0281 0.0469,0.0281 -1.02213,-0.62828 0.0469,0.0281 0,0 0.54389,-1.12528 z m 2.21305,1.50975 0.83458,0.75957 0.1
 1253,0.13128 -0.91898,0.84396 -0.10315,-0.11253 0.0375,0.0469 -0.80645,-0.75019 0.84396,-0.91898 z m 1.8192,1.88485 0.60953,0.81583 0.15004,0.24381 -1.06902,0.65641 -0.13128,-0.22506 0.0281,0.0469 -0.59077,-0.79707 1.00337,-0.74081 z m 1.41598,2.19429 0.40322,0.84396 0.13129,0.35634 -1.18155,0.42198 -0.11253,-0.3282 0.0188,0.0563 -0.38447,-0.80645 1.12528,-0.54389 z m 0.93773,2.43811 0.22506,0.85334 0.0656,0.43136 -1.23781,0.18754 -0.0563,-0.40322 0.009,0.0656 -0.21567,-0.81583 1.20967,-0.31883 z m 0.44074,2.58815 0.0469,0.89084 0,0.38447 -1.24718,0 0,-0.37509 0,0.0375 -0.0469,-0.87209 1.24719,-0.0656 z m 0.0469,2.53188 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.50374 0,1.24719 -1.24718,0 0,-1.24719 1
 .24718,0 z m 0,2.49438 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50374 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25657 -1.24718,0 0,-1.25657 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25657 -1.24718,0 0,-1.25657 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25657 -1.24718,0 0,-1.25657 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,0.68455 -0.0281,0.60015 -1.24718,-0.0656 0.0281,-0.5814 0,0.0281 0,-0.66579 1.24718,0 z m -0.15941,2.58815 -0.10315,0.72205 -0.15004,0.57202 -1.20968,-0.31883 0.14066,-0.53451 -0.009,0.0563 0.10315,-0.68454 1.22843,0.18755 z m -0.63766,2.54125 -0.22506,0.62829 -0.28132,0.58139 -1.12528,-0.54388 0.26
 257,-0.55327 -0.0188,0.0656 0.21568,-0.60015 1.17217,0.42198 z m -1.13466,2.36309 -0.30007,0.497 -0.43136,0.5814 -1.00337,-0.75019 0.4126,-0.55326 -0.0281,0.0469 0.28132,-0.47824 1.06901,0.65641 z m -1.58477,2.08177 -0.30945,0.34696 -0.6189,0.56264 -0.84396,-0.92835 0.59077,-0.54389 -0.0375,0.0375 0.30007,-0.31883 0.91898,0.84396 z m -1.95986,1.72543 -0.2907,0.21568 -0.80645,0.48762 -0.64703,-1.05964 0.77831,-0.47824 -0.0469,0.0281 0.27194,-0.2063 0.74081,1.01275 z m -2.25994,1.30345 -0.25319,0.12191 -0.97524,0.35634 -0.42198,-1.18155 0.94711,-0.33758 -0.0656,0.0188 0.22506,-0.10315 0.54388,1.12528 z m -2.48499,0.82521 -0.21568,0.0563 -1.07839,0.15942 -0.18755,-1.23781 1.04088,-0.15004 -0.0563,0.009 0.17817,-0.0469 0.31883,1.20968 z m -2.6069,0.30945 -0.23444,0.009 -1.05026,0 0,-1.24719 1.03151,0 -0.0281,0 0.21568,-0.009 0.0656,1.24719 z m -2.53188,0.009 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.49437,0 -1.25657,0 0,-1.
 24719 1.25657,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.49437,0 -1.25656,0 0,-1.24719 1.25656,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.49437,0 -1.25656,0 0,-1.24719 1.25656,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.49437,0 -1.25656,0 0,-1.24719 1.25656,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.49437,0 -1.25656,0 0,-1.24719 1.25656,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.49437,0 -1.25656,0 0,-1.24719 1.25656,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.50374,0 -1.247187,0 0,-1.24719 1.247187,0 0,1.24719 
 z m -2.494373,0 -1.256563,0 0,-1.24719 1.256563,0 0,1.24719 z m -2.503749,0 -1.247186,0 0,-1.24719 1.247186,0 0,1.24719 z m -2.503749,0 -1.247185,0 0,-1.24719 1.247185,0 0,1.24719 z m -2.494371,0 -1.256563,0 0,-1.24719 1.256563,0 0,1.24719 z m -2.503749,0 -1.247185,0 0,-1.24719 1.247185,0 0,1.24719 z m -2.503748,0 -1.247186,0 0,-1.24719 1.247186,0 0,1.24719 z m -2.494372,0 -1.256563,0 0,-1.24719 1.256563,0 0,1.24719 z m -2.503749,0 -1.247185,0 0,-1.24719 1.247185,0 0,1.24719 z m -2.503748,0 -1.247186,0 0,-1.24719 1.247186,0 0,1.24719 z m -2.494372,0 -1.256563,0 0,-1.24719 1.256563,0 0,1.24719 z m -2.503749,0 -1.247185,0 0,-1.24719 1.247185,0 0,1.24719 z m -2.503748,0 -1.247186,0 0,-1.24719 1.247186,0 0,1.24719 z m -2.494372,0 -1.256563,0 0,-1.24719 1.256563,0 0,1.24719 z m -2.503748,0 -1.247186,0 0,-1.24719 1.247186,0 0,1.24719 z m -2.503749,0 -1.247186,0 0,-1.24719 1.247186,0 0,1.24719 z m -2.494372,0 -1.256563,0 0,-1.24719 1.256563,0 0,1.24719 z m -2.503748,0 -1.247186,0 0,-1.2471
 9 1.247186,0 0,1.24719 z m -2.531881,-0.0188 -1.003375,-0.0469 -0.300075,-0.0469 0.187547,-1.23781 0.262565,0.0469 -0.05626,-0.009 0.975243,0.0469 -0.06564,1.24718 z m -2.597522,-0.31883 -0.956488,-0.25319 -0.300075,-0.10315 0.42198,-1.18154 0.271943,0.10315 -0.05626,-0.0188 0.928356,0.24382 -0.309452,1.20967 z m -2.484994,-0.84396 -0.825206,-0.39385 -0.337584,-0.2063 0.656414,-1.06901 0.309452,0.19692 -0.05626,-0.0375 0.797073,0.38447 -0.543885,1.12528 z m -2.250561,-1.31283 -0.656413,-0.48762 -0.365717,-0.33758 0.843961,-0.92836 0.337584,0.31883 -0.04689,-0.0375 0.628282,0.45949 -0.74081,1.01275 z m -1.959863,-1.74418 -0.468867,-0.51575 -0.365716,-0.48762 1.003375,-0.75019 0.346961,0.46887 -0.03751,-0.0563 0.450112,0.497 -0.928356,0.84396 z m -1.566015,-2.09115 -0.309452,-0.51575 -0.309452,-0.63766 1.12528,-0.54388 0.300075,0.60952 -0.03751,-0.0469 0.300075,0.48762 -1.069016,0.64703 z m -1.115903,-2.36308 -0.17817,-0.497 -0.196924,-0.75957 1.209677,-0.30945 0.187546,0.73143 -0.018
 75,-0.0563 0.168792,0.45949 -1.172167,0.43136 z m -0.628282,-2.54126 -0.07502,-0.50638 -0.03751,-0.80645 1.247186,-0.0656 0.03751,0.77832 -0.0094,-0.0656 0.07502,0.47824 -1.237809,0.18755 z"
-         id="path3462"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 215.7725,229.43528 0,-1.24718 1.24719,0 0,1.24718 -1.24719,0 z m 0,-2.49437 0,-1.25656 1.24719,0 0,1.25656 -1.24719,0 z m 0,-2.50375 0,-1.24718 1.24719,0 0,1.24718 -1.24719,0 z m 0,-2.50375 0,-1.24718 1.24719,0 0,1.24718 -1.24719,0 z m 0,-2.49437 0,-1.25656 1.24719,0 0,1.25656 -1.24719,0 z m 0,-2.50375 0,-1.24718 1.24719,0 0,1.24718 -1.24719,0 z m 0,-2.50374 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.49438 0,-1.25656 1.24719,0 0,1.25656 -1.24719,0 z m 0,-2.50374 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.50375 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.49438 0,-1.25656 1.24719,0 0,1.25656 -1.24719,0 z m 0,-2.50374 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.50375 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.49437 0,-1.25657 1.24719,0 0,1.25657 -1.24719,0 z m 0,-2.50375 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.50375 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.49437 0,-1.25657 1.24719,0 0,1.25657 -1.24719,0 z m 0,-2.5
 0375 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.50375 0,-1.24719 1.24719,0 0,1.24719 -1.24719,0 z m 0,-2.49437 0,-1.25656 1.24719,0 0,1.25656 -1.24719,0 z m 0,-2.50375 0,-0.7877 0.0281,-0.497 1.24718,0.0656 -0.0281,0.47824 0,-0.0375 0,0.77832 -1.24719,0 z m 0.14066,-2.58815 0.11253,-0.72205 0.14066,-0.56264 1.21906,0.30945 -0.14066,0.53451 0.009,-0.0656 -0.10315,0.69392 -1.23781,-0.18755 z m 0.64704,-2.54125 0.19692,-0.54389 0.31883,-0.65641 1.12528,0.54388 -0.30007,0.62828 0.0187,-0.0563 -0.18754,0.51575 -1.17217,-0.43135 z m 1.15341,-2.34434 0.2063,-0.33758 0.54389,-0.73143 1.00337,0.75018 -0.52513,0.7033 0.0281,-0.0469 -0.18755,0.31883 -1.06902,-0.65642 z m 1.60353,-2.05363 0.13128,-0.15004 0.83458,-0.75019 0.83459,0.92836 -0.80645,0.73143 0.0469,-0.0469 -0.10315,0.12191 -0.93773,-0.83458 z m 2.06301,-1.71606 0.97524,-0.59077 0.15004,-0.075 0.54389,1.12528 -0.13129,0.0656 0.0563,-0.0375 -0.94711,0.5814 -0.64704,-1.06902 z m 2.3162,-1.20967 0.91898,-0.33759 0.32821,-0.0844
  0.30945,1.20967 -0.30007,0.075 0.0563,-0.0187 -0.88147,0.3282 -0.43136,-1.17216 z m 2.52251,-0.7033 0.85333,-0.13129 0.45012,-0.0281 0.0656,1.25657 -0.42198,0.0187 0.0563,-0.009 -0.81583,0.12191 -0.18754,-1.22843 z m 2.58814,-0.19693 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.50375,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-
 1.24719 z m 2.49437,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50375,0 1.24718,0 0,1.24719 -1.24718,0 0,-1.24719 z m 2.50374,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.49438,0 1.25656,0 0,1.24719 -1.25656,0 0,-1.24719 z m 2.50374,0 1.24719,0 0,1.24719 -1.24719,0 0,-1.24719 z m 2.53189,0.009 1.10652,0.0563 0.19693,0.0281 -0.17817,1.2378 -0.1688,-0.0281 0.0563,0.009 -1.0784,-0.0563 0.0657,-1.24719 z m 2.59752,0.30945 0.96586,0.24381 0.30008,0.10315 -0.43136,1.18155 -0.27194,-0.10315 0.0656,0.0187 -0.93773,-0.23443 0.30945,-1.20968 z m 2.48499,0.83459 0.74081,0.36571 0.4126,0.25319 -0.64703,1.06902 -0.38447,-0.24382 0.0563,0.0375 -0.72205,-0.34696 0.54388,-1.13465 z m 2.24119,1.34095 0.50637,0.3751 0.50638,0.45949 -0.84396,0.92835 -0.47825,-0.44073 0.0469,0.0375 -0.47824,-0.35634 0.74081,-1.00338 z m 1.93173,1.77232 0.28132,0.30945 0.5345,0.71268 -1.00337,0.75019 -0.51575,-0.69393 0.0375,0.0469 -0.26257,-0.2907 0.92836,-0.83458 z m 1.51912,2.11928 0.11253,0.19692 0.47825,
 0.97525 -1.13466,0.54388 -0.45949,-0.95649 0.0375,0.0563 -0.10315,-0.15942 1.06901,-0.65641 z m 1.05027,2.4006 0.0188,0.0563 0.30945,1.2003 0.009,0.075 -1.23781,0.18755 0,-0.0469 0.009,0.0656 -0.2907,-1.14404 0.0188,0.0563 -0.009,-0.0188 1.17217,-0.43135 z m 0.52513,2.63503 0.0563,1.14403 0,0.13129 -1.24718,0 0,-0.12191 0,0.0375 -0.0657,-1.12528 1.25657,-0.0656 z m 0.0563,2.53188 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.50375 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.5037
 5 0,1.24718 -1.24718,0 0,-1.24718 1.24718,0 z m 0,2.50374 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49438 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50374 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25657 -1.24718,0 0,-1.25657 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25657 -1.24718,0 0,-1.25657 1.24718,0 z m -0.0281,2.53188 -0.0375,0.65642 -0.0938,0.64703 -1.23781,-0.17817 0.0938,-0.6189 -0.009,0.0563 0.0281,-0.62828 1.25656,0.0656 z m -0.40323,2.58815 -0.13128,0.52513 -0.26256,0.73143 -1.17217,-0.43136 0.25319,-0.69392 -0.0188,0.0563 0.12191,-0.497 1.20967,0.30945 z m -0.92835,2.45686 -0.15942,0.32821 -0.48762,0.80645 -1.06901,-0.65641 0.47824,-0.76895 -0.0281,0.0469 0.15003,-0.30945 1.11591,0.55326 z m -1.4066,2.1943 -0.10315,0.14066 -0.76894,0.84396 -0.91898,-0.84396 0.74081,-0.81583 -0.0375,0.046
 9 0.0844,-0.11253 1.00338,0.74081 z m -1.88485,1.9036 -0.89084,0.66579 -0.1688,0.10315 -0.64703,-1.06902 0.14066,-0.0844 -0.0469,0.0281 0.86272,-0.64704 0.75018,1.00338 z m -2.19429,1.42535 -0.84396,0.40323 -0.34697,0.13128 -0.43135,-1.17217 0.31883,-0.1219 -0.0469,0.0187 0.80645,-0.38447 0.54389,1.12528 z m -2.44749,0.93774 -0.75019,0.19692 -0.52513,0.075 -0.18754,-1.23781 0.497,-0.0656 -0.0657,0.009 0.73144,-0.18755 0.30007,1.20968 z m -2.57877,0.4126 -0.69392,0.0375 -0.59077,0 0,-1.24719 0.57202,0 -0.0281,0 0.67517,-0.0375 0.0656,1.24719 z m -2.53188,0.0375 -1.25656,0 0,-1.24719 1.25656,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.50374,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.49438,0 -1.25656,0 0,-1.24719 1.25656,0 0,1.24719 z m -2.50374,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.49437,0 -1.25657,0 0,-1.24719 1.25657,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0
  0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.49437,0 -1.25657,0 0,-1.24719 1.25657,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.49437,0 -1.25657,0 0,-1.24719 1.25657,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.49437,0 -1.25656,0 0,-1.24719 1.25656,0 0,1.24719 z m -2.50375,0 -1.24719,0 0,-1.24719 1.24719,0 0,1.24719 z m -2.50375,0 -1.24718,0 0,-1.24719 1.24718,0 0,1.24719 z m -2.49437,0 -1.00338,0 -0.28132,-0.009 0.0656,-1.25656 0.26256,0.0188 -0.0375,0 0.994,0 0,1.24719 z m -2.58815,-0.11253 -0.93773,-0.14066 -0.35634,-0.0938 0.30945,-1.20968 0.32821,0.0844 -0.0656,-0.009 0.9096,0.13128 -0.18755,1.23781 z m -2.55063,-0.60015 -0.75019,-0.27194 -0.46886,-0.22506 0.54388,-1.12528 0.44074,0.2063 -0.0563,-0.0187 0.72206,0.26256 -0.43136,1.17217 z m -2.36309,-1.1159 -0.52513,-0.31883 -
 0.56264,-0.42198 0.75019,-1.00338 0.5345,0.40323 -0.0469,-0.0281 0.497,0.30007 -0.64704,1.06902 z m -2.09115,-1.57539 -0.30007,-0.27195 -0.60953,-0.67517 0.92836,-0.84396 0.59077,0.65642 -0.0469,-0.0469 0.28132,0.25319 -0.84397,0.92836 z m -1.70667,-1.97862 -0.12191,-0.15942 -0.58139,-0.94711 1.06902,-0.65641 0.56264,0.92835 -0.0281,-0.0469 0.10315,0.13128 -1.00337,0.75019 z m -1.30345,-2.34434 -0.40323,-1.1159 -0.0375,-0.12191 1.21905,-0.30945 0.0187,0.0938 -0.0187,-0.0563 0.39385,1.0784 -1.17217,0.43135 z m -0.74081,-2.51312 -0.15941,-1.05026 -0.009,-0.24382 1.24718,-0.075 0.009,0.22505 -0.009,-0.0563 0.15942,1.01275 -1.23781,0.18755 z"
-         id="path3464"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 359.24575,229.29462 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49438 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51312 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25657 1.25656,0 0,1.25657 -1.25656,0 z m 0,-2.
 51313 0,-1.23781 1.25656,0 0,1.23781 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.49437 0,-1.25656 1.25656,0 0,1.25656 -1.25656,0 z m 0,-2.51313 0,-0.78769 0.0375,-0.48763 1.2378,0.0563 -0.0187,0.46887 0,-0.0375 0,0.78769 -1.25656,0 z m 0.15003,-2.58814 0.11253,-0.73143 0.13129,-0.54389 1.21905,0.30007 -0.13128,0.52514 0,-0.0563 -0.0938,0.69392 -1.23781,-0.18754 z m 0.63766,-2.53188 0.2063,-0.56264 0.30008,-0.63766 1.14403,0.54388 -0.31882,0.61891 0.0375,-0.0563 -0.2063,0.52513 -1.16279,-0.43135 z m 1.16279,-2.34434 0.2063,-0.35634 0.54389,-0.71267 0.994,0.73143 -0.52513,0.71268 0.0375,-0.0563 -0.20631,0.31883 -1.05026,-0.63766 z m 1.59415,-2.06301 0.13128,-0.15004 0.82521,-0.75019 0.84396,0.93774 -0.80645,0.71268 0.0563,-0.0375 -0.11253,0.13128 -0.93773,-0.84396 z m 2.06301,-1.70668 0.97525,-0.60015 0.15003,-0.075 0.54389,1.12528 -0.13128,0.0563 0.0563,-0.0375 -0.95649,0.5814 -0.63766,-1.05027 z m 2.30683,-1.21905 0.93773,-0.33758 0.31883,-0.0938 0.3
 0008,1.21906 -0.28132,0.075 0.0563,-0.0187 -0.90022,0.31883 -0.43136,-1.16279 z m 2.53188,-0.71268 0.86271,-0.13128 0.43136,-0.0188 0.075,1.25657 -0.4126,0.0188 0.0563,-0.0188 -0.8252,0.13128 -0.18755,-1.23781 z m 2.58814,-0.18754 1.23781,0 0,1.2378 -1.23781,0 0,-1.2378 z m 2.49438,0 1.25656,0 0,1.2378 -1.25656,0 0,-1.2378 z m 2.49437,0 1.25656,0 0,1.2378 -1.25656,0 0,-1.2378 z m 2.51312,0 1.23781,0 0,1.2378 -1.23781,0 0,-1.2378 z m 2.49437,0 1.25657,0 0,1.2378 -1.25657,0 0,-1.2378 z m 2.49438,0 1.25656,0 0,1.2378 -1.25656,0 0,-1.2378 z m 2.51312,0 1.23781,0 0,1.2378 -1.23781,0 0,-1.2378 z m 2.49437,0 1.25657,0 0,1.2378 -1.25657,0 0,-1.2378 z m 2.49437,0 1.25657,0 0,1.2378 -1.25657,0 0,-1.2378 z m 2.51313,0 1.23781,0 0,1.2378 -1.23781,0 0,-1.2378 z m 2.49437,0 1.25657,0 0,1.2378 -1.25657,0 0,-1.2378 z m 2.49437,0 1.25657,0 0,1.2378 -1.25657,0 0,-1.2378 z m 2.51313,0 1.23781,0 0,1.2378 -1.23781,0 0,-1.2378 z m 2.49437,0 1.25656,0 0,1.2378 -1.25656,0 0,-1.2378 z m 2.49437,0 1.25657,0 
 0,1.2378 -1.25657,0 0,-1.2378 z m 2.51313,0 1.23781,0 0,1.2378 -1.23781,0 0,-1.2378 z m 2.49437,0 1.25656,0 0,1.2378 -1.25656,0 0,-1.2378 z m 2.49437,0 1.25656,0 0,1.2378 -1.25656,0 0,-1.2378 z m 2.51313,0 1.23781,0 0,1.2378 -1.23781,0 0,-1.2378 z m 2.53188,0 1.10652,0.0563 0.18755,0.0375 -0.16879,1.23781 -0.16879,-0.0375 0.0563,0.0188 -1.08777,-0.0563 0.075,-1.25656 z m 2.58814,0.30007 0.97525,0.26257 0.30007,0.0938 -0.43136,1.18154 -0.26256,-0.0938 0.0563,0.0187 -0.93773,-0.24381 0.30007,-1.21905 z m 2.49437,0.84396 0.75019,0.37509 0.39385,0.24381 -0.63766,1.06902 -0.37509,-0.24381 0.0375,0.0375 -0.71268,-0.35634 0.54388,-1.12528 z m 2.23181,1.33158 0.52513,0.39385 0.48762,0.45011 -0.84396,0.91898 -0.46886,-0.43136 0.0563,0.0375 -0.48762,-0.35634 0.73143,-1.01275 z m 1.93173,1.7817 0.30008,0.31883 0.52513,0.69392 -1.01275,0.75019 -0.50638,-0.67517 0.0375,0.0375 -0.26257,-0.28132 0.91898,-0.84396 z m 1.51913,2.11927 0.13128,0.20631 0.46887,0.95648 -1.12528,0.54389 -0.46887,-0.93773
  0.0375,0.0563 -0.11253,-0.16879 1.06902,-0.65642 z m 1.06902,2.4006 0.0188,0.0563 0.30008,1.2003 0.0188,0.075 -1.23781,0.18755 -0.0188,-0.0375 0.0188,0.0563 -0.28132,-1.14404 0.0188,0.0563 -0.0188,-0.0187 1.18155,-0.43136 z m 0.50637,2.62566 0.075,1.16279 0,0.11252 -1.25656,0 0,-0.11252 0,0.0375 -0.0563,-1.12529 1.23781,-0.075 z m 0.075,2.53188 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49438 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51312 0,1.23781 -1.25656,0 0,-1.23781 1
 .25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25657 -1.25656,0 0,-1.25657 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m -0.0375,2.53188 -0.0375,0.67517 -0.0938,0.63766 -1.23781,-0.18755 0.0938,-0.60015 0,0.0563 0.0375,-0.63766 1.23781,0.0563 z m -0.39385,2.58815 -0.13128,0.54388 -0.26256,0.71268 -1.18155,-0.43136 0.26257,-0.67517 -0.0188,0.0563 0.13129,-0.50638 1.20029,0.30008 z m -0.91897,2.45686 -0.1688,0.35634 -0.48762,0.78769 -1.06901,-0.65641 0.46886,-0.75019 -0.0375,0.0375 0.16879,-0.31883 1.12529,0.54389 z m -1.40661,2.21305 -0.11252,0.15004 -0.76894,0.8252 -0.91898,-0.84396 0.73143,-0.80645 -0.0375,0.0563 0.0938,-0.13129 1.01275,0.75019 z m
  -1.89422,1.89422 -0.90022,0.67517 -0.15004,0.0938 -0.65641,-1.06901 0.13128,-0.075 -0.0375,0.0375 0.88147,-0.65642 0.73143,0.994 z m -2.17554,1.42536 -0.86271,0.4126 -0.33759,0.13128 -0.43136,-1.18154 0.30008,-0.11253 -0.0563,0.0188 0.84396,-0.39385 0.54388,1.12528 z m -2.45686,0.95649 -0.76894,0.18754 -0.50638,0.075 -0.18755,-1.23781 0.48763,-0.075 -0.075,0.0188 0.75018,-0.18755 0.30008,1.21906 z m -2.56939,0.4126 -0.71268,0.0375 -0.58139,0 0,-1.25656 0.56264,0 -0.0375,0 0.69392,-0.0375 0.075,1.25656 z m -2.53188,0.0375 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51313,0 -1.2378,0 0,-1.25656 1.2378,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1
 .25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49438,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.49437,0 -1.25656,0 0,-1.25656 1.25656,0 0,1.25656 z m -2.51312,0 -1.23781,0 0,-1.25656 1.23781,0 0,1.25656 z m -2.49437,0 -1.25657,0 0,-1.25656 1.25657,0 0,1.25656 z m -2.49438,0 -1.0315,0 -0.26257,-0.0188 0.075,-1.25656 0.24381,0.0188 -0.0375,0 1.01275,0 0,1.25656 z m -2.58814,-0.11253 -0.95649,-0.15004 -0.33758,-0.075 0.30007,-1.21905 0.31883,0.075 -0.075,-0.0188 0.91898,0.15004 -0.16879,1.23781 z m -2.56939,-0.60015 -0.75019,-0.28132 -0.45011,-0.2063 0.54389,-1.12528 0.4126,0.18755 -0.0563,-0.0188 0.73144,0.26257 -0.43136,1.18154 z m -2.36309,-1.12528 -0.52513,-0.31883 -0.54389,-0.4126 0.73144,-0.994 0.52513,0.
 39385 -0.0375,-0.0375 0.50637,0.31883 -0.65641,1.05026 z m -2.08177,-1.57539 -0.31883,-0.28132 -0.58139,-0.65642 0.91898,-0.84396 0.58139,0.63766 -0.0563,-0.0375 0.30007,0.26257 -0.84396,0.91898 z m -1.70667,-1.96924 -0.13129,-0.16879 -0.58139,-0.93774 1.06902,-0.63766 0.56264,0.90023 -0.0375,-0.0375 0.11252,0.13128 -0.99399,0.75019 z m -1.27532,-2.26932 -0.0188,-0.0375 -0.43135,-1.16279 -0.0188,-0.0938 1.2003,-0.31883 0.0188,0.075 -0.0188,-0.0563 0.4126,1.08778 -0.0375,-0.0563 0.0188,0.0188 -1.12528,0.54388 z m -0.7877,-2.56939 -0.15003,-1.08777 -0.0188,-0.22506 1.25656,-0.0563 0,0.18755 0,-0.0563 0.15004,1.05027 -1.23781,0.18754 z"
-         id="path3466"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 521.99879,55.063718 57.20175,42.798162 -0.75018,1.012752 -57.20175,-42.798162 0.75018,-1.012752 z m 57.33304,40.547601 2.51312,5.007501 -5.51387,-0.994001 3.00075,-4.0135 z"
-         id="path3468"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 496.32365,98.086936 0,160.502484 450.71228,0 0,-160.502484 -450.71228,0 z"
-         id="path3470"
-         style="fill:#c5e0b4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 496.32365,98.086936 450.71228,0 0,160.502484 -450.71228,0 z"
-         id="path3472"
-         style="fill:none;stroke:#935f1c;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="664.28186"
-         y="119.99486"
-         id="text3474"
-         xml:space="preserve"
-         style="font-size:17.55437279px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">TaskManager</text>
-      <path
-         d="m 506.78875,137.47175 0,112.52803 136.42149,0 0,-112.52803 -136.42149,0 z"
-         id="path3476"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 506.78875,137.47175 136.42149,0 0,112.52803 -136.42149,0 z"
-         id="path3478"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="541.06915"
-         y="156.82913"
-         id="text3480"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task Slot</text>
-      <path
-         d="m 653.71286,137.47175 0,112.52803 136.42148,0 0,-112.52803 -136.42148,0 z"
-         id="path3482"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 653.71286,137.47175 136.42148,0 0,112.52803 -136.42148,0 z"
-         id="path3484"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="687.98322"
-         y="156.82913"
-         id="text3486"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task Slot</text>
-      <path
-         d="m 800.59945,137.32171 0,112.52803 136.45899,0 0,-112.52803 -136.45899,0 z"
-         id="path3488"
-         style="fill:#e4eaf4;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 800.59945,137.32171 136.45899,0 0,112.52803 -136.45899,0 z"
-         id="path3490"
-         style="fill:none;stroke:#898c92;stroke-width:1.23780835px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="834.89728"
-         y="156.76062"
-         id="text3492"
-         xml:space="preserve"
-         style="font-size:15.00373745px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Task Slot</text>
-      <path
-         d="m 517.87277,203.732 c 0,-17.68565 14.30981,-32.03298 31.95796,-32.03298 17.64814,0 31.95796,14.34733 31.95796,32.03298 0,17.70442 -14.30982,32.03298 -31.95796,32.03298 -17.64815,0 -31.95796,-14.32856 -31.95796,-32.03298"
-         id="path3494"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 517.87277,203.732 c 0,-17.68565 14.30981,-32.03298 31.95796,-32.03298 17.64814,0 31.95796,14.34733 31.95796,32.03298 0,17.70442 -14.30982,32.03298 -31.95796,32.03298 -17.64815,0 -31.95796,-14.32856 -31.95796,-32.03298"
-         id="path3496"
-         style="fill:none;stroke:#000000;stroke-width:0.61890417px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="532.43054"
-         y="201.69257"
-         id="text3498"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Source</text>
-      <text
-         x="542.18298"
-         y="213.69556"
-         id="text3500"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-      <path
-         d="m 567.5914,203.75076 c 0,-17.70441 14.29106,-32.03298 31.95796,-32.03298 17.62939,0 31.95796,14.32857 31.95796,32.03298 0,17.70441 -14.32857,32.03298 -31.95796,32.03298 -17.6669,0 -31.95796,-14.32857 -31.95796,-32.03298"
-         id="path3502"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 567.5914,203.75076 c 0,-17.70441 14.29106,-32.03298 31.95796,-32.03298 17.62939,0 31.95796,14.32857 31.95796,32.03298 0,17.70441 -14.32857,32.03298 -31.95796,32.03298 -17.6669,0 -31.95796,-14.32857 -31.95796,-32.03298"
-         id="path3504"
-         style="fill:none;stroke:#000000;stroke-width:0.63765883px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="584.05627"
-         y="201.69257"
-         id="text3506"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">map()</text>
-      <text
-         x="591.85822"
-         y="213.69556"
-         id="text3508"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-      <path
-         d="m 689.94688,204.05083 c 0,-17.70441 14.32857,-32.03298 31.95796,-32.03298 17.66691,0 31.95797,14.32857 31.95797,32.03298 0,17.70441 -14.29106,32.03298 -31.95797,32.03298 -17.62939,0 -31.95796,-14.32857 -31.95796,-32.03298"
-         id="path3510"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="699.6922"
-         y="190.0311"
-         id="text3512"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">keyBy</text>
-      <text
-         x="730.59991"
-         y="190.0311"
-         id="text3514"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">()/</text>
-      <text
-         x="696.24133"
-         y="202.03407"
-         id="text3516"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">window()/</text>
-      <text
-         x="703.74323"
-         y="214.03706"
-         id="text3518"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:normal;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">apply()</text>
-      <text
-         x="714.24585"
-         y="226.04005"
-         id="text3520"
-         xml:space="preserve"
-         style="font-size:10.05250454px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">[2]</text>
-      <path
-         d="m 513.05281,228.65696 0,-1.27531 1.23781,0 0,1.27531 -1.23781,0 z m 0,-2.51312 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.51313 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.47561 0,-1.27532 1.23781,0 0,1.27532 -1.23781,0 z m 0,-2.51313 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.51313 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.47561 0,-1.27532 1.23781,0 0,1.27532 -1.23781,0 z m 0,-2.51313 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.51313 0,-1.2378 1.23781,0 0,1.2378 -1.23781,0 z m 0,-2.47561 0,-1.27532 1.23781,0 0,1.27532 -1.23781,0 z m 0,-2.51313 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.51312 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.47562 0,-1.27532 1.23781,0 0,1.27532 -1.23781,0 z m 0,-2.51313 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.51312 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.47562 0,-1.27532 1.23781,0 0,1.27532 -1.23781,0 z m 0,-2.51313 0,-1.2378 1.23781,0 0,1.2378 -1.23781,0 z m 0,-2.5131
 2 0,-1.23781 1.23781,0 0,1.23781 -1.23781,0 z m 0,-2.47562 0,-1.27532 1.23781,0 0,1.27532 -1.23781,0 z m 0,-2.51313 0,-1.2378 1.23781,0 0,1.2378 -1.23781,0 z m 0,-2.51312 0.075,-1.12528 0,-0.18755 1.23781,0.18755 0,0.15004 0,-0.0375 -0.075,1.05026 -1.23781,-0.0375 z m 0.30008,-2.62566 0.30007,-1.05026 0.075,-0.18754 1.16279,0.4126 -0.075,0.18755 0.0375,-0.075 -0.26256,1.01275 -1.23781,-0.30008 z m 0.8252,-2.47561 0.45012,-0.90023 0.15003,-0.26256 1.08778,0.63766 -0.15004,0.22505 0,-0.0375 -0.41261,0.86272 -1.12528,-0.52513 z m 1.31283,-2.25056 0.52513,-0.75019 0.30008,-0.30007 0.90022,0.86271 -0.22505,0.26257 0.0375,-0.0375 -0.52513,0.67516 -1.01275,-0.71267 z m 1.72543,-1.988 0.60015,-0.52513 0.4126,-0.30007 0.75019,0.97524 -0.4126,0.30007 0.0375,-0.0375 -0.56264,0.52514 -0.82521,-0.93774 z m 2.06302,-1.57539 0.60015,-0.37509 0.56264,-0.26257 0.56264,1.12528 -0.56264,0.26257 0.075,-0.0375 -0.56264,0.33758 -0.67516,-1.05026 z m 2.36308,-1.12528 0.60015,-0.22506 0.67517,-0.18754 0.30
- [... remaining SVG markup elided: dashed outlines and connector arrows plus the text labels "Processes" and "Threads", followed by the closing </g> and </svg> tags ...]
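
The figure removed above carried the labels "Processes" and "Threads", i.e. how Flink distributes an operator's parallel subtasks across worker processes and their threads. As a minimal, non-authoritative sketch of that idea, assuming the Flink 1.x DataStream Java API (the host, port, and uppercasing map below are hypothetical placeholders, not part of this commit):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ParallelismSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Job-wide default: every operator runs as 4 parallel subtasks,
            // which the runtime schedules as threads inside the worker processes.
            env.setParallelism(4);

            DataStream<String> lines = env.socketTextStream("localhost", 9999);

            lines.map(new MapFunction<String, String>() {
                    @Override
                    public String map(String value) {
                        return value.toUpperCase();
                    }
                })
                .setParallelism(2) // per-operator override of the job-wide default
                .print();

            env.execute("parallelism sketch");
        }
    }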

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/concepts/fig/windows.svg
----------------------------------------------------------------------
diff --git a/docs/concepts/fig/windows.svg b/docs/concepts/fig/windows.svg
deleted file mode 100644
index eee9bec..0000000
--- a/docs/concepts/fig/windows.svg
+++ /dev/null
@@ -1,193 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   version="1.1"
-   width="675.98212"
-   height="115.61612"
-   id="svg2">
-  <defs
-     id="defs4" />
-  <metadata
-     id="metadata7">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     transform="translate(77.741049,-445.98269)"
-     id="layer1">
-    <g
-       transform="translate(-106.28566,414.06236)"
-       id="g2989">
-      <path
-         d="m 28.544611,75.993932 570.142039,0 0,-16.504112 32.97071,32.970714 -32.97071,32.970716 0,-16.4666 -570.142039,0 z"
-         id="path2991"
-         style="fill:#ffd966;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <text
-         x="629.14722"
-         y="114.28457"
-         id="text2993"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event stream</text>
-      <text
-         x="282.09552"
-         y="40.469826"
-         id="text2995"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Time windows</text>
-      <path
-         d="m 626.29353,105.28873 -12.30307,-4.8012 0.45011,-1.162786 12.30307,4.801196 -0.45011,1.16279 z m -11.81545,-2.58814 -3.75093,-4.126033 5.55138,-0.52513 -1.80045,4.651163 z"
-         id="path2997"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 591.48485,70.892661 c 0,-3.150785 -2.06301,-5.70142 -4.59489,-5.70142 l -43.47333,0 c -2.53188,0 -4.5949,-2.56939 -4.5949,-5.720175 0,3.150785 -2.06301,5.720175 -4.61365,5.720175 l -43.45457,0 c -2.55064,0 -4.61365,2.550635 -4.61365,5.70142"
-         id="path2999"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 567.72268,85.652588 c 0,-1.856712 1.51913,-3.357086 3.37584,-3.357086 l 17.02925,0 c 1.85671,0 3.35708,1.500374 3.35708,3.357086 l 0,13.4471 c 0,1.856712 -1.50037,3.357082 -3.35708,3.357082 l -17.02925,0 c -1.85671,0 -3.37584,-1.50037 -3.37584,-3.357082 z"
-         id="path3001"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 535.6897,85.652588 c 0,-1.856712 1.50038,-3.357086 3.35709,-3.357086 l 17.048,0 c 1.85671,0 3.35708,1.500374 3.35708,3.357086 l 0,13.4471 c 0,1.856712 -1.50037,3.357082 -3.35708,3.357082 l -17.048,0 c -1.85671,0 -3.35709,-1.50037 -3.35709,-3.357082 z"
-         id="path3003"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 451.61251,85.652588 c 0,-1.856712 1.50037,-3.357086 3.35708,-3.357086 l 17.02925,0 c 1.85671,0 3.35708,1.500374 3.35708,3.357086 l 0,13.4471 c 0,1.856712 -1.50037,3.357082 -3.35708,3.357082 l -17.02925,0 c -1.85671,0 -3.35708,-1.50037 -3.35708,-3.357082 z"
-         id="path3005"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 414.2532,85.652588 c 0,-1.856712 1.50037,-3.357086 3.35709,-3.357086 l 17.04799,0 c 1.85672,0 3.35709,1.500374 3.35709,3.357086 l 0,13.4471 c 0,1.856712 -1.50037,3.357082 -3.35709,3.357082 l -17.04799,0 c -1.85672,0 -3.35709,-1.50037 -3.35709,-3.357082 z"
-         id="path3007"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 323.143,85.652588 c 0,-1.856712 1.50038,-3.357086 3.35709,-3.357086 l 16.8792,0 c 1.85672,0 3.35709,1.500374 3.35709,3.357086 l 0,13.4471 c 0,1.856712 -1.50037,3.357082 -3.35709,3.357082 l -16.8792,0 c -1.85671,0 -3.35709,-1.50037 -3.35709,-3.357082 z"
-         id="path3009"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 297.18654,85.652588 c 0,-1.856712 1.51913,-3.357086 3.37584,-3.357086 l 17.02924,0 c 1.85671,0 3.35709,1.500374 3.35709,3.357086 l 0,13.4471 c 0,1.856712 -1.50038,3.357082 -3.35709,3.357082 l -17.02924,0 c -1.85671,0 -3.37584,-1.50037 -3.37584,-3.357082 z"
-         id="path3011"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 271.24882,85.652588 c 0,-1.856712 1.50038,-3.357086 3.35709,-3.357086 l 17.03862,0 c 1.85671,0 3.35709,1.500374 3.35709,3.357086 l 0,13.437723 c 0,1.856709 -1.50038,3.366459 -3.35709,3.366459 l -17.03862,0 c -1.85671,0 -3.35709,-1.50975 -3.35709,-3.366459 z"
-         id="path3013"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 242.48853,85.652588 c 0,-1.856712 1.50976,-3.357086 3.35709,-3.357086 l 17.03862,0 c 1.85671,0 3.35709,1.500374 3.35709,3.357086 l 0,13.437723 c 0,1.856709 -1.50038,3.366459 -3.35709,3.366459 l -17.03862,0 c -1.84733,0 -3.35709,-1.50975 -3.35709,-3.366459 z"
-         id="path3015"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 215.13485,85.652588 c 0,-1.856712 1.50975,-3.357086 3.36646,-3.357086 l 16.8792,0 c 1.85672,0 3.35709,1.500374 3.35709,3.357086 l 0,13.437723 c 0,1.856709 -1.50037,3.366459 -3.35709,3.366459 l -16.8792,0 c -1.85671,0 -3.36646,-1.50975 -3.36646,-3.366459 z"
-         id="path3017"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 348.93068,85.652588 c 0,-1.856712 1.50037,-3.357086 3.35708,-3.357086 l 17.02925,0 c 1.85671,0 3.35708,1.500374 3.35708,3.357086 l 0,13.4471 c 0,1.856712 -1.50037,3.357082 -3.35708,3.357082 l -17.02925,0 c -1.85671,0 -3.35708,-1.50037 -3.35708,-3.357082 z"
-         id="path3019"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 114.95676,85.652588 c 0,-1.856712 1.50975,-3.357086 3.35709,-3.357086 l 16.8792,0 c 1.85672,0 3.36647,1.500374 3.36647,3.357086 l 0,13.437723 c 0,1.856709 -1.50975,3.366459 -3.36647,3.366459 l -16.8792,0 c -1.84734,0 -3.35709,-1.50975 -3.35709,-3.366459 z"
-         id="path3021"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 64.628601,85.652588 c 0,-1.856712 1.509751,-3.357086 3.366463,-3.357086 l 16.879205,0 c 1.856713,0 3.357087,1.500374 3.357087,3.357086 l 0,13.437723 c 0,1.856709 -1.500374,3.366459 -3.357087,3.366459 l -16.879205,0 c -1.856712,0 -3.366463,-1.50975 -3.366463,-3.366459 z"
-         id="path3023"
-         style="fill:#a6a6a6;fill-opacity:1;fill-rule:evenodd;stroke:none" />
-      <path
-         d="m 483.17662,70.892661 c 0,-3.150785 -2.06301,-5.70142 -4.59489,-5.70142 l -43.47333,0 c -2.53188,0 -4.5949,-2.56939 -4.5949,-5.720175 0,3.150785 -2.06301,5.720175 -4.61365,5.720175 l -43.45457,0 c -2.55064,0 -4.61365,2.550635 -4.61365,5.70142"
-         id="path3025"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 374.71835,70.892661 c 0,-3.150785 -2.06301,-5.70142 -4.61365,-5.70142 l -43.45457,0 c -2.55064,0 -4.61365,-2.56939 -4.61365,-5.720175 0,3.150785 -2.04426,5.720175 -4.5949,5.720175 l -43.47333,0 c -2.53188,0 -4.59489,2.550635 -4.59489,5.70142"
-         id="path3027"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 267.64793,70.883284 c 0,-3.150785 -2.05364,-5.701421 -4.5949,-5.701421 l -43.38893,0 c -2.54126,0 -4.60427,-2.560012 -4.60427,-5.710797 0,3.150785 -2.06302,5.710797 -4.60428,5.710797 l -43.37955,0 c -2.55064,0 -4.60427,2.550636 -4.60427,5.701421"
-         id="path3029"
-         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 160.59626,70.883284 c 0,-3.150785 -2.06302,-5.701421 -4.60427,-5.701421 l -43.46396,0 c -2.54125,0 -4.60427,-2.560012 -4.60427,-5.710797 0,3.150785 -2.06301,5.710797 -4.60427,5.710797 l -43.463953,0 c -2.541259,0 -4.604273,2.550636 -4.604273,5.701421"
-         id="path3031"
-         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 591.48485,105.5888 c 0,3.15079 -2.06301,5.70143 -4.59489,5.70143 l -63.09072,0 c -2.53188,0 -4.59489,2.55063 -4.59489,5.70142 0,-3.15079 -2.06302,-5.70142 -4.5949,-5.70142 l -63.09072,0 c -2.53188,0 -4.59489,-2.55064 -4.59489,-5.70143"
-         id="path3033"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 441.29744,105.5888 c 0,3.15079 -2.06302,5.70143 -4.61365,5.70143 l -49.86867,0 c -2.53189,0 -4.5949,2.55063 -4.5949,5.70142 0,-3.15079 -2.06301,-5.70142 -4.61365,-5.70142 l -49.86867,0 c -2.53188,0 -4.5949,-2.55064 -4.5949,-5.70143"
-         id="path3035"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 320.94871,105.5888 c 0,3.35709 -2.1943,6.09527 -4.91373,6.09527 l -29.38857,0 c -2.71943,0 -4.93248,2.71943 -4.93248,6.09527 0,-3.37584 -2.19429,-6.09527 -4.91372,-6.09527 l -29.38857,0 c -2.71943,0 -4.91373,-2.73818 -4.91373,-6.09527"
-         id="path3037"
-         style="fill:none;stroke:#000000;stroke-width:1.25656307px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 238.89702,105.57943 c 0,3.36646 -2.20368,6.09527 -4.92311,6.09527 l -77.29738,0 c -2.71005,0 -4.91372,2.7288 -4.91372,6.09526 0,-3.36646 -2.20368,-6.09526 -4.91373,-6.09526 l -77.297378,0 c -2.719427,0 -4.923101,-2.72881 -4.923101,-6.09527"
-         id="path3039"
-         style="fill:none;stroke:#000000;stroke-width:1.24718571px;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 55.879546,75.103085 0,1.247186 -1.247186,0 0,-1.247186 1.247186,0 z m 0,2.503749 0,1.247186 -1.247186,0 0,-1.247186 1.247186,0 z m 0,2.494371 0,1.256563 -1.247186,0 0,-1.256563 1.247186,0 z m 0,2.503749 0,1.247186 -1.247186,0 0,-1.247186 1.247186,0 z m 0,2.503749 0,1.247185 -1.247186,0 0,-1.247185 1.247186,0 z m 0,2.494371 0,1.256563 -1.247186,0 0,-1.256563 1.247186,0 z m 0,2.503749 0,1.247186 -1.247186,0 0,-1.247186 1.247186,0 z m 0,2.503749 0,1.247185 -1.247186,0 0,-1.247185 1.247186,0 z m 0,2.494371 0,1.256563 -1.247186,0 0,-1.256563 1.247186,0 z m 0,2.503749 0,1.247185 -1.247186,0 0,-1.247185 1.247186,0 z m 0,2.503748 0,1.24719 -1.247186,0 0,-1.24719 1.247186,0 z m 0,2.49437 0,1.25656 -1.247186,0 0,-1.25656 1.247186,0 z m 0,2.50375 0,1.24719 -1.247186,0 0,-1.24719 1.247186,0 z m 0,2.50375 0,1.24719 -1.247186,0 0,-1.24719 1.247186,0 z"
-         id="path3041"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 163.09063,75.103085 0,1.247186 -1.24719,0 0,-1.247186 1.24719,0 z m 0,2.503749 0,1.247186 -1.24719,0 0,-1.247186 1.24719,0 z m 0,2.494371 0,1.256563 -1.24719,0 0,-1.256563 1.24719,0 z m 0,2.503749 0,1.247186 -1.24719,0 0,-1.247186 1.24719,0 z m 0,2.503749 0,1.247185 -1.24719,0 0,-1.247185 1.24719,0 z m 0,2.494371 0,1.256563 -1.24719,0 0,-1.256563 1.24719,0 z m 0,2.503749 0,1.247186 -1.24719,0 0,-1.247186 1.24719,0 z m 0,2.503749 0,1.247185 -1.24719,0 0,-1.247185 1.24719,0 z m 0,2.494371 0,1.256563 -1.24719,0 0,-1.256563 1.24719,0 z m 0,2.503749 0,1.247185 -1.24719,0 0,-1.247185 1.24719,0 z m 0,2.503748 0,1.24719 -1.24719,0 0,-1.24719 1.24719,0 z m 0,2.49437 0,1.25656 -1.24719,0 0,-1.25656 1.24719,0 z m 0,2.50375 0,1.24719 -1.24719,0 0,-1.24719 1.24719,0 z m 0,2.50375 0,1.24719 -1.24719,0 0,-1.24719 1.24719,0 z"
-         id="path3043"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 269.99226,75.103085 0,1.247186 -1.24718,0 0,-1.247186 1.24718,0 z m 0,2.503749 0,1.247186 -1.24718,0 0,-1.247186 1.24718,0 z m 0,2.494371 0,1.256563 -1.24718,0 0,-1.256563 1.24718,0 z m 0,2.503749 0,1.247186 -1.24718,0 0,-1.247186 1.24718,0 z m 0,2.503749 0,1.247185 -1.24718,0 0,-1.247185 1.24718,0 z m 0,2.494371 0,1.256563 -1.24718,0 0,-1.256563 1.24718,0 z m 0,2.503749 0,1.247186 -1.24718,0 0,-1.247186 1.24718,0 z m 0,2.503749 0,1.247185 -1.24718,0 0,-1.247185 1.24718,0 z m 0,2.494371 0,1.256563 -1.24718,0 0,-1.256563 1.24718,0 z m 0,2.503749 0,1.247185 -1.24718,0 0,-1.247185 1.24718,0 z m 0,2.503748 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.49437 0,1.25656 -1.24718,0 0,-1.25656 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z m 0,2.50375 0,1.24719 -1.24718,0 0,-1.24719 1.24718,0 z"
-         id="path3045"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.00937734px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 376.27499,75.112462 0,1.237809 -1.25656,0 0,-1.237809 1.25656,0 z m 0,2.494372 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.494371 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.513126 0,1.237809 -1.25656,0 0,-1.237809 1.25656,0 z m 0,2.494372 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.494371 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.513126 0,1.237809 -1.25656,0 0,-1.237809 1.25656,0 z m 0,2.494372 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.494371 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.513126 0,1.237808 -1.25656,0 0,-1.237808 1.25656,0 z m 0,2.494371 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z"
-         id="path3047"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <path
-         d="m 486.77752,75.112462 0,1.237809 -1.25656,0 0,-1.237809 1.25656,0 z m 0,2.494372 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.494371 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.513126 0,1.237809 -1.25656,0 0,-1.237809 1.25656,0 z m 0,2.494372 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.494371 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.513126 0,1.237809 -1.25656,0 0,-1.237809 1.25656,0 z m 0,2.494372 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.494371 0,1.256563 -1.25656,0 0,-1.256563 1.25656,0 z m 0,2.513126 0,1.237808 -1.25656,0 0,-1.237808 1.25656,0 z m 0,2.494371 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z m 0,2.51313 0,1.23781 -1.25656,0 0,-1.23781 1.25656,0 z m 0,2.49437 0,1.25656 -1.25656,0 0,-1.25656 1.25656,0 z"
-         id="path3049"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.01875467px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-      <text
-         x="277.44809"
-         y="145.27271"
-         id="text3051"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Count(3) Windows</text>
-      <text
-         x="642.14154"
-         y="52.362846"
-         id="text3053"
-         xml:space="preserve"
-         style="font-size:11.2528038px;font-style:italic;font-weight:normal;text-align:start;text-anchor:start;fill:#000000;font-family:Verdana">Event</text>
-      <path
-         d="m 643.81039,54.576096 -59.86491,35.258784 0.63765,1.087771 59.86492,-35.258784 -0.63766,-1.087771 z m -59.75239,33.008223 -3.03825,4.688669 5.58889,-0.375094 -2.55064,-4.313575 z"
-         id="path3055"
-         style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0.03750934px;stroke-linecap:butt;stroke-linejoin:round;stroke-opacity:1;stroke-dasharray:none" />
-    </g>
-  </g>
-</svg>
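
The deleted windows.svg grouped one event stream two ways, into "Time windows" and "Count(3) Windows". A hedged sketch of those two groupings, assuming the Flink 1.x keyed DataStream API (the counts stream of (key, 1) tuples is a hypothetical input, not taken from this commit):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class WindowsSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<Tuple2<String, Integer>> counts = env.fromElements(
                    Tuple2.of("a", 1), Tuple2.of("b", 1), Tuple2.of("a", 1));

            // Time window: one sum per key for each 5-second slice of the stream.
            counts.keyBy(0)
                  .timeWindow(Time.seconds(5))
                  .sum(1)
                  .print();

            // Count window: one sum per key for every 3 elements, matching
            // the "Count(3) Windows" grouping in the figure.
            counts.keyBy(0)
                  .countWindow(3)
                  .sum(1)
                  .print();

            env.execute("windows sketch");
        }
    }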


[35/89] [abbrv] [partial] flink git commit: [FLINK-4317, FLIP-3] [docs] Restructure docs

Posted by se...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/sliding-windows.svg
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/sliding-windows.svg b/docs/apis/streaming/sliding-windows.svg
deleted file mode 100644
index 32c6bf0..0000000
--- a/docs/apis/streaming/sliding-windows.svg
+++ /dev/null
@@ -1,22 +0,0 @@
-[... 22 lines of SVG markup elided: Apache License header plus the sliding-windows drawing (rows of record dots against dashed gridlines and axis arrows); the markup is cut off at the end of this archive chunk ...]
 8125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.62
 5 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l
 -1.203125 0zm16.07814 0l-1.140625 0l0 -7.28125q-0.421875 0.390625 -1.09375 0.796875q-0.65625 0.390625 -1.1875 0.578125l0 -1.109375q0.953125 -0.4375 1.671875 -1.078125q0.71875 -0.640625 1.015625 -1.25l0.734375 0l0 9.34375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m278.97604 86.0l102.99213 0l0 38.992126l-102.99213 0z" fill-rule="nonzero"></path><path fill="#000000" d="m290.0698 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390778 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34
 375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890
 625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm17.78128 -1.09375l0 1.09375l-6.15625 0q-0.015625 -0.40625 0.140625 -0.796875q0.234375 -0.625 0.75 -1.234375q0.515625 -0.609375 1.5 -1.40625q1.515625 -1.25 2.046875 -1.96875q0.53125 -0.734375 0.53125 -1.375q0 -0.6875 -0.484375 -1.140625q-0.484
 375 -0.46875 -1.265625 -0.46875q-0.828125 0 -1.328125 0.5q-0.484375 0.484375 -0.5 1.359375l-1.171875 -0.125q0.125 -1.3125 0.90625 -2.0q0.78125 -0.6875 2.109375 -0.6875q1.34375 0 2.125 0.75q0.78125 0.734375 0.78125 1.828125q0 0.5625 -0.234375 1.109375q-0.21875 0.53125 -0.75 1.140625q-0.53125 0.59375 -1.765625 1.625q-1.03125 0.859375 -1.328125 1.171875q-0.28125 0.3125 -0.46875 0.625l4.5625 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m372.83102 86.0l102.99213 0l0 38.992126l-102.99213 0z" fill-rule="nonzero"></path><path fill="#000000" d="m383.92477 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390778 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.7343
 75l-1.140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q
 -0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm11.7812805 -2.453125l1.140625 -0.15625q0.203125 0.96875 0.671875 1.40
 625q0.46875 0.421875 1.15625 0.421875q0.796875 0 1.34375 -0.546875q0.5625 -0.5625 0.5625 -1.390625q0 -0.796875 -0.515625 -1.296875q-0.5 -0.515625 -1.296875 -0.515625q-0.328125 0 -0.8125 0.125l0.125 -1.0q0.125 0.015625 0.1875 0.015625q0.734375 0 1.3125 -0.375q0.59375 -0.390625 0.59375 -1.1875q0 -0.625 -0.4375 -1.03125q-0.421875 -0.421875 -1.09375 -0.421875q-0.671875 0 -1.109375 0.421875q-0.4375 0.421875 -0.578125 1.25l-1.140625 -0.203125q0.21875 -1.140625 0.953125 -1.765625q0.75 -0.640625 1.84375 -0.640625q0.765625 0 1.40625 0.328125q0.640625 0.328125 0.984375 0.890625q0.34375 0.5625 0.34375 1.203125q0 0.59375 -0.328125 1.09375q-0.328125 0.5 -0.953125 0.78125q0.8125 0.203125 1.265625 0.796875q0.46875 0.59375 0.46875 1.5q0 1.21875 -0.890625 2.078125q-0.890625 0.84375 -2.25 0.84375q-1.21875 0 -2.03125 -0.734375q-0.8125 -0.734375 -0.921875 -1.890625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m467.55054 86.0l102.99213 0l0 38.992126l-102.99213 0z" fill-rule="n
 onzero"></path><path fill="#000000" d="m478.6443 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390778 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.96109 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671
 875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.4
 0625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm15.4375 0l0 -2.234375l-4.03125 0l0 -1.046875l4.234375 -6.03125l0.9375 0l0 6.03125l1.265625 0l0 1.046875l-1.265625 0l0 2.234375l-1.140625 0zm0 -3.28125l0 -4.1875l-2.921875 4.1875l2.921875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m559.92816 86.0l102.99213 0l0 38.992126l-102.99213 0z" fill-rule="nonzero"></path><path fill="#000000" d="m571.0219 107.8l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.15625 -3.890625l1.109375 0l-2.109375 6.73437
 5l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm8.390747 -7.984375l0 -1.328125l1.140625 0l0 1.328125l-1.140625 0zm0 7.984375l0 -6.734375l1.140625 0l0 6.734375l-1.140625 0zm2.9611206 0l0 -6.734375l1.03125 0l0 0.953125q0.734375 -1.109375 2.140625 -1.109375q0.609375 0 1.109375 0.21875q0.515625 0.21875 0.765625 0.578125q0.265625 0.34375 0.359375 0.84375q0.0625 0.3125 0.0625 1.109375l0 4.140625l-1.140625 0l0 -4.09375q0 -0.703125 -0.140625 -1.046875q-0.125 -0.34375 -0.46875 -0.546875q-0.328125 -0.21875 -0.78125 -0.21875q-0.734375 0 -1.265625 0.46875q-0.53125 0.453125 -0.53125 1.75l0 3.6875l-1.140625 0zm11.787476 0l0 -0.84375q-0.640625 1.0 -1.890625 1.0q-0.796875 0 -1.484375 -0.4375q-0.671875 -0.453125 -1.046875 -1.25q-0.375 -0.796875 -0.375 -1.828125q0 -1.015625 0.34375 -1.828125q0.34375 -0.828125 1.015625 -1.265625q0.671875 -0.4375 1.5 -0.4375q0.609375 0 1.078125 0.265625q0.484375 0.25 0.78125 0.65625l0 -3.34375l1.140625 0l0 9.3125l-1.0625 0zm-3.609375 -
 3.359375q0 1.296875 0.53125 1.9375q0.546875 0.640625 1.296875 0.640625q0.75 0 1.265625 -0.609375q0.53125 -0.625 0.53125 -1.875q0 -1.390625 -0.53125 -2.03125q-0.53125 -0.65625 -1.3125 -0.65625q-0.765625 0 -1.28125 0.625q-0.5 0.625 -0.5 1.96875zm6.2249756 -0.015625q0 -1.875 1.03125 -2.765625q0.875 -0.75 2.125 -0.75q1.390625 0 2.265625 0.90625q0.890625 0.90625 0.890625 2.515625q0 1.296875 -0.390625 2.046875q-0.390625 0.75 -1.140625 1.171875q-0.75 0.40625 -1.625 0.40625q-1.421875 0 -2.296875 -0.90625q-0.859375 -0.90625 -0.859375 -2.625zm1.171875 0q0 1.296875 0.5625 1.953125q0.5625 0.640625 1.421875 0.640625q0.84375 0 1.40625 -0.640625q0.578125 -0.65625 0.578125 -1.984375q0 -1.25 -0.578125 -1.890625q-0.5625 -0.65625 -1.40625 -0.65625q-0.859375 0 -1.421875 0.640625q-0.5625 0.640625 -0.5625 1.9375zm7.8968506 3.375l-2.0625 -6.734375l1.1875 0l1.078125 3.890625l0.390625 1.4375q0.03125 -0.109375 0.359375 -1.390625l1.0625 -3.9375l1.171875 0l1.015625 3.90625l0.34375 1.28125l0.375 -1.296875l1.156
 25 -3.890625l1.109375 0l-2.109375 6.734375l-1.171875 0l-1.078125 -4.03125l-0.265625 -1.15625l-1.359375 5.1875l-1.203125 0zm11.78125 -2.4375l1.1875 -0.109375q0.140625 0.890625 0.625 1.328125q0.484375 0.4375 1.171875 0.4375q0.828125 0 1.390625 -0.625q0.578125 -0.625 0.578125 -1.640625q0 -0.984375 -0.546875 -1.546875q-0.546875 -0.5625 -1.4375 -0.5625q-0.5625 0 -1.015625 0.25q-0.4375 0.25 -0.6875 0.640625l-1.0625 -0.140625l0.890625 -4.765625l4.625 0l0 1.078125l-3.703125 0l-0.5 2.5q0.828125 -0.578125 1.75 -0.578125q1.21875 0 2.046875 0.84375q0.84375 0.84375 0.84375 2.171875q0 1.265625 -0.734375 2.1875q-0.890625 1.125 -2.4375 1.125q-1.265625 0 -2.078125 -0.703125q-0.796875 -0.71875 -0.90625 -1.890625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m277.40027 412.37534l72.0 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m280.82733 412.37534l65.14584 0" fill-rule="evenodd"></path><path fill="#0
 00000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m280.82736 412.37534l1.1245728 -1.1246033l-3.0897827 1.1246033l3.0897827 1.1245728z" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m345.97318 412.37534l-1.1245728 1.1245728l3.0897522 -1.1245728l-3.0897522 -1.1246033z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m166.29134 504.00787l53.70079 -130.89764" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m166.29134 504.00787l51.423477 -125.34662" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m219.24295 379.28818l0.19430542 -4.8254395l-3.2505798 3.5715942z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m104.59055 504.00787l123.40157 0l0 42.992126l-123.40157 0z" fill-rule="nonzero"></path><path fill="#000000" d="m116.66868 530.9278
 6l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.660439 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.129196 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.766342 0l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.67187
 5q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm8.641342 0q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125z
 m11.110092 4.921875l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm16.15625 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 
 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.2541962 0l0 -1.359375l6.265625 -7.1875q-1.0625 0.046875 -1.875 0.046875l-4.015625 0l0 -1.359375l8.046875 0l0 1.109375l-5.34375 6.25l-1.015625 1.140625q1.109375 -0.078125 2.09375 -0.078125l4.5625 0l0 1.4375l-8.71875 0zm16.953125 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.79
 6875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m176.99213 352.62466l178.01575 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m180.4192 352.62466l171.16158 0" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m180.41922 352.62466l1.1245728 -1.1245728l-3.0897675 1.1245728l3.0897675 1.1246033z" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m351.58078 352.62466l-1.1245728 1.1246033l3.0897522 -1.1246033l-3.0897522 -1.1245728z" fill-rul
 e="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m326.58298 510.63254l-0.9133911 -88.75589" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m326.58298 510.63254l-0.85162354 -82.756226" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m327.383 427.8593l-1.6983643 -4.5208435l-1.60495 4.55484z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m251.70108 510.63254l149.7638 0l0 42.992157l-149.7638 0z" fill-rule="nonzero"></path><path fill="#000000" d="m263.7792 537.55255l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.660461 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0
  -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.129181 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.766357 0l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953
 125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm8.641327 0q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm11.110107 4.921875l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm16.15625 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 
 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -
 2.359375zm9.96875 2.9375l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm4.191681 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm10.519836 0l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm16.016327 1.75l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.48
 4375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m175.0 130.0l185.19684 0l0 152.0l-185.19684 0z" fill-rule="nonzero"></path><path stroke="#ff0000" stroke-width="4.0" stroke-linejoin="round" stroke-linecap="butt" d="m175.0 130.0l185.19684 0l0 152.0l-185.19684 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m264.7979 133.24672l190.01575 0l0 145.51183l-190.01575 0z" fill-rule="nonzero"></path><path stroke="#4a86e8" stroke-width="4.0" stroke-linejoin="round" stroke-linecap="butt" d="m264.7979 133.246
 72l190.01575 0l0 145.51183l-190.01575 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m362.6378 129.99738l185.19687 0l0 152.0l-185.19687 0z" fill-rule="nonzero"></path><path stroke="#00ff00" stroke-width="4.0" stroke-linejoin="round" stroke-linecap="butt" d="m362.6378 129.99738l185.19687 0l0 152.0l-185.19687 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m457.9265 133.24672l180.34644 0l0 145.51183l-180.34644 0z" fill-rule="nonzero"></path><path stroke="#ff00ff" stroke-width="4.0" stroke-linejoin="round" stroke-linecap="butt" d="m457.9265 133.24672l180.34644 0l0 145.51183l-180.34644 0z" fill-rule="nonzero"></path></g></svg>
-

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/state.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/state.md b/docs/apis/streaming/state.md
deleted file mode 100644
index f32d504..0000000
--- a/docs/apis/streaming/state.md
+++ /dev/null
@@ -1,295 +0,0 @@
----
-title: "Working with State"
-
-sub-nav-parent: fault_tolerance
-sub-nav-group: streaming
-sub-nav-pos: 1
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-All transformations in Flink may look like functions (in the functional processing terminology), but
-are in fact stateful operators. You can make *every* transformation (`map`, `filter`, etc.) stateful
-by using Flink's state interface or checkpointing instance fields of your function. You can register
-any instance field
-as ***managed*** state by implementing an interface. In this case, and also in the case of using
-Flink's native state interface, Flink will automatically take consistent snapshots of your state
-periodically, and restore its value in the case of a failure.
-
-The end effect is that updates to any form of state are the same under failure-free execution and
-execution under failures.
-
-First, we look at how to make instance fields consistent under failures, and then we look at
-Flink's state interface.
-
-By default, state checkpoints are stored in memory at the JobManager. For proper persistence of large
-state, Flink supports storing the checkpoints on file systems (HDFS, S3, or any mounted POSIX file system),
-which can be configured in the `flink-conf.yaml` or via `StreamExecutionEnvironment.setStateBackend(...)`.
-See [state backends]({{ site.baseurl }}/apis/streaming/state_backends.html) for information
-about the available state backends and how to configure them.
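-
-For example, a file system backend can be set like this (a sketch; the HDFS path is illustrative):
-
-{% highlight java %}
-// store checkpoint state on a file system instead of the JobManager heap
-env.setStateBackend(new FsStateBackend("hdfs://namenode:40010/flink/checkpoints"));
-{% endhighlight %}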
-
-* ToC
-{:toc}
-
-## Using the Key/Value State Interface
-
-The Key/Value state interface provides access to different types of state that are all scoped to
-the key of the current input element. This means that this type of state can only be used
-on a `KeyedStream`, which can be created via `stream.keyBy(...)`.
-
-Now, we will first look at the different types of state available and then we will see
-how they can be used in a program. The available state primitives are:
-
-* `ValueState<T>`: This keeps a value that can be updated and
-retrieved (scoped to the key of the input element, as mentioned above, so there will possibly be one value
-for each key that the operation sees). The value can be set using `update(T)` and retrieved using
-`T value()`.
-
-* `ListState<T>`: This keeps a list of elements. You can append elements and retrieve an `Iterable`
-over all currently stored elements. Elements are added using `add(T)`; the Iterable can
-be retrieved using `Iterable<T> get()`.
-
-* `ReducingState<T>`: This keeps a single value that represents the aggregation of all values
-added to the state. The interface is the same as for `ListState` but elements added using
-`add(T)` are reduced to an aggregate using a specified `ReduceFunction`.
-
-All types of state also have a method `clear()` that clears the state for the currently
-active key (i.e. the key of the input element).
-
-It is important to keep in mind that these state objects are only used for interfacing
-with state. The state is not necessarily stored inside them but might reside on disk or elsewhere.
-The second thing to keep in mind is that the value you get from the state
-depends on the key of the input element. So the value you get in one invocation of your
-user function can be different from the one you get in another invocation if the key of
-the element is different.
-
-To get a state handle you have to create a `StateDescriptor`. This holds the name of the state
-(as we will see later, you can create several states, and they must have unique names so
-that you can reference them), the type of the values that the state holds, and possibly
-a user-specified function, such as a `ReduceFunction`. Depending on what type of state you
-want to retrieve you create one of `ValueStateDescriptor`, `ListStateDescriptor` or
-`ReducingStateDescriptor`.
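-
-For illustration, a `ReducingStateDescriptor` could be created like this (a sketch; the state name "sum" and the summing function are made up for this example):
-
-{% highlight java %}
-// hypothetical: a descriptor for a ReducingState keeping a per-key running sum
-ReducingStateDescriptor<Long> sumDescriptor =
-        new ReducingStateDescriptor<>(
-                "sum",            // unique name used to reference the state
-                (a, b) -> a + b,  // ReduceFunction applied when add(T) is called
-                Long.class);      // type of the values held by the state
-{% endhighlight %}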
-
-State is accessed using the `RuntimeContext`, so it is only possible in *rich functions*.
-Please see [here]({{ site.baseurl }}/apis/common/#specifying-transformation-functions) for
-information about that, but we will also see an example shortly. The `RuntimeContext` that
-is available in a `RichFunction` has these methods for accessing state:
-
-* `ValueState<T> getState(ValueStateDescriptor<T>)`
-* `ReducingState<T> getReducingState(ReducingStateDescriptor<T>)`
-* `ListState<T> getListState(ListStateDescriptor<T>)`
-
-This is an example `FlatMapFunction` that shows how all of the parts fit together:
-
-{% highlight java %}
-public class CountWindowAverage extends RichFlatMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>> {
-
-    /**
-     * The ValueState handle. The first field is the count, the second field a running sum.
-     */
-    private transient ValueState<Tuple2<Long, Long>> sum;
-
-    @Override
-    public void flatMap(Tuple2<Long, Long> input, Collector<Tuple2<Long, Long>> out) throws Exception {
-
-        // access the state value
-        Tuple2<Long, Long> currentSum = sum.value();
-
-        // update the count
-        currentSum.f0 += 1;
-
-        // add the second field of the input value
-        currentSum.f1 += input.f1;
-
-        // update the state
-        sum.update(currentSum);
-
-        // if the count reaches 2, emit the average and clear the state
-        if (currentSum.f0 >= 2) {
-            out.collect(new Tuple2<>(input.f0, currentSum.f1 / currentSum.f0));
-            sum.clear();
-        }
-    }
-
-    @Override
-    public void open(Configuration config) {
-        ValueStateDescriptor<Tuple2<Long, Long>> descriptor =
-                new ValueStateDescriptor<>(
-                        "average", // the state name
-                        TypeInformation.of(new TypeHint<Tuple2<Long, Long>>() {}), // type information
-                        Tuple2.of(0L, 0L)); // default value of the state, if nothing was set
-        sum = getRuntimeContext().getState(descriptor);
-    }
-}
-
-// this can be used in a streaming program like this (assuming we have a StreamExecutionEnvironment env)
-env.fromElements(Tuple2.of(1L, 3L), Tuple2.of(1L, 5L), Tuple2.of(1L, 7L), Tuple2.of(1L, 4L), Tuple2.of(1L, 2L))
-        .keyBy(0)
-        .flatMap(new CountWindowAverage())
-        .print();
-
-// the printed output will be (1,4) and (1,5)
-{% endhighlight %}
-
-This example implements a poor man's counting window. We key the tuples by the first field
-(in the example all have the same key `1`). The function stores the count and a running sum in
-a `ValueState`. Once the count reaches 2, it emits the average and clears the state so that
-we start over from `0`. Note that this would keep a different state value for each different input
-key if we had tuples with different values in the first field.
-
-### State in the Scala DataStream API
-
-In addition to the interface described above, the Scala API has shortcuts for stateful
-`map()` or `flatMap()` functions with a single `ValueState` on `KeyedStream`. The user function
-gets the current value of the `ValueState` in an `Option` and must return an updated value that
-will be used to update the state.
-
-{% highlight scala %}
-val stream: DataStream[(String, Int)] = ...
-
-val counts: DataStream[(String, Int)] = stream
-  .keyBy(_._1)
-  .mapWithState((in: (String, Int), count: Option[Int]) =>
-    count match {
-      case Some(c) => ( (in._1, c), Some(c + in._2) )
-      case None => ( (in._1, 0), Some(in._2) )
-    })
-{% endhighlight %}
-
-## Checkpointing Instance Fields
-
-Instance fields can be checkpointed by using the `Checkpointed` interface.
-
-When the user-defined function implements the `Checkpointed` interface, the `snapshotState(...)` and `restoreState(...)`
-methods will be executed to snapshot and restore the function's state.
-
-In addition to that, user functions can also implement the `CheckpointNotifier` interface to receive notifications on
-completed checkpoints via the `notifyCheckpointComplete(long checkpointId)` method.
-Note that the user function is not guaranteed to receive a notification if a failure happens between
-checkpoint completion and notification. Notifications should hence be handled such that a notification
-from a later checkpoint subsumes any missing earlier notifications.
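-
-As a sketch, a sink that buffers writes and commits them to an external system only once a checkpoint completes could look as follows (the buffering and the `commitExternally(...)` helper are hypothetical):
-
-{% highlight java %}
-public class CommittingSink extends RichSinkFunction<String>
-        implements CheckpointNotifier {
-
-    // hypothetical buffer of values not yet committed externally
-    private final List<String> pending = new ArrayList<>();
-
-    @Override
-    public void invoke(String value) {
-        pending.add(value);
-    }
-
-    @Override
-    public void notifyCheckpointComplete(long checkpointId) {
-        // a notification from a later checkpoint subsumes missed earlier ones,
-        // so committing everything buffered so far is safe here
-        commitExternally(pending); // hypothetical external, transactional write
-        pending.clear();
-    }
-
-    private void commitExternally(List<String> values) {
-        // placeholder: hand the values to the outside world
-    }
-}
-{% endhighlight %}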
-
-The above example for `ValueState` can be implemented using instance fields like this:
-
-{% highlight java %}
-
-public class CountWindowAverage
-        extends RichFlatMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>>
-        implements Checkpointed<Tuple2<Long, Long>> {
-
-    private Tuple2<Long, Long> sum = null;
-
-    @Override
-    public void flatMap(Tuple2<Long, Long> input, Collector<Tuple2<Long, Long>> out) throws Exception {
-
-        // update the count
-        sum.f0 += 1;
-
-        // add the second field of the input value
-        sum.f1 += input.f1;
-
-        // if the count reaches 2, emit the average and clear the state
-        if (sum.f0 >= 2) {
-            out.collect(new Tuple2<>(input.f0, sum.f1 / sum.f0));
-            sum = Tuple2.of(0L, 0L);
-        }
-    }
-
-    @Override
-    public void open(Configuration config) {
-        if (sum == null) {
-            // only recreate if null
-            // restoreState will be called before open()
-            // so this will already set the sum to the restored value
-            sum = Tuple2.of(0L, 0L);
-        }
-    }
-
-    // regularly persists state during normal operation
-    @Override
-    public Serializable snapshotState(long checkpointId, long checkpointTimestamp) {
-        return sum;
-    }
-
-    // restores state on recovery from failure
-    @Override
-    public void restoreState(Tuple2<Long, Long> state) {
-        sum = state;
-    }
-}
-{% endhighlight %}
-
-## Stateful Source Functions
-
-Stateful sources require a bit more care compared to other operators.
-In order to make the updates to the state and output collection atomic (required for exactly-once semantics
-on failure/recovery), the user is required to get a lock from the source's context.
-
-{% highlight java %}
-public static class CounterSource
-        extends RichParallelSourceFunction<Long>
-        implements Checkpointed<Long> {
-
-    /**  current offset for exactly once semantics */
-    private long offset;
-
-    /** flag for job cancellation */
-    private volatile boolean isRunning = true;
-
-    @Override
-    public void run(SourceContext<Long> ctx) {
-        final Object lock = ctx.getCheckpointLock();
-
-        while (isRunning) {
-            // output and state update are atomic
-            synchronized (lock) {
-                ctx.collect(offset);
-                offset += 1;
-            }
-        }
-    }
-
-    @Override
-    public void cancel() {
-        isRunning = false;
-    }
-
-    @Override
-    public Long snapshotState(long checkpointId, long checkpointTimestamp) {
-        return offset;
-    }
-
-    @Override
-    public void restoreState(Long state) {
-        offset = state;
-    }
-}
-{% endhighlight %}
-
-Some operators might need to know when a checkpoint has been fully acknowledged by Flink in order to communicate that to the outside world. In this case, see the `flink.streaming.api.checkpoint.CheckpointNotifier` interface.
-
-## State Checkpoints in Iterative Jobs
-
-Flink currently only provides processing guarantees for jobs without iterations. Enabling checkpointing on an iterative job causes an exception. In order to force checkpointing on an iterative program the user needs to set a special flag when enabling checkpointing: `env.enableCheckpointing(interval, force = true)`.
-
-Please note that records in flight in the loop edges (and the state changes associated with them) will be lost during failure.
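-
-A sketch of forcing checkpoints on an iterative program (assuming an `enableCheckpointing` overload that also takes an explicit checkpointing mode):
-
-{% highlight java %}
-// the final `true` forces checkpointing despite the iteration in the topology
-env.enableCheckpointing(1000, CheckpointingMode.EXACTLY_ONCE, true);
-{% endhighlight %}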
-
-{% top %}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/state_backends.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/state_backends.md b/docs/apis/streaming/state_backends.md
deleted file mode 100644
index 027148a..0000000
--- a/docs/apis/streaming/state_backends.md
+++ /dev/null
@@ -1,163 +0,0 @@
----
-title:  "State Backends"
-sub-nav-group: streaming
-sub-nav-pos: 2
-sub-nav-parent: fault_tolerance
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Programs written in the [Data Stream API](index.html) often hold state in various forms:
-
-- Windows gather elements or aggregates until they are triggered
-- Transformation functions may use the key/value state interface to store values
-- Transformation functions may implement the `Checkpointed` interface to make their local variables fault tolerant
-
-See also [Working with State](state.html) in the streaming API guide.
-
-When checkpointing is activated, such state is persisted upon checkpoints to guard against data loss and recover consistently.
-How the state is represented internally, and how and where it is persisted upon checkpoints depends on the
-chosen **State Backend**.
-
-* ToC
-{:toc}
-
-## Available State Backends
-
-Out of the box, Flink bundles these state backends:
-
- - *MemoryStateBackend*
- - *FsStateBackend*
- - *RocksDBStateBackend*
-
-If nothing else is configured, the system will use the MemoryStateBackend.
-
-
-### The MemoryStateBackend
-
-The *MemoryStateBackend* holds data internally as objects on the Java heap. Key/value state and window operators hold hash tables
-that store the values, triggers, etc.
-
-Upon checkpoints, this state backend will snapshot the state and send it as part of the checkpoint acknowledgement messages to the
-JobManager (master), which stores it on its heap as well.
-
-Limitations of the MemoryStateBackend:
-
-  - The size of each individual state is by default limited to 5 MB. This value can be increased in the constructor of the MemoryStateBackend (see the sketch after this list).
-  - Irrespective of the configured maximal state size, the state cannot be larger than the Akka frame size (see [Configuration]({{ site.baseurl }}/setup/config.html)).
-  - The aggregate state must fit into the JobManager memory.
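-
-A sketch of raising the per-state size limit (the 10 MB value is illustrative and assumes the constructor takes the maximum state size in bytes):
-
-{% highlight java %}
-// raise the MemoryStateBackend's per-state size limit to 10 MB
-env.setStateBackend(new MemoryStateBackend(10 * 1024 * 1024));
-{% endhighlight %}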
-
-The MemoryStateBackend is encouraged for:
-
-  - Local development and debugging
-  - Jobs that hold little state, such as jobs that consist only of record-at-a-time functions (Map, FlatMap, Filter, ...). The Kafka consumer, for example, requires very little state.
-
-
-### The FsStateBackend
-
-The *FsStateBackend* is configured with a file system URL (type, address, path), such as "hdfs://namenode:40010/flink/checkpoints" or "file:///data/flink/checkpoints".
-
-The FsStateBackend holds in-flight data in the TaskManager's memory. Upon checkpointing, it writes state snapshots into files in the configured file system and directory. Minimal metadata is stored in the JobManager's memory (or, in high-availability mode, in the metadata checkpoint).
-
-The FsStateBackend is encouraged for:
-
-  - Jobs with large state, long windows, large key/value states.
-  - All high-availability setups.
-
-### The RocksDBStateBackend
-
-The *RocksDBStateBackend* is configured with a file system URL (type, address, path), such as "hdfs://namenode:40010/flink/checkpoints" or "file:///data/flink/checkpoints".
-
-The RocksDBStateBackend holds in-flight data in a [RocksDB](http://rocksdb.org) database
-that is (by default) stored in the TaskManager data directories. Upon checkpointing, the whole
-RocksDB database will be checkpointed into the configured file system and directory. Minimal
-metadata is stored in the JobManager's memory (or, in high-availability mode, in the metadata checkpoint).
-
-The RocksDBStateBackend is encouraged for:
-
-  - Jobs with very large state, long windows, large key/value states.
-  - All high-availability setups.
-
-Note that the amount of state that you can keep is only limited by the amount of disk space available.
-This allows keeping very large state, compared to the FsStateBackend, which keeps working state in memory.
-This also means, however, that the maximum throughput that can be achieved will be lower with
-this state backend.
-
-**NOTE:** To use the RocksDBStateBackend you also have to add the correct Maven dependency to your
-project:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-statebackend-rocksdb{{ site.scala_version_suffix }}</artifactId>
-  <version>{{site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-The backend is currently not part of the binary distribution. See
-[here]({{ site.baseurl}}/apis/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
-for an explanation of how to include it for cluster execution.
-
-## Configuring a State Backend
-
-State backends can be configured per job. In addition, you can define a default state backend to be used when the
-job does not explicitly define a state backend.
-
-
-### Setting the Per-job State Backend
-
-The per-job state backend is set on the `StreamExecutionEnvironment` of the job, as shown in the example below:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-{% highlight java %}
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.setStateBackend(new FsStateBackend("hdfs://namenode:40010/flink/checkpoints"));
-{% endhighlight %}
-</div>
-<div data-lang="scala" markdown="1">
-{% highlight scala %}
-val env = StreamExecutionEnvironment.getExecutionEnvironment()
-env.setStateBackend(new FsStateBackend("hdfs://namenode:40010/flink/checkpoints"))
-{% endhighlight %}
-</div>
-</div>
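-
-Assuming the RocksDB dependency described above is on the classpath, the RocksDB backend can be set the same way; a sketch:
-
-{% highlight java %}
-// the constructor takes the checkpoint URI (it may declare a checked IOException)
-env.setStateBackend(new RocksDBStateBackend("hdfs://namenode:40010/flink/checkpoints"));
-{% endhighlight %}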
-
-
-### Setting Default State Backend
-
-A default state backend can be configured in the `flink-conf.yaml`, using the configuration key `state.backend`.
-
-Possible values for the config entry are *jobmanager* (MemoryStateBackend), *filesystem* (FsStateBackend), or the fully
-qualified class name of a state backend factory, such as [FsStateBackendFactory](https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsStateBackendFactory.java).
-
-In the case where the default state backend is set to *filesystem*, the entry `state.backend.fs.checkpointdir` defines the directory where the checkpoint data will be stored.
-
-A sample section in the configuration file could look as follows:
-
-~~~
-# The backend that will be used to store operator state checkpoints
-
-state.backend: filesystem
-
-
-# Directory for storing checkpoints
-
-state.backend.fs.checkpointdir: hdfs://namenode:40010/flink/checkpoints
-~~~

http://git-wip-us.apache.org/repos/asf/flink/blob/844c874b/docs/apis/streaming/storm_compatibility.md
----------------------------------------------------------------------
diff --git a/docs/apis/streaming/storm_compatibility.md b/docs/apis/streaming/storm_compatibility.md
deleted file mode 100644
index 94e0042..0000000
--- a/docs/apis/streaming/storm_compatibility.md
+++ /dev/null
@@ -1,287 +0,0 @@
----
-title: "Storm Compatibility"
-is_beta: true
-sub-nav-group: streaming
-sub-nav-pos: 9
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-[Flink streaming](index.html) is compatible with Apache Storm interfaces and therefore allows
-reusing code that was implemented for Storm.
-
-You can:
-
-- execute a whole Storm `Topology` in Flink.
-- use Storm `Spout`/`Bolt` as source/operator in Flink streaming programs.
-
-This document shows how to use existing Storm code with Flink.
-
-* This will be replaced by the TOC
-{:toc}
-
-# Project Configuration
-
-Support for Storm is contained in the `flink-storm` Maven module.
-The code resides in the `org.apache.flink.storm` package.
-
-Add the following dependency to your `pom.xml` if you want to execute Storm code in Flink.
-
-~~~xml
-<dependency>
-	<groupId>org.apache.flink</groupId>
-	<artifactId>flink-storm{{ site.scala_version_suffix }}</artifactId>
-	<version>{{site.version}}</version>
-</dependency>
-~~~
-
-**Please note**: Do not add `storm-core` as a dependency. It is already included via `flink-storm`.
-
-**Please note**: `flink-storm` is not part of the provided binary Flink distribution.
-Thus, you need to include `flink-storm` classes (and their dependencies) in your program jar (also called uber-jar or fat-jar) that is submitted to Flink's JobManager.
-See *WordCount Storm* within `flink-storm-examples/pom.xml` for an example of how to package a jar correctly.
-
-If you want to avoid large uber-jars, you can manually copy `storm-core-0.9.4.jar`, `json-simple-1.1.jar` and `flink-storm-{{site.version}}.jar` into Flink's `lib/` folder of each cluster node (*before* the cluster is started).
-For this case, it is sufficient to include only your own Spout and Bolt classes (and their internal dependencies) into the program jar.
-
-# Execute Storm Topologies
-
-Flink provides a Storm compatible API (`org.apache.flink.storm.api`) that offers replacements for the following classes:
-
-- `StormSubmitter` replaced by `FlinkSubmitter`
-- `NimbusClient` and `Client` replaced by `FlinkClient`
-- `LocalCluster` replaced by `FlinkLocalCluster`
-
-In order to submit a Storm topology to Flink, it is sufficient to replace the used Storm classes with their Flink replacements in the Storm *client code that assembles* the topology.
-The actual runtime code, i.e., Spouts and Bolts, can be used *unmodified*.
-If a topology is executed in a remote cluster, the parameters `nimbus.host` and `nimbus.thrift.port` are used as `jobmanager.rpc.address` and `jobmanager.rpc.port`, respectively. If a parameter is not specified, the value is taken from `flink-conf.yaml`.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-TopologyBuilder builder = new TopologyBuilder(); // the Storm topology builder
-
-// actual topology assembling code and used Spouts/Bolts can be used as-is
-builder.setSpout("source", new FileSpout(inputFilePath));
-builder.setBolt("tokenizer", new BoltTokenizer()).shuffleGrouping("source");
-builder.setBolt("counter", new BoltCounter()).fieldsGrouping("tokenizer", new Fields("word"));
-builder.setBolt("sink", new BoltFileSink(outputFilePath)).shuffleGrouping("counter");
-
-Config conf = new Config();
-if(runLocal) { // submit to test cluster
-	// replaces: LocalCluster cluster = new LocalCluster();
-	FlinkLocalCluster cluster = new FlinkLocalCluster();
-	cluster.submitTopology("WordCount", conf, FlinkTopology.createTopology(builder));
-} else { // submit to remote cluster
-	// optional
-	// conf.put(Config.NIMBUS_HOST, "remoteHost");
-	// conf.put(Config.NIMBUS_THRIFT_PORT, 6123);
-	// replaces: StormSubmitter.submitTopology(topologyId, conf, builder.createTopology());
-	FlinkSubmitter.submitTopology("WordCount", conf, FlinkTopology.createTopology(builder));
-}
-~~~
-</div>
-</div>
-
-# Embed Storm Operators in Flink Streaming Programs
-
-As an alternative, Spouts and Bolts can be embedded into regular streaming programs.
-The Storm compatibility layer offers a wrapper class for each, namely `SpoutWrapper` and `BoltWrapper` (`org.apache.flink.storm.wrappers`).
-
-By default, both wrappers convert Storm output tuples to Flink's [Tuple]({{site.baseurl}}/apis/batch/index.html#tuples-and-case-classes) types (i.e., `Tuple0` to `Tuple25`, according to the number of fields of the Storm tuples).
-For single-field output tuples, a conversion to the field's data type is also possible (e.g., `String` instead of `Tuple1<String>`).
-
-Because Flink cannot infer the output field types of Storm operators, it is required to specify the output type manually.
-In order to obtain the correct `TypeInformation` object, Flink's `TypeExtractor` can be used, as the examples below show.
-
-## Embed Spouts
-
-In order to use a Spout as a Flink source, use `StreamExecutionEnvironment.addSource(SourceFunction, TypeInformation)`.
-The Spout object is handed to the constructor of `SpoutWrapper<OUT>`, which serves as the first argument to `addSource(...)`.
-The generic type declaration `OUT` specifies the type of the source's output stream.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-// stream has `raw` type (single field output streams only)
-DataStream<String> rawInput = env.addSource(
-	new SpoutWrapper<String>(new FileSpout(localFilePath), new String[] { Utils.DEFAULT_STREAM_ID }), // emit default output stream as raw type
-	TypeExtractor.getForClass(String.class)); // output type
-
-// process data stream
-[...]
-~~~
-</div>
-</div>
-
-If a Spout emits a finite number of tuples, `SpoutWrapper` can be configured to terminate automatically by setting the `numberOfInvocations` parameter in its constructor.
-This allows the Flink program to shut down automatically after all data is processed.
-By default, the program will run until it is [canceled]({{site.baseurl}}/apis/cli.html) manually.
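-
-For example, a source can be limited to a fixed number of `nextTuple()` calls (a minimal sketch, assuming the `SpoutWrapper` constructor overload that takes `numberOfInvocations` as its last argument):
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-// invoke the Spout's nextTuple() method exactly 10 times, then terminate the source
-DataStream<String> finiteInput = env.addSource(
-	new SpoutWrapper<String>(
-		new FileSpout(localFilePath),
-		new String[] { Utils.DEFAULT_STREAM_ID }, // emit default output stream as raw type
-		10), // numberOfInvocations
-	TypeExtractor.getForClass(String.class)); // output type
-~~~
-</div>
-</div>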
-
-
-## Embed Bolts
-
-In order to use a Bolt as a Flink operator, use `DataStream.transform(String, TypeInformation, OneInputStreamOperator)`.
-The Bolt object is handed to the constructor of `BoltWrapper<IN,OUT>`, which serves as the last argument to `transform(...)`.
-The generic type declarations `IN` and `OUT` specify the types of the operator's input and output stream, respectively.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-DataStream<String> text = env.readTextFile(localFilePath);
-
-DataStream<Tuple2<String, Integer>> counts = text.transform(
-	"tokenizer", // operator name
-	TypeExtractor.getForObject(new Tuple2<String, Integer>("", 0)), // output type
-	new BoltWrapper<String, Tuple2<String, Integer>>(new BoltTokenizer())); // Bolt operator
-
-// do further processing
-[...]
-~~~
-</div>
-</div>
-
-### Named Attribute Access for Embedded Bolts
-
-Bolts can access input tuple fields by name (in addition to access by index).
-To use this feature with embedded Bolts, you need to have either a
-
- 1. [POJO]({{site.baseurl}}/apis/batch/index.html#pojos) type input stream or
- 2. [Tuple]({{site.baseurl}}/apis/batch/index.html#tuples-and-case-classes) type input stream and specify the input schema (i.e., the name-to-index mapping)
-
-For POJO input types, Flink accesses the fields via reflection.
-In this case, Flink expects either a corresponding public member variable or a public getter method.
-For example, if a Bolt accesses a field via the name `sentence` (e.g., `String s = input.getStringByField("sentence");`), the input POJO class must have a member variable `public String sentence;` or a method `public String getSentence() { ... }` (pay attention to camel-case naming).
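-
-For illustration, a matching POJO input type might look as follows (a sketch; the class name `Sentence` is made up for this example):
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-// public field matches the name used in input.getStringByField("sentence")
-public class Sentence {
-	public String sentence;
-
-	// Flink POJOs require a public no-argument constructor
-	public Sentence() {}
-
-	public Sentence(String sentence) {
-		this.sentence = sentence;
-	}
-}
-~~~
-</div>
-</div>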
-
-For `Tuple` input types, it is required to specify the input schema using Storm's `Fields` class.
-In this case, the constructor of `BoltWrapper` takes an additional argument: `new BoltWrapper<Tuple1<String>, ...>(..., new Fields("sentence"))`.
-The input type is `Tuple1<String>`, and `Fields("sentence")` specifies that `input.getStringByField("sentence")` is equivalent to `input.getString(0)`.
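-
-Putting both pieces together, an embedded Bolt with a declared input schema might be wired up like this (a sketch; `BoltTokenizerByName` stands for any Bolt that calls `input.getStringByField("sentence")`):
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-DataStream<Tuple1<String>> sentences = ...; // each tuple carries one sentence
-
-DataStream<Tuple2<String, Integer>> counts = sentences.transform(
-	"tokenizer", // operator name
-	TypeExtractor.getForObject(new Tuple2<String, Integer>("", 0)), // output type
-	new BoltWrapper<Tuple1<String>, Tuple2<String, Integer>>(
-		new BoltTokenizerByName(), // hypothetical Bolt that accesses fields by name
-		new Fields("sentence"))); // input schema: field 0 is named "sentence"
-~~~
-</div>
-</div>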
-
-See [BoltTokenizerWordCountPojo](https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/wordcount/BoltTokenizerWordCountPojo.java) and [BoltTokenizerWordCountWithNames](https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/wordcount/BoltTokenizerWordCountWithNames.java) for examples.
-
-## Configuring Spouts and Bolts
-
-In Storm, Spouts and Bolts can be configured with a globally distributed `Map` object that is given to the `submitTopology(...)` method of `LocalCluster` or `StormSubmitter`.
-This `Map` is provided by the user next to the topology and gets forwarded as a parameter to the `Spout.open(...)` and `Bolt.prepare(...)` calls.
-If a whole topology is executed in Flink using `FlinkTopologyBuilder` etc., no special attention is required &ndash; it works as in regular Storm.
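-
-For example, when submitting a whole topology, custom entries can simply be put into the regular Storm `Config` object (a sketch; the key `words.file` is made up for this example):
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-Config conf = new Config();
-conf.put("words.file", "/path/to/input"); // forwarded to Spout.open(...) and Bolt.prepare(...)
-
-FlinkSubmitter.submitTopology("WordCount", conf, FlinkTopology.createTopology(builder));
-~~~
-</div>
-</div>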
-
-For embedded usage, Flink's configuration mechanism must be used.
-A global configuration can be set in a `StreamExecutionEnvironment` via `.getConfig().setGlobalJobParameters(...)`.
-Flink's regular `Configuration` class can be used to configure Spouts and Bolts.
-However, `Configuration` does not support arbitrary key data types as Storm does (only `String` keys are allowed).
-Thus, Flink additionally provides the `StormConfig` class, which can be used like a raw `Map` to provide full compatibility with Storm.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-StormConfig config = new StormConfig();
-// set config values
-[...]
-
-// set global Storm configuration
-env.getConfig().setGlobalJobParameters(config);
-
-// assemble program with embedded Spouts and/or Bolts
-[...]
-~~~
-</div>
-</div>
-
-## Multiple Output Streams
-
-Flink can also handle the declaration of multiple output streams for Spouts and Bolts.
-If a whole topology is executed in Flink using `FlinkTopologyBuilder` etc., no special attention is required &ndash; it works as in regular Storm.
-
-For embedded usage, the output stream will be of data type `SplitStreamType<T>` and must be split using `DataStream.split(...)` and `SplitStream.select(...)`.
-Flink already provides the predefined output selector `StormStreamSelector<T>` for `.split(...)`.
-Furthermore, the wrapper type `SplitStreamType<T>` can be removed using `SplitStreamMapper<T>`.
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-[...]
-
-// get DataStream from Spout or Bolt which declares two output streams s1 and s2 with output type SomeType
-DataStream<SplitStreamType<SomeType>> multiStream = ...
-
-SplitStream<SplitStreamType<SomeType>> splitStream = multiStream.split(new StormStreamSelector<SomeType>());
-
-// remove SplitStreamType using SplitStreamMapper to get data stream of type SomeType
-DataStream<SomeType> s1 = splitStream.select("s1").map(new SplitStreamMapper<SomeType>()).returns(SomeType.class);
-DataStream<SomeType> s2 = splitStream.select("s2").map(new SplitStreamMapper<SomeType>()).returns(SomeType.class);
-
-// do further processing on s1 and s2
-[...]
-~~~
-</div>
-</div>
-
-See [SpoutSplitExample.java](https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/split/SpoutSplitExample.java) for a full example.
-
-# Flink Extensions
-
-## Finite Spouts
-
-In Flink, streaming sources can be finite, i.e., emit a finite number of records and stop after emitting the last record. However, Spouts usually emit infinite streams.
-The bridge between the two approaches is the `FiniteSpout` interface which, in addition to `IRichSpout`, contains a `reachedEnd()` method where the user can specify a stopping condition.
-The user can create a finite Spout by implementing this interface instead of (or in addition to) `IRichSpout` and implementing the `reachedEnd()` method.
-In contrast to a `SpoutWrapper` that is configured to emit a finite number of tuples, the `FiniteSpout` interface allows implementing more complex termination criteria.
-
-Although finite Spouts are not necessary to embed Spouts into a Flink streaming program or to submit a whole Storm topology to Flink, there are cases where they may come in handy:
-
- * to make a native Spout behave the same way as a finite Flink source, with minimal modifications
 * to process a stream only for some time, after which the Spout can stop automatically
 * to read a file into a stream
 * for testing purposes
-
-An example of a finite Spout that emits records for 10 seconds only:
-
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-~~~java
-public class TimedFiniteSpout extends BaseRichSpout implements FiniteSpout {
	[...] // implement open(), nextTuple(), ...

	private long startTime = System.currentTimeMillis();

	@Override
	public boolean reachedEnd() {
		return System.currentTimeMillis() - startTime > 10000L;
	}
}
-~~~
-</div>
-</div>
-
-# Storm Compatibility Examples
-
-You can find more examples in the Maven module `flink-storm-examples`.
-For the different versions of WordCount, see [README.md](https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples/README.md).
-To run the examples, you need to assemble a correct jar file.
-`flink-storm-examples-{{ site.version }}.jar` is **not** a valid jar file for job execution (it is only a standard Maven artifact).
-
-There are example jars for embedded Spout and Bolt usage, namely `WordCount-SpoutSource.jar` and `WordCount-BoltTokenizer.jar`, respectively.
-Compare `pom.xml` to see how both jars are built.
-Furthermore, there is one example for whole Storm topologies (`WordCount-StormTopology.jar`).
-
-You can run each of those examples via `bin/flink run <jarname>.jar`. The correct entry point class is contained in each jar's manifest file.