Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/05/27 03:25:46 UTC

[GitHub] [flink-web] MarkSfik commented on a change in pull request #339: [blog] flink on zeppelin

MarkSfik commented on a change in pull request #339:
URL: https://github.com/apache/flink-web/pull/339#discussion_r429982337



##########
File path: _posts/2020-05-25-flink-on-zeppelin.md
##########
@@ -0,0 +1,83 @@
+---
+layout: post
+title:  "Flink on Zeppelin Notebooks for Interactive Data Analysis"
+date:   2020-05-25T08:00:00.000Z
+categories: news
+authors:
+- zjffdu:
+  name: "Jeff Zhang"
+  twitter: "zjffdu"
+---
+
+The latest release of Apache Zeppelin comes with a redesigned interpreter for Apache Flink (version Flink 1.10+ is only supported moving forward) 

Review comment:
       ```suggestion
   The latest release of [Apache Zeppelin](https://zeppelin.apache.org/) comes with a redesigned interpreter for Apache Flink (version Flink 1.10+ is only supported moving forward) 
   ```

##########
File path: _posts/2020-05-25-flink-on-zeppelin.md
##########
@@ -0,0 +1,83 @@
+The latest release of Apache Zeppelin comes with a redesigned interpreter for Apache Flink (version Flink 1.10+ is only supported moving forward) 
+that allows developers and data engineers to use Flink directly on Zeppelin notebooks for interactive data analysis. In this post, we explain how the Flink interpreter in Zeppelin works, 
+and provide a tutorial for running Streaming ETL with Flink on Zeppelin.
+
+# The Flink Interpreter in Zeppelin 0.9
+
+The Flink interpreter can be accessed and configured from Zeppelin’s interpreter settings page. 
+The interpreter has been refactored so that Flink users can now take advantage of Zeppelin to write Flink applications in three languages, 
+namely Scala, Python (PyFlink) and SQL (for both batch & streaming executions). 
+Zeppelin 0.9 now comes with the Flink interpreter group, consisting of the below five interpreters: 
+
+* %flink     - Provides a Scala environment
+* %flink.pyflink   - Provides a python environment
+* %flink.ipyflink   - Provides an ipython environment
+* %flink.bsql     - Provides a batch sql environment
+* %flink.ssql     - Provides a stream sql environment
+
+Not only has the interpreter been extended to support writing Flink applications in three languages, but it has also extended the available execution modes for Flink that now include:
+* Running Flink in Local Mode
+* Running Flink in Remote Mode
+* Running Flink in Yarn Mode
+
+
+You can find more information about how to get started with Zeppelin and all the execution modes for Flink applications in Zeppelin notebooks in this post. 
+
+
+# Flink on Zeppelin for Stream processing
+
+Performing stream processing jobs with Apache Flink on Zeppelin allows you to run most major streaming cases, 
+such as streaming ETL and real time data analytics, with the use of Flink SQL and specific UDFs. 
+Below we showcase how you can execute streaming ETL using Flink on Zeppelin: 
+
+You can use Flink SQL to perform streaming ETL by following the steps below 
+(for the full tutorial, please refer to the Flink Tutorial/Streaming ETL tutorial of the Zeppelin distribution):

Review comment:
       I think there is a link missing here. 
   
   Maybe you can add the link to the Flink Tutorial/Streaming ETL page so that users can go directly there from the post?
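
For anyone reading this in the plain text archive: the quoted section above only names the `%flink.*` interpreters. As a rough sketch of how the two SQL interpreters are used, each `%flink.*` directive below starts its own notebook paragraph; the `access_logs` table and its columns are illustrative placeholders, not taken from the post:

```sql
%flink.bsql
-- batch sql environment: a one-off query over a table that is already registered
SELECT status, COUNT(*) AS cnt
FROM access_logs
GROUP BY status;

%flink.ssql
-- stream sql environment: the same query as a continuously updating result
SELECT status, COUNT(*) AS cnt
FROM access_logs
GROUP BY status;
```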

##########
File path: _posts/2020-05-25-flink-on-zeppelin.md
##########
@@ -0,0 +1,83 @@
+* Step 1. Create source table to represent the source data.
+
+<center>
+<img src="{{ site.baseurl }}/img/blog/2020-05-25-flink-on-zeppelin/create_source.png" width="80%" alt="Create Source Table"/>
+</center>
+
+* Step 2. Create a sink table to represent the processed data.
+
+<center>
+<img src="{{ site.baseurl }}/img/blog/2020-05-25-flink-on-zeppelin/create_sink.png" width="80%" alt="Create Sink Table"/>
+</center>
+
+* Step 3. After creating the source and sink table, we can use insert them to our statement to trigger the streaming processing job as the following: 

Review comment:
       ```suggestion
   * Step 3. After creating the source and sink table, we can insert them to our statement to trigger the stream processing job as the following: 
   ```
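
Since the screenshots referenced in the quoted steps are not visible in this plain text view, here is a rough sketch of what the SQL behind those steps could look like in a `%flink.ssql` paragraph. All table names, columns, topics, and connector options are illustrative placeholders rather than the SQL from the post, and the exact `WITH` options depend on the Flink version and the connectors on the classpath (the style below follows newer Flink releases):

```sql
%flink.ssql
-- Step 1 (sketch): a source table over a stream of raw events
CREATE TABLE source_table (
  id BIGINT,
  event_time TIMESTAMP(3),
  status STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'raw_events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- Step 2 (sketch): a sink table for the cleaned records
CREATE TABLE sink_table (
  id BIGINT,
  event_time TIMESTAMP(3),
  status STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'clean_events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- Step 3 (sketch): the INSERT INTO statement that triggers the streaming job
INSERT INTO sink_table
SELECT id, event_time, status
FROM source_table
WHERE status IS NOT NULL;
```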

##########
File path: _posts/2020-05-25-flink-on-zeppelin.md
##########
@@ -0,0 +1,83 @@
+* Step 3. After creating the source and sink table, we can use insert them to our statement to trigger the streaming processing job as the following: 
+
+<center>
+<img src="{{ site.baseurl }}/img/blog/2020-05-25-flink-on-zeppelin/etl.png" width="80%" alt="ETL"/>
+</center>
+
+* Step 4. After initiating the streaming job, you can use another SQL statement to query the sink table to verify your streaming job. Here you can see the top 10 records which will be refreshed every 3 seconds.

Review comment:
       ```suggestion
   * Step 4. After initiating the streaming job, you can use another SQL statement to query the sink table to verify the results of your job. Here you can see the top 10 records which will be refreshed every 3 seconds.
   ```
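
For the same reason, a sketch of the kind of verification paragraph Step 4 describes. The paragraph properties (`type=update`, `refreshInterval` in milliseconds) follow Zeppelin's Flink interpreter documentation as a best guess and should be double-checked against the version you run; the query itself is illustrative:

```sql
%flink.ssql(type=update, refreshInterval=3000)
-- Step 4 (sketch): keep querying the sink table to verify the streaming job;
-- the result shown in the notebook is refreshed every 3 seconds
SELECT status, COUNT(*) AS cnt
FROM sink_table
GROUP BY status;
```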

##########
File path: _posts/2020-05-25-flink-on-zeppelin.md
##########
@@ -0,0 +1,83 @@
+* Step 4. After initiating the streaming job, you can use another SQL statement to query the sink table to verify your streaming job. Here you can see the top 10 records which will be refreshed every 3 seconds.
+
+<center>
+<img src="{{ site.baseurl }}/img/blog/2020-05-25-flink-on-zeppelin/preview.png" width="80%" alt="Preview"/>
+</center>
+
+# Summary
+
+In this post, we explained how the redesigned Flink interpreter works in Zeppelin 0.9.0 and provided some examples for performing streaming ETL jobs with 
+Flink and Zeppelin. You can find additional tutorial for batch processing with Flink on Zeppelin as well as using Flink on Zeppelin for 

Review comment:
       ```suggestion
   Flink and Zeppelin. You can find an additional tutorial for batch processing with Flink on Zeppelin as well as using Flink on Zeppelin for 
   ```

##########
File path: _posts/2020-05-25-flink-on-zeppelin.md
##########
@@ -0,0 +1,83 @@
+In this post, we explained how the redesigned Flink interpreter works in Zeppelin 0.9.0 and provided some examples for performing streaming ETL jobs with 
+Flink and Zeppelin. You can find additional tutorial for batch processing with Flink on Zeppelin as well as using Flink on Zeppelin for 

Review comment:
       ```suggestion
   Flink and Zeppelin. You can find an additional [tutorial for batch processing with Flink on Zeppelin](https://medium.com/@zjffdu/flink-on-zeppelin-part-2-batch-711731df5ad9) as well as using Flink on Zeppelin for 
   ```

##########
File path: _posts/2020-05-25-flink-on-zeppelin.md
##########
@@ -0,0 +1,83 @@
+Flink and Zeppelin. You can find additional tutorial for batch processing with Flink on Zeppelin as well as using Flink on Zeppelin for 
+more advance operations like resource isolation, job concurrency & parallelism, multiple Hadoop & Hive environments and more on our series of post on Medium.

Review comment:
       ```suggestion
   more advance operations like resource isolation, job concurrency & parallelism, multiple Hadoop & Hive environments and more on our series of posts on [Medium](https://medium.com/@zjffdu/flink-on-zeppelin-part-4-advanced-usage-998b74908cd9).
   ```

##########
File path: _posts/2020-05-25-flink-on-zeppelin.md
##########
@@ -0,0 +1,83 @@
+Flink and Zeppelin. You can find additional tutorial for batch processing with Flink on Zeppelin as well as using Flink on Zeppelin for 
+more advance operations like resource isolation, job concurrency & parallelism, multiple Hadoop & Hive environments and more on our series of post on Medium.

Review comment:
       ```suggestion
   more advanced operations like resource isolation, job concurrency & parallelism, multiple Hadoop & Hive environments and more on our series of posts on [Medium](https://medium.com/@zjffdu/flink-on-zeppelin-part-4-advanced-usage-998b74908cd9).
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org