Posted to commits@flink.apache.org by di...@apache.org on 2021/05/07 06:08:10 UTC

[flink] branch release-1.13 updated: [hotfix][docs][python] Fix the example in intro_to_datastream_api.md

This is an automated email from the ASF dual-hosted git repository.

dianfu pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
     new 36ba407  [hotfix][docs][python] Fix the example in intro_to_datastream_api.md
36ba407 is described below

commit 36ba407f119fdf7271f9380fe5d779c0163ddd63
Author: Dian Fu <di...@apache.org>
AuthorDate: Fri May 7 14:07:16 2021 +0800

    [hotfix][docs][python] Fix the example in intro_to_datastream_api.md
---
 .../docs/dev/python/datastream/intro_to_datastream_api.md          | 7 ++++---
 docs/content/docs/dev/python/datastream/intro_to_datastream_api.md | 7 ++++---
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/docs/content.zh/docs/dev/python/datastream/intro_to_datastream_api.md b/docs/content.zh/docs/dev/python/datastream/intro_to_datastream_api.md
index 93bc6ee..d3cd112 100644
--- a/docs/content.zh/docs/dev/python/datastream/intro_to_datastream_api.md
+++ b/docs/content.zh/docs/dev/python/datastream/intro_to_datastream_api.md
@@ -158,6 +158,10 @@ from pyflink.common.typeinfo import Types
 from pyflink.datastream import StreamExecutionEnvironment
 from pyflink.datastream.connectors import FlinkKafkaConsumer
 
+env = StreamExecutionEnvironment.get_execution_environment()
+# the sql connector for kafka is used here as it's a fat jar and could avoid dependency issues
+env.add_jars("file:///path/to/flink-sql-connector-kafka.jar")
+
 deserialization_schema = JsonRowDeserializationSchema.builder() \
     .type_info(type_info=Types.ROW([Types.INT(), Types.STRING()])).build()
 
@@ -166,9 +170,6 @@ kafka_consumer = FlinkKafkaConsumer(
     deserialization_schema=deserialization_schema,
     properties={'bootstrap.servers': 'localhost:9092', 'group.id': 'test_group'})
 
-env = StreamExecutionEnvironment.get_execution_environment()
-# the sql connector for kafka is used here as it's a fat jar and could avoid dependency issues
-env.add_jars("file:///path/to/flink-sql-connector-kafka.jar")
 ds = env.add_source(kafka_consumer)
 ```
 
diff --git a/docs/content/docs/dev/python/datastream/intro_to_datastream_api.md b/docs/content/docs/dev/python/datastream/intro_to_datastream_api.md
index 469c8b8..c6831a2 100644
--- a/docs/content/docs/dev/python/datastream/intro_to_datastream_api.md
+++ b/docs/content/docs/dev/python/datastream/intro_to_datastream_api.md
@@ -158,6 +158,10 @@ from pyflink.common.typeinfo import Types
 from pyflink.datastream import StreamExecutionEnvironment
 from pyflink.datastream.connectors import FlinkKafkaConsumer
 
+env = StreamExecutionEnvironment.get_execution_environment()
+# the sql connector for kafka is used here as it's a fat jar and could avoid dependency issues
+env.add_jars("file:///path/to/flink-sql-connector-kafka.jar")
+
 deserialization_schema = JsonRowDeserializationSchema.builder() \
     .type_info(type_info=Types.ROW([Types.INT(), Types.STRING()])).build()
 
@@ -166,9 +170,6 @@ kafka_consumer = FlinkKafkaConsumer(
     deserialization_schema=deserialization_schema,
     properties={'bootstrap.servers': 'localhost:9092', 'group.id': 'test_group'})
 
-env = StreamExecutionEnvironment.get_execution_environment()
-# the sql connector for kafka is used here as it's a fat jar and could avoid dependency issues
-env.add_jars("file:///path/to/flink-sql-connector-kafka.jar")
 ds = env.add_source(kafka_consumer)
 ```
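For reference, the corrected example from the docs page, assembled from the hunks above, reads as a whole roughly like this. The `JsonRowDeserializationSchema` import and the `topics` argument are not visible in the diff context, so they are filled in here as plausible placeholders, and the jar path is likewise a placeholder to adjust for your environment:

```python
from pyflink.common.serialization import JsonRowDeserializationSchema
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors import FlinkKafkaConsumer

env = StreamExecutionEnvironment.get_execution_environment()
# the sql connector for kafka is used here as it's a fat jar and could avoid dependency issues
env.add_jars("file:///path/to/flink-sql-connector-kafka.jar")

deserialization_schema = JsonRowDeserializationSchema.builder() \
    .type_info(type_info=Types.ROW([Types.INT(), Types.STRING()])).build()

kafka_consumer = FlinkKafkaConsumer(
    topics='test_source_topic',  # placeholder topic name, not shown in the diff
    deserialization_schema=deserialization_schema,
    properties={'bootstrap.servers': 'localhost:9092', 'group.id': 'test_group'})

ds = env.add_source(kafka_consumer)
```

The reorder in this hotfix matters because `env.add_jars()` has to run before `FlinkKafkaConsumer` is instantiated: constructing the consumer already needs the connector classes on the classpath, so the original ordering would fail at the consumer-creation step.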