Posted to commits@flink.apache.org by ch...@apache.org on 2021/02/08 02:22:01 UTC

[flink] branch master updated: [hotfix][docs] Fix typo

This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new 4862e10  [hotfix][docs] Fix typo
4862e10 is described below

commit 4862e10979fbeb9348daed1b278b3a33a1ffc359
Author: Svend Vanderveken <12...@users.noreply.github.com>
AuthorDate: Mon Feb 8 03:21:33 2021 +0100

    [hotfix][docs] Fix typo
---
 docs/dev/connectors/kafka.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/dev/connectors/kafka.md b/docs/dev/connectors/kafka.md
index e03aeb5..c9269c2 100644
--- a/docs/dev/connectors/kafka.md
+++ b/docs/dev/connectors/kafka.md
@@ -433,7 +433,7 @@ The Flink Kafka Producer needs to know how to turn Java/Scala objects into binar
 The `KafkaSerializationSchema` allows users to specify such a schema.
 The `ProducerRecord<byte[], byte[]> serialize(T element, @Nullable Long timestamp)` method gets called for each record, generating a `ProducerRecord` that is written to Kafka.
 
-The gives users fine-grained control over how data is written out to Kafka. 
+This gives users fine-grained control over how data is written out to Kafka. 
 Through the producer record you can:
 * Set header values
 * Define keys for each record
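
The quoted docs describe the `KafkaSerializationSchema` contract: `serialize(T element, @Nullable Long timestamp)` returns a `ProducerRecord<byte[], byte[]>` whose key, value, and headers the implementor controls. As a rough illustration of that control using only the JDK, the sketch below uses a hypothetical `StubProducerRecord` class standing in for Kafka's real `org.apache.kafka.clients.producer.ProducerRecord`; the event type, topic name, and header key are all invented for the example.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Kafka's ProducerRecord<byte[], byte[]>:
// just enough structure (topic, key, value, headers) for the sketch.
class StubProducerRecord {
    final String topic;
    final byte[] key;
    final byte[] value;
    final Map<String, byte[]> headers = new HashMap<>();

    StubProducerRecord(String topic, byte[] key, byte[] value) {
        this.topic = topic;
        this.key = key;
        this.value = value;
    }
}

// Sketch of serialize() logic in the style the docs describe: the
// implementor decides the record's key, value, and header values.
class EventSerializer {
    private final String topic;

    EventSerializer(String topic) {
        this.topic = topic;
    }

    StubProducerRecord serialize(String userId, String payload, Long timestamp) {
        StubProducerRecord record = new StubProducerRecord(
                topic,
                userId.getBytes(StandardCharsets.UTF_8),    // key controls partitioning
                payload.getBytes(StandardCharsets.UTF_8));  // value is the event body
        if (timestamp != null) {
            // Header values are also under the implementor's control.
            record.headers.put("event-time",
                    timestamp.toString().getBytes(StandardCharsets.UTF_8));
        }
        return record;
    }
}
```

In the real connector, the returned `ProducerRecord` is handed to the Kafka producer as-is, which is why this hook gives the fine-grained control the docs mention.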