Posted to commits@camel.apache.org by ac...@apache.org on 2021/01/12 05:38:28 UTC

[camel-kafka-connector] branch master updated: Fixed idempotency images names

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector.git


The following commit(s) were added to refs/heads/master by this push:
     new 0b48c9b  Fixed idempotency images names
0b48c9b is described below

commit 0b48c9bf521661bd19fd7826bf12d5d09ef08505
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Tue Jan 12 06:38:09 2021 +0100

    Fixed idempotency images names
---
 docs/modules/ROOT/pages/idempotency.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/modules/ROOT/pages/idempotency.adoc b/docs/modules/ROOT/pages/idempotency.adoc
index 24cbc1e..d87cf3e 100644
--- a/docs/modules/ROOT/pages/idempotency.adoc
+++ b/docs/modules/ROOT/pages/idempotency.adoc
@@ -16,13 +16,13 @@ Suppose you're using a source connector of any kind. By using the idempotency fe
 
 This means, in the Kafkish language, you won't ingest the same payload multiple times in the target Kafka topic. 
 
-image::ckc-idempontency-source.png[image]
+image::ckc-idempotency-source.png[image]
 
 In the sink scenario, we'll stream out of a Kafka topic multiple records, transform/convert/manipulate them and send them to an external system, like a messaging broker, a storage infra, a database etc.
 
 In the Kafka topic used as source we may have multiple repeated records with the same payload or same metadata. Based on this information we can choose to skip the same records while sending data to the external system and for doing this we can leverage the idempotency feature of ckc.
 
-image::ckc-idempontency-sink.png[image]
+image::ckc-idempotency-sink.png[image]
 
 == Camel-Kafka-connector idempotency configuration
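The patched page documents the camel-kafka-connector (ckc) idempotency feature referenced by the two corrected image names. As a rough illustration of what the configuration section introduced by that heading covers, a connector properties sketch might look like the following. This is a hedged example, not part of the commit: the property names (`camel.idempotency.*`) and values are assumptions based on the camel-kafka-connector idempotency documentation and should be checked against the docs for the release in use.

```
# Hypothetical sketch of enabling idempotency on a camel-kafka-connector
# instance (property names are assumptions; verify against the ckc docs).

# Turn the idempotency feature on for this connector.
camel.idempotency.enabled=true

# Deduplicate on the record payload (alternatively, a header/metadata field
# could be used, matching the "same payload or same metadata" cases the
# docs describe).
camel.idempotency.expression.type=body

# Back the idempotent repository in memory; other repository types may be
# available depending on the ckc version.
camel.idempotency.repository.type=memory
```

With such a configuration, repeated records carrying an already-seen payload would be skipped rather than re-delivered, which is the behavior the source and sink diagrams (`ckc-idempotency-source.png`, `ckc-idempotency-sink.png`) illustrate.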