Posted to commits@kafka.apache.org by mj...@apache.org on 2019/08/03 20:52:19 UTC

[kafka] branch trunk updated: MINOR: Fix typo in docs (#7158)

This is an automated email from the ASF dual-hosted git repository.

mjsax pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new c76f565  MINOR: Fix typo in docs (#7158)
c76f565 is described below

commit c76f5651fa1b406d8f7a88b20e435ad4f14c4797
Author: Victoria Bialas <lo...@users.noreply.github.com>
AuthorDate: Sat Aug 3 13:51:43 2019 -0700

    MINOR: Fix typo in docs (#7158)
    
    Reviewer: Matthias J. Sax <ma...@confluent.io>
---
 docs/uses.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/uses.html b/docs/uses.html
index 945b896..09bc45f 100644
--- a/docs/uses.html
+++ b/docs/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
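The "normalize or deduplicate" stage described in the hunk above can be sketched independent of Kafka itself. This is a minimal illustration of that stage's logic only, not of the Kafka Streams API; the function names, the dict shape (`url`/`content` keys), and the choice of URL as the dedup key are all assumptions made for the example.

```python
def normalize(article):
    """Cleanse one article record (illustrative normalization:
    trim the URL and collapse runs of whitespace in the content)."""
    return {
        "url": article["url"].strip(),
        "content": " ".join(article["content"].split()),
    }

def deduplicate(articles):
    """Keep the first occurrence of each URL, preserving order.
    In a real pipeline this state would live in a store keyed by
    article URL rather than an in-memory set."""
    seen = set()
    out = []
    for a in articles:
        if a["url"] not in seen:
            seen.add(a["url"])
            out.append(a)
    return out

# A stream processor would apply these per record between the
# "articles" topic and the cleansed output topic.
raw = [
    {"url": " https://example.com/a ", "content": "Breaking\n news"},
    {"url": "https://example.com/a",   "content": "Breaking news"},
]
cleansed = deduplicate([normalize(a) for a in raw])
```

In the actual pipeline, each stage consumes from one topic and publishes to the next, so `cleansed` here stands in for the records written to the new topic.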