Posted to commits@kafka.apache.org by gu...@apache.org on 2018/12/14 17:37:55 UTC
[kafka] branch trunk updated: MINOR: Replace tbd with the actual link for out-of-ordering data (#6035)
This is an automated email from the ASF dual-hosted git repository.
guozhang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git
The following commit(s) were added to refs/heads/trunk by this push:
new 5c549b2 MINOR: Replace tbd with the actual link for out-of-ordering data (#6035)
5c549b2 is described below
commit 5c549b2a89c2364eb966bd56f136a7891f0ffc9f
Author: Guozhang Wang <wa...@gmail.com>
AuthorDate: Fri Dec 14 09:37:43 2018 -0800
MINOR: Replace tbd with the actual link for out-of-ordering data (#6035)
Reviewers: Jason Gustafson <ja...@confluent.io>
---
docs/streams/core-concepts.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/streams/core-concepts.html b/docs/streams/core-concepts.html
index bffaa8a..c925c2e 100644
--- a/docs/streams/core-concepts.html
+++ b/docs/streams/core-concepts.html
@@ -224,7 +224,7 @@
<p>
Besides the guarantee that each record will be processed exactly-once, another issue that many stream processing application will face is how to
- handle <a href="tbd">out-of-order data</a> that may impact their business logic. In Kafka Streams, there are two causes that could potentially
+ handle <a href="https://www.confluent.io/wp-content/uploads/streams-tables-two-sides-same-coin.pdf">out-of-order data</a> that may impact their business logic. In Kafka Streams, there are two causes that could potentially
result in out-of-order data arrivals with respect to their timestamps:
</p>
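The paragraph this patch touches says out-of-order arrivals (with respect to record timestamps) can affect business logic. As a minimal illustration of that notion, separate from the commit itself, a record is out-of-order when its event timestamp is lower than the highest timestamp already observed on the stream. The class and method names below are hypothetical, not Kafka Streams API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: flag records whose event-time timestamp is smaller than the
// highest timestamp seen so far, i.e. records that arrive "late" relative
// to observed stream time.
public class OutOfOrderCheck {
    public static List<Long> outOfOrderTimestamps(long[] timestamps) {
        List<Long> late = new ArrayList<>();
        long maxSeen = Long.MIN_VALUE;
        for (long ts : timestamps) {
            if (ts < maxSeen) {
                late.add(ts);  // arrived after a newer record: out-of-order
            } else {
                maxSeen = ts;  // record advances observed stream time
            }
        }
        return late;
    }

    public static void main(String[] args) {
        // Timestamps 1, 3, 2, 5, 4: records 2 and 4 arrive out-of-order.
        System.out.println(outOfOrderTimestamps(new long[]{1, 3, 2, 5, 4}));
    }
}
```

In Kafka Streams itself, the two causes the linked paragraph goes on to describe (within-topic and across-topic/partition timestamp disorder) are handled by the library, not by hand-rolled checks like this one.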