Posted to users@kafka.apache.org by Edmondo Porcu <ed...@gmail.com> on 2018/05/22 09:25:51 UTC

Architectural patterns for full log replayability

Hello Kafka Users,

we'd like to understand how you design Kafka-based systems so that the
full log can be replayed.

In particular, let's take the following example:
- A product service streams products
- A purchase service streams purchases
- A recommendation service joins the two, determines which "special
offers" to apply to a product, and emits a recommendation for it
(sketched below)
- A special offer service consumes recommendations and updates the product
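
For concreteness, here is roughly what we have in mind for the
recommendation service, as a Kafka Streams topology. Product, Purchase,
Recommendation and computeSpecialOffer are placeholders for our real
domain types and logic, and serde configuration is omitted:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class RecommendationTopology {

    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // Products as a changelog table keyed by productId; purchases
        // as an event stream keyed the same way.
        KTable<String, Product> products = builder.table("products");
        KStream<String, Purchase> purchases = builder.stream("purchases");

        // Join each purchase against the current product state and emit
        // a special-offer recommendation for that product.
        purchases
            .join(products, (purchase, product) ->
                new Recommendation(product.id,
                    computeSpecialOffer(product, purchase)))
            .to("recommendations");

        return builder.build();
    }
}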

This all works very well. However, imagine that the product update
performed by the special offer service is only allowed when the product
is in state "VALID". When the special offer service updates the product
the first time, everything works fine.

Now imagine I reset all the consumer offsets and replay everything from
the beginning. When the special offer service receives a recommendation
for a product, the sales team has meanwhile marked that product as
"OBSOLETE", and the update of the product fails.
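
Concretely, by resetting the offsets I mean something like rewinding the
service's consumer group to the beginning of its input topic with the
standard tool (group and topic names follow the example above):

kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group special-offer-service \
  --topic recommendations \
  --reset-offsets --to-earliest --execute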

How are you tackling this sort of issue?

- Are you materializing products into a view in the special offer service
before performing the update, and filtering out events that are no longer
applicable (see the sketch after this list)?
- Are you making the product service fail in such a way that the special
offer service recognizes this specific error and handles it?
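
To make the first option concrete, this is roughly what we imagine: the
special offer service keeps its own materialized view of the products
topic and drops recommendations that no longer apply, instead of letting
the update fail. RecWithProduct is a hypothetical pair of
(recommendation, product), and the state check stands in for whatever the
real applicability rule would be:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class SpecialOfferTopology {

    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // Materialize the products topic locally.
        KTable<String, Product> products = builder.table("products");

        KStream<String, Recommendation> recommendations =
            builder.stream("recommendations");

        // Join each recommendation against the current product state,
        // drop the ones whose product is no longer VALID, and only then
        // emit the update.
        recommendations
            .join(products, RecWithProduct::new)
            .filter((productId, rp) -> "VALID".equals(rp.product.state))
            .mapValues(rp -> rp.recommendation)
            .to("product-updates");

        return builder.build();
    }
}

The obvious catch is that on replay the view itself is rebuilt from the
log, so the service sees the product state as of "now" rather than as of
the original event, which is exactly the ambiguity we are asking about.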

Thanks
Edmondo