Posted to users@kafka.apache.org by Johan Haleby <jo...@gmail.com> on 2015/09/14 08:19:57 UTC

How to model topics and partitions for Kafka when used to store all business events?

Hi,

We're considering using Kafka as a way to store all our business events
(effectively) forever. The purpose is to be able to spin up new
"microservices" that we haven't yet thought of, which would be able to
leverage all previous events to build up their projections/state.
Another use case might be an existing service where we'd like to "replay"
all events that are of interest to it in order to recreate its state.
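To make the intent concrete, here is roughly the configuration I have in
mind (the property keys are standard Kafka ones as I understand them from
the docs; the group id is just a made-up example, and I haven't tested this):

```properties
# Topic-level settings for the "keep forever" part:
# disable time- and size-based retention so events are never deleted.
retention.ms=-1
retention.bytes=-1

# Consumer settings for a new service that rebuilds its state by replaying:
# a fresh group id plus earliest reset means it starts from offset 0.
group.id=new-projection-service
auto.offset.reset=earliest
enable.auto.commit=false
```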

Note that we're not planning to use Kafka as an "event store" in the sense
that events will be projected/loaded into an aggregate on every request.

Also, as far as I can tell, we can't know in advance how consumers will
consume the events. A new microservice might need all sorts of different
events in order to create its internal projection/state.

   1. Would Kafka be suitable for this?
   2. If so, what's a good way to model this (topics/partitions)? For
   example, would it be alright to just use a single topic for all events?
   3. We're currently using RabbitMQ for messaging (business events are
   sent to RabbitMQ). It would be great if we could migrate away from RabbitMQ
   in the future and move entirely to Kafka. I assume that this could change
   the way topics and partitions are modelled, since by then we'd have a better
   understanding of how consumers consume the events. Would this be
   compatible with the other use case (infinite retention and replay)?
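Regarding question 2, my current understanding is that keying each event by
its aggregate id would keep per-aggregate ordering even with a single topic,
since records with the same key land on the same partition. A toy
illustration of that idea (this is not Kafka's actual murmur2-based
partitioner, just a stand-in hash to show the principle):

```java
public class KeyPartitioning {
    // Map a record key (e.g. an aggregate id) to a partition number.
    // Same key always yields the same partition, so all events for one
    // aggregate stay ordered relative to each other.
    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the result is non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 12;
        int p1 = partitionFor("order-42", partitions);
        int p2 = partitionFor("order-42", partitions);
        System.out.println(p1 == p2); // prints true
    }
}
```

Is that the right way to think about it, or does a single topic for all
event types cause problems I'm not seeing?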

Regards,
/Johan