Posted to users@kafka.apache.org by Joe San <co...@gmail.com> on 2016/07/28 06:40:52 UTC

Kafka Consumer as a ReactiveStream

Hi Kafka Users,


Cross-posted on Stack Overflow:
http://stackoverflow.com/questions/38628493/apache-kafka-consumer-as-a-stream#

I have the following logic in my mind and would like to know if this is a
good approach to designing a scalable consumer. I have a topic that has say
10 partitions and I have a consumer group that would consume from all these
partitions. In my consumer client, I have the following configuration

consumer {
  topicName = "someTopic"
  partitionCount = "10"
  groupId = "someGroupId"
}
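As a sketch, the config block above could be loaded with the Typesafe Config library (the standard choice in an Akka application). The `ConsumerSettings` case class and field handling here are my own illustration, not from the original post:

```scala
import com.typesafe.config.ConfigFactory

// Illustrative settings class mirroring the "consumer" block above.
final case class ConsumerSettings(topicName: String, partitionCount: Int, groupId: String)

object ConsumerSettings {
  def load(): ConsumerSettings = {
    val c = ConfigFactory.load().getConfig("consumer")
    ConsumerSettings(
      topicName      = c.getString("topicName"),
      // partitionCount is quoted as a string in the config, so parse it here
      partitionCount = c.getString("partitionCount").toInt,
      groupId        = c.getString("groupId")
    )
  }
}
```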

When I start my client, I read this configuration and pass it into an Akka
actor that I call the Supervisor actor. In this Supervisor actor, I iterate
over the partition count and create one child actor per partition. In each
child actor I create an Observable and an Observer. The Observable emits
events by reading messages from the given Kafka topic partition, which are
then observed by the corresponding Observer. I then take the messages
received in my Observer and write them to the final destination.
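A minimal sketch of that supervisor fan-out, using Akka classic actors and the plain Kafka consumer in place of the Observable/Observer pair (the actor names, bootstrap address, and the poll-loop-via-self-message pattern are my assumptions, not from the post):

```scala
import java.util.Properties
import akka.actor.{Actor, Props}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import scala.jdk.CollectionConverters._

class SupervisorActor(topic: String, partitionCount: Int, groupId: String) extends Actor {
  override def preStart(): Unit =
    // One dedicated child per partition, as described above.
    (0 until partitionCount).foreach { p =>
      context.actorOf(Props(new PartitionActor(topic, p, groupId)), s"partition-$p")
    }
  def receive: Receive = Actor.emptyBehavior
}

class PartitionActor(topic: String, partition: Int, groupId: String) extends Actor {
  private case object Poll

  private val consumer: KafkaConsumer[String, String] = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumed broker address
    props.put("group.id", groupId)
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    val c = new KafkaConsumer[String, String](props)
    // Pin this child to exactly one partition. Note: assign() is manual
    // assignment and bypasses consumer-group rebalancing entirely.
    c.assign(java.util.Collections.singletonList(new TopicPartition(topic, partition)))
    c
  }

  override def preStart(): Unit = self ! Poll

  def receive: Receive = {
    case Poll =>
      val records = consumer.poll(java.time.Duration.ofMillis(500))
      // Stand-in for writing to the final destination.
      records.asScala.foreach(r => println(r.value()))
      self ! Poll // keep polling
  }

  override def postStop(): Unit = consumer.close()
}
```

One design point this sketch makes explicit: binding each child to a fixed partition requires `assign()` rather than `subscribe()`, so the group coordinator is not managing partition assignment for you.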

Now when the partition count for the topic is increased, I just have to
increase the partition count in my consumer client configuration and restart
it. This would ensure that I have a dedicated Observable and Observer reading
from each newly created partition.

Is this a good design approach? What might be the pitfalls with this
approach? Any suggestions?