Posted to users@kafka.apache.org by Steve Morin <st...@stevemorin.com> on 2014/08/01 16:48:14 UTC

Re: Consume more than produce

You have to remember that statsd uses UDP, which is potentially lossy; that
alone might account for the discrepancy.
-Steve
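
To illustrate Steve's point, here is a minimal sketch of a statsd-style counter over UDP (the host, port, and metric names are hypothetical, and this is not the actual statsd client library). The key property is that `sendto()` is fire-and-forget: it returns successfully even if the datagram is later dropped on the network, so the sender has no way to know a count was lost.

```python
import socket

def statsd_counter_payload(metric, value=1):
    # statsd counter wire format: "<metric>:<value>|c"
    return f"{metric}:{value}|c".encode("ascii")

def statsd_incr(metric, value=1, host="127.0.0.1", port=8125):
    # UDP is fire-and-forget: sendto() succeeds locally even if the
    # datagram is silently dropped in transit, so the recorded count
    # can under-report the true event count.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(statsd_counter_payload(metric, value), (host, port))

statsd_incr("producer.events")  # no ack, no retry; loss is invisible
```

Because the producer-side and consumer-side counters can each lose different packets, a small skew in either direction is expected from the measurement method itself.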


On Fri, Aug 1, 2014 at 1:28 AM, Guy Doulberg <Gu...@perion.com>
wrote:

> Hey,
>
>
> After a year or so I have Kafka as my streaming layer in my production, I
> decided it is time to audit, and to test how many events do I lose, if I
> lose events at all.
>
>
> I discovered something interesting which I can't explain.
>
>
> The producer produces fewer events than the consumer group consumes.
>
>
> The difference is small, about 0.1% more events on the consumer side.
>
>
> I use the Consumer API (not the simple consumer API)
>
>
> I was thinking I might have had rebalancing going on in my system, but it
> doesn't look like that.
>
>
> Did anyone see such a behaviour?
>
>
> In order to audit, I calculated for each event the minute it arrived and
> assigned this value to the event. I used statsd to count all events from
> all my producer cluster and all my consumer group cluster.
>
>
> I must say that it is not happening every minute,
>
>
> Thanks, Guy
>
>
>
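
The minute-bucketing audit Guy describes could be sketched along these lines (the timestamps and the comparison loop are illustrative, not his actual code): bucket each event by its arrival minute, count per bucket on both the producer and consumer side, then diff the two series minute by minute.

```python
from collections import Counter
from datetime import datetime, timezone

def minute_bucket(ts):
    # Truncate an epoch timestamp to its arrival minute (UTC),
    # e.g. "2014-08-01 14:48".
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M")

def audit_counts(event_timestamps):
    # Count events per arrival minute. Run the same tally on the
    # producer side and on the consumer side, then compare.
    return Counter(minute_bucket(ts) for ts in event_timestamps)

# Hypothetical timestamps: the consumer sees one duplicate in 14:48.
produced = audit_counts([1406904494.0, 1406904495.5, 1406904560.0])
consumed = audit_counts([1406904494.0, 1406904494.0, 1406904495.5, 1406904560.0])

for minute in sorted(set(produced) | set(consumed)):
    diff = consumed[minute] - produced[minute]
    if diff:
        print(minute, "consumer surplus:", diff)
```

A persistent consumer surplus in such a diff would point at duplicate delivery (e.g. reprocessing after an offset reset) rather than loss, whereas a producer surplus would suggest dropped counter packets or genuinely lost events.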